ChatGPT adding direct links to cloud storage will make it a lot easier to have AI, in effect, trained on one’s own data and projects.
This seems like a direct alternative to Google’s NotebookLM. If I read this correctly, instead of having to use a web-based tool to create a workspace and load in relevant documents, with ChatGPT one can simply create a cloud folder, stuff it full of everything for a particular project, and then issue prompts to work with that?
This is a really interesting shift — connecting cloud storage directly to LLMs definitely lowers the friction for using AI with your own data. That said, the tradeoff is still very cloud-centric. For people working with sensitive material (legal, research, health, etc.), uploading documents to third-party clouds — even via Dropbox or OneDrive — may still feel risky.
I’ve been building a Mac app called Elephas that takes a different angle: local-first semantic search and chat over your own files (PDFs, notes, YouTube transcripts, etc.). You can use your own OpenAI or Claude key, or even run fully offline with smaller local models.
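To make “local-first” concrete, here’s a toy sketch of the general retrieval pattern, purely my illustration and not Elephas’s actual code (every name below is made up for the example): chunk embeddings are stored on disk and search happens entirely on the machine, so at most the handful of best-matching chunks ever get sent to a remote model, and only if you’ve configured one.

    import Foundation

    // Toy illustration of local-first retrieval (not Elephas's actual code).
    // Embedding vectors are computed once, by whatever model the user has
    // configured, and stored locally; searching never uploads the corpus.
    struct IndexedChunk {
        let text: String
        let vector: [Double]  // embedding, stored on disk alongside the file
    }

    func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
        var dot = 0.0, normA = 0.0, normB = 0.0
        for (x, y) in zip(a, b) {
            dot += x * y
            normA += x * x
            normB += y * y
        }
        let denom = (normA * normB).squareRoot()
        return denom > 0 ? dot / denom : 0
    }

    // Rank stored chunks against the query's embedding; only the winners are
    // sent to the chosen model (local or remote) as context for the answer.
    func topChunks(for queryVector: [Double],
                   in index: [IndexedChunk],
                   limit: Int = 5) -> [IndexedChunk] {
        index
            .map { (chunk: $0, score: cosineSimilarity($0.vector, queryVector)) }
            .sorted { $0.score > $1.score }
            .prefix(limit)
            .map { $0.chunk }
    }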
It’s not trying to replace ChatGPT’s convenience, but for people who care more about privacy and working within their own file system, it’s an alternative worth exploring. Curious to see whether the cloud-first vs. local-first divide grows sharper over time. What do you think?
FYI: this is my first time here, so excuse me if I’ve misunderstood the tone or norms. Just wanted to share a perspective from something I’ve been building.
The increased use of cloud-based repositories for personal data to be processed by AI feeds right into Apple’s hands.
Lost in the AI bungle of last year was Apple’s innovative architecture for blending on-device and cloud-based AI, using what Apple marketing calls PCC: Private Cloud Compute.
A lot of the details were never explained (some pundits posit that Apple was/is building custom data center GPU silicon as an alternative to buying tens of millions of dollars’ worth of Nvidia data center GPUs, or to using racks of Mac Studio-class servers).
But the architecture has the potential to deliver a privacy-first cloud AI solution that places Apple at the forefront of the personal data stores used by cloud-based AI.
The missing piece, not just for PCC but for Apple’s AI strategy in general, is to open up Apple’s AI efforts to developers with an API.
If Apple announces at WWDC an API giving developers full, or at least extensive, access to Apple’s on-device models, and even to PCC cloud models, it could be significant, and more interesting than just claiming they are (finally) fixing Siri.
True, public developer access for now is to on-device models only, not to Apple’s Private Cloud Compute or to third-party cloud-based foundation models. Still, it’s a much-needed step in the right direction.
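For the curious, that on-device path looks roughly like this with the Foundation Models framework Apple announced for developers; this is a minimal sketch based on my reading of the published API, so treat the exact names as approximate rather than authoritative.

    import FoundationModels

    // Minimal sketch of on-device access via Apple's Foundation Models
    // framework; names follow the published API as I understand it.
    enum OnDeviceAIError: Error { case modelUnavailable }

    func summarize(_ note: String) async throws -> String {
        // Availability depends on hardware and Apple Intelligence being enabled.
        guard case .available = SystemLanguageModel.default.availability else {
            throw OnDeviceAIError.modelUnavailable
        }
        let session = LanguageModelSession()   // runs on device only, no PCC
        let response = try await session.respond(to: "Summarize briefly: \(note)")
        return response.content
    }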
It’s reasonable to assume Apple needs time for the API to be exercised by developers before they can extend it to PCC. Going off-device to the cloud adds use cases and probably changes the sync/async paradigm used for on-device callbacks, to accommodate delay, outages, and other factors that come with connecting to a remote compute cloud.
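To illustrate that paradigm shift: a remote call has to be raced against a deadline and survive outages in ways on-device callbacks never do. A purely hypothetical sketch using structured concurrency (none of these names are real Apple API):

    import Foundation

    // Hypothetical only: illustrates the extra failure modes (latency,
    // outages) a PCC-style remote call would add over on-device callbacks.
    enum RemoteModelError: Error { case timedOut }

    func respondWithTimeout(
        prompt: String,
        seconds: UInt64,
        remoteCall: @escaping @Sendable (String) async throws -> String
    ) async throws -> String {
        try await withThrowingTaskGroup(of: String.self) { group in
            group.addTask { try await remoteCall(prompt) }   // the real work
            group.addTask {                                  // the deadline
                try await Task.sleep(nanoseconds: seconds * 1_000_000_000)
                throw RemoteModelError.timedOut
            }
            // First child to finish wins; cancel the loser.
            guard let result = try await group.next() else {
                throw RemoteModelError.timedOut
            }
            group.cancelAll()
            return result
        }
    }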