ChatGPT adding links to cloud file storage - game changing?

With ChatGPT adding direct links to cloud storage, it becomes a lot easier to have AI, in effect, grounded in one’s own data and projects.

This seems like a direct alternative to Google’s NotebookLM. Instead of having to use a web-based tool to create a workspace and load in relevant documents, if I read this correctly, one can simply create a cloud folder, stuff it full of everything for a particular project, and then issue ChatGPT prompts against that folder?

1 Like

You can also give Claude direct access to Google Drive, Calendar, and Gmail in any of the paid plans—i.e., it’s not limited to enterprise tiers.

You don’t need NotebookLM to give Gemini access to your Google Drive files—you can do that right from within Google Drive itself.

2 Likes

Good to know.

Haven’t felt the need to pay for more than one service right now. They keep leapfrogging each other very quickly.

I hope that the MCP protocol is broadly adopted and that we’ll soon be able to hook AI into whatever we want or need to.

1 Like

You can connect Cursor or Claude Desktop to Zapier MCP now. That lets you use AI with the 8,000 apps that work with Zapier.
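For anyone wanting to try this: Claude Desktop picks up MCP servers from its `claude_desktop_config.json` file. A minimal sketch of what such an entry looks like, assuming the `mcp-remote` bridge package and a placeholder endpoint (Zapier issues you a personal MCP URL; the one below is not real):

```json
{
  "mcpServers": {
    "zapier": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://example.invalid/your-personal-zapier-mcp-endpoint"
      ]
    }
  }
}
```

After restarting Claude Desktop, the Zapier tools should appear in the chat interface. Check Zapier’s and Anthropic’s current docs for the exact setup steps, since this connector flow has been evolving quickly.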

2 Likes

This is a really interesting shift — connecting cloud storage directly to LLMs definitely lowers the friction for using AI with your own data. That said, the tradeoff is still very cloud-centric. For people working with sensitive material (legal, research, health, etc.), uploading documents to third-party clouds — even via Dropbox or OneDrive — may still feel risky.

I’ve been building a Mac app called Elephas that takes a different angle: local-first, semantic search and chat over your own files (PDFs, notes, YouTube transcripts, etc.). You can use your own OpenAI or Claude key, or even run fully offline with smaller local models.

It’s not trying to replace GPT or ChatGPT’s convenience, but for people who care more about privacy and working within their own file system, it’s an alternative worth exploring. Curious to see if the cloud-first vs local-first divide grows sharper over time. What do you think?

FYI — this is my first time here, so excuse me if I’ve misunderstood the tone or norms. Just wanted to share a perspective from something I have been building.

5 Likes

I am very concerned about privacy. I don’t give access to my data (as far as I can tell), and I don’t include sensitive information in my prompts.

The increased use of cloud-based repositories for personal data to be processed by AI plays right into Apple’s hands.

Lost in the AI bungle of last year was Apple’s innovative architecture for blending on-device and cloud-based AI, using what Apple marketing calls PCC: Private Cloud Compute.

A lot of the details were never explained. (Some pundits posit that Apple was/is building custom data center GPU silicon as an alternative to buying tens of millions of dollars’ worth of Nvidia data center GPUs, or to using racks of Mac Studio-class servers.)

But the architecture has the potential to deliver a privacy-first, cloud-based AI solution that places Apple at the forefront of personal data stores used in cloud-based AI.

The missing piece, not just for PCC but for Apple’s AI strategy in general, is to open up Apple AI efforts to developers with an API.

If Apple announces at WWDC a developer AI interface that gives developers full, or at least extensive, access to Apple’s on-device AI models, and even to PCC cloud models, it could be significant, and more interesting than just claiming they are (finally) fixing Siri.

With their new Foundation Models Framework API they did just that:

Yup. But no real magic 8 ball on my end; this is probably the safest WWDC prediction.

Disappointed, but not surprised, they didn’t open it up to PCC also. That’s probably next year?

True, public developer access is for now limited to on-device models only, not to Apple’s Private Cloud Compute or to third-party cloud-based foundation models. Still, it’s a much-needed step in the right direction.
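For anyone curious what that on-device access looks like in practice, here’s a rough sketch of a call into Apple’s Foundation Models framework as announced at WWDC. The type and method names follow Apple’s published API, but verify them against the current SDK documentation before relying on them:

```swift
import Foundation
import FoundationModels

// Sketch: asking the on-device system model to summarize some text.
// Requires an Apple Intelligence-capable device and a recent SDK.
func summarize(_ text: String) async throws -> String {
    // The system model may be unavailable (Apple Intelligence disabled,
    // unsupported hardware, or the model is still downloading).
    guard case .available = SystemLanguageModel.default.availability else {
        throw NSError(domain: "ModelUnavailable", code: 1)
    }

    // A session holds conversation state; instructions steer behavior.
    let session = LanguageModelSession(
        instructions: "Summarize the user's text in two sentences."
    )
    let response = try await session.respond(to: text)
    return response.content
}
```

Everything runs on-device, which is exactly why extending the same call pattern out to PCC would be such a natural (and interesting) next step.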

That’s probably next year?

I do hope so.

It’s reasonable to assume Apple needs time for the API to be exercised by developers before they can extend it to PCC. Going off-device to the cloud adds use cases and probably changes the sync/async paradigm used for on-device callbacks, to accommodate latency, outages, and other factors when connecting to a remote compute cloud.

1 Like