(The stock jumped up the next day, hit an all-time high, and a large number of analysts are very bullish about an iPhone “supercycle” this fall.)
As I think we all need to be reminded, the public at large is not represented by the people here.
Most “regular” people I talk to have barely heard of AI at all and have no awareness of ChatGPT or competing LLMs, but several are now asking me about AI and Apple because they follow the stock market or TV news.
I’m aware; my retirement account got a nice bump. Most investors are happy because Apple finally said something about AI.
And yes, they hope this will kick off a “supercycle” of upgrades in the fall, which could break Apple’s streak of declining revenue (down in five of the last six quarters). Mark Gurman, in a recent interview, said he did not expect this to happen.
What are the actual differences between the different AIs put forward by many of the big tech companies and others?
I used OpenAI’s ChatGPT app for iPad last night to find sources for an article I was writing, and it was a charm. It made it so much easier to find sources online instead of Googling.
But are there any actual differences between the different AIs?
If I use the free version of ChatGPT and then switch to Meta’s or another AI, will I get different answers?
Probably, IMO. They can vary based on the data they were trained with, the “guard rails” that may have been created, and a ton of things of which I have absolutely no knowledge.
The CEO/CIO of a large investing firm pointed out on CNBC, the other day, that AI is where the Internet was in the early 90’s. These are very early days.
They have one technical advisor who thinks AI will just be integrated into today’s operating systems and another who thinks AI will become the operating system of our computing devices. Time will tell.
Most are more similar than different (though they vary in output quality, topic coverage, etc.).
What Apple has done is throw down the gauntlet: for privacy/security reasons, a smaller model running on device, but able to seamlessly combine your personal data with its corpus of knowledge, provides a unique capability that will yield significantly different results.
This definitely sets a trap for everyone else: figure out how to include personal data while dancing around the privacy/security concerns inherent in scanning a user’s phone or device data (given that consumers are less trusting of Microsoft/Google/Meta/X, etc.), or neutralize Apple’s unfair advantage some other way.
Switching to a more esoteric techie view: I have read or heard some discussion (too many fragmented tidbits to provide good footnotes) that today’s LLMs have no ability to dynamically incorporate new data from any source (public or personal) and rely on periodic retraining to add to their corpus of knowledge.
But there is a technique being worked on (sorry, I don’t know the exact technical name) that injects each query with context that effectively tunes the model on the fly.
Anyone who has done “prompt engineering” will understand this: it is when you use very long prompts to ChatGPT that set the scenario, like “You are a novelist with 30 years of experience writing murder/mystery stories. Give me the outline of a new book that includes the following characters and overall plotline…”
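As a rough sketch, a “long prompt” like that is really just concatenating a scene-setting preamble in front of the actual request before sending it to the model. (All names here are hypothetical; `build_prompt` just stands in for whatever a real API call would receive.)

```python
# Minimal sketch of "prompt engineering": prepend a detailed
# role/scenario preamble to the actual request. The function and
# variable names are made up for illustration, not a real API.

def build_prompt(scenario: str, request: str) -> str:
    """Concatenate a scene-setting preamble with the user's request."""
    return f"{scenario.strip()}\n\n{request.strip()}"

scenario = (
    "You are a novelist with 30 years of experience writing "
    "murder/mystery stories."
)
request = (
    "Give me the outline of a new book that includes the following "
    "characters and overall plotline..."
)

# The model receives the role-setting text first, then the real question.
prompt = build_prompt(scenario, request)
```

Nothing about the model changes; you are just steering it with extra context at query time.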
The research is exploring a concept of “infinite context,” where your interactions with an LLM are stored separately and continuously expanded; then, each time you interact with the LLM, that history is prepended to your prompt to create more in-depth context, simulating a dynamically trained model.
I’m already past my level of understanding and have probably mangled this explanation, but I think the goal is to claim they can keep this context data, including your private personal info from your device, secure and segregated, giving you the same level of results with a cloud-centric rather than device-local implementation.
The current approaches to infinite context aren’t actually infinite. Rather, when the accumulated information gets too long, they summarize it and then use the summarized version as context.
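In pseudocode terms, that fallback is just a length check. Here the “summarizer” is a crude truncation stand-in (a real system would call an LLM to condense the history), and the names and the character budget are invented for illustration:

```python
# Sketch of the summarize-when-too-long fallback described above.
# MAX_CONTEXT_CHARS is an invented stand-in for a real token budget.

MAX_CONTEXT_CHARS = 200

def summarize(text: str, limit: int) -> str:
    """Placeholder summarizer: real systems condense meaning with an
    LLM; truncation here just keeps the sketch self-contained."""
    return text[:limit] + "..."

def prepare_context(history: str) -> str:
    """Use the history as-is if it fits, otherwise use a summary of it."""
    if len(history) <= MAX_CONTEXT_CHARS:
        return history
    return summarize(history, MAX_CONTEXT_CHARS)
```

So the context window stays bounded; what grows without limit is the stored history that the summaries are made from.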
I’ve also heard that some groups/orgs are working on making it easier to retrain a model. If you retrained a model to be better suited to your purpose, you wouldn’t need as long a context.
FWIW, a recent ATP episode (within the past six weeks?) discussed this in depth.
I don’t trust any of these orgs. OpenAI made an interesting choice in appointing a former NSA chief to its board.