The “random YouTuber hungry for clicks” focuses on Apple and has 1.5 million subscribers. He doesn’t tell us what Apple should do, but rather what Apple has done differently from its competitors in the past and why Apple has lost its leading position in both AI and the stock market, even though it has spent absurd sums, many tens of billions of dollars, on share buybacks.
The strategic question isn’t whether current AI companies are profitable. It’s whether Apple’s approach of prioritizing shareholder returns over technology investment will position them well for the future. This isn’t about YouTube subscriber counts. It’s about resource allocation priorities.
You’re correct that AI companies aren’t profitable, but this misses the strategic importance of positioning for future market shifts. Apple seems to have recognized this, given their recent massive investment announcements:
Apple has recently changed course dramatically, announcing plans to invest more than $500 billion in the U.S. over the next four years. Apple’s top management plans to invest in a new AI server manufacturing facility and to hire 20,000 employees in AI, software, and R&D (Source).
Greg Wyatt Jr.'s core claim about Apple refusing substantial AI chip investments in 2023 while competitors spent heavily is substantiated. Apple spent more than double its R&D budget on share buybacks last year, some $77 billion. (Source). Apple’s current struggles with Apple Intelligence and Siri began in early 2023 when AI head John Giannandrea sought approval from CEO Tim Cook to purchase more AI chips for development. Cook initially approved doubling the team’s chip budget, but CFO Luca Maestri reportedly reduced the increase to less than half that amount. At the time, Apple’s data centers had about 50,000 GPUs that were more than five years old – far fewer than the hundreds of thousands of chips being purchased by competitors like Microsoft, Google, and Meta. (Source)
They announced a $400+ billion US investment plan four years ago. This is a continuation of a course they were already on, an increase to something already in progress, and by no means a recent course change.
LLMs are a dead end and have hit a wall. They require incredible amounts of power and compute for very little gain. Meanwhile, the productivity gains and employee replacement are not going all that well in the real world, despite what the snake oil salesmen continue to claim.
LLMs are a fantastic technology demonstration that has been massively overhyped when it comes to real-world benefits. And in no way does Apple need to be in the LLM business. Even Sam Altman recognizes it is a bubble, while at the same time saying OpenAI needs three trillion dollars to take the next step. Three trillion dollars, with no plan to be profitable. And clearly, despite the claims, OpenAI has no idea how to achieve “AGI”.
The strategic question is whether LLMs are worth investing in. I’m thinking Mr. Maestri made the right call here.
I think Apple has been sensible by not going all out on AI. It’s better to let AI companies work out the kinks and develop the models, and then tap into their resources, than waste billions on a technology that has little to do with Apple’s core business.
Imagine they’d gone all out on Blockchain during its hype. At the height of that cycle, Blockchain evangelists believed it would replace the internet, but in the end it was just a useful database in some niche areas.
I think LLMs will be useful for text processing and for developers writing code, but not much more. The hype will die down and the evangelists that came from Blockchain to AI will leave for the next big shiny hyped up technology. AI will become truly useful when more advanced forms of intelligence are developed, using other types of machine learning that don’t rely on statistical models that process lots of data.
It’s not morally or economically feasible to use an LLM all day for automations and agents, as the AI companies want, because it uses so much power. This is irresponsible and wrong: if everyone did it, it would lead to an environmental disaster. Often, a simple shell script (run by a scheduler like cron) can do the same automations with almost zero energy requirements.
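To make the cron point concrete, here is a minimal sketch; the directory, filenames, and schedule are hypothetical, not taken from any real system. A few lines of POSIX shell, woken once a night by cron, handle a routine cleanup task that might otherwise be handed to a power-hungry always-on agent:

```shell
#!/bin/sh
# Hedged sketch: the log directory and the cron schedule below are
# made-up examples for illustration only.

# prune_logs deletes *.log files older than 30 days in the given directory.
prune_logs() {
    dir="$1"
    if [ -d "$dir" ]; then
        find "$dir" -name '*.log' -mtime +30 -exec rm -f {} +
    fi
}

# Run against a (hypothetical) application log directory.
prune_logs /var/log/myapp

# One crontab line schedules this nightly at 03:00; no always-on process needed:
#   0 3 * * * /usr/local/bin/prune-logs.sh
```

The crontab line is the entire “agent loop”: cron starts the script once a day, it runs for a fraction of a second, and the machine does nothing in between.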
People need to stop using these tools everywhere they can; instead, they should think about where an LLM is genuinely the only alternative and choose more efficient solutions everywhere else.
Apple is making a smart move by not jumping on the hype bandwagon and waiting for the GenAI bubble to burst.
I mean more that Apple didn’t do what Meta, X, etc. did and build their own cloud-based LLM with a massive dataset and custom model. That, to me, would be all out. Apple Intelligence seems like a compromise, especially in the way ChatGPT is integrated.
Sorry, but this quote seems to indicate someone is drinking the kool-aid of the YouTuber.
Saner minds have already debunked the “big Apple investment” announcements of $500 billion and later $600 billion in the US as superb PR that duped our incumbent President.
Apple has merely reiterated investments it was already planning, which, spread over many years, are quite reasonable for a company its size.
It did have the effect of appeasing our President and keeping the worst of tariffs away from smartphones and Apple.
Apple is the darling of Wall Street and investors, current negative PR about AI notwithstanding.
Through savvy financial management, of which share buybacks are just one piece, they have grown into one of the largest and most profitable companies in the world.
Not to mention the exploitation of workers in the Global South.
Many AI applications, including large language models, rely on patterns learned from labeled datasets to generate accurate responses to new inputs.
Large AI companies, such as OpenAI, often outsource the labeling of these vast datasets to regions like Africa, where workers face low pay, limited benefits, and long hours, often engaging with sensitive or graphic materials.
The quote is from Moving toward truly responsible AI development in the global AI market, a Brookings Institution report. There’s a considerable body of academic research on this topic, as illustrated by the copious footnotes to this SAIS Review of International Affairs article. Karen Hao’s Empire of AI documents the lives of these workers and gives them a human face. One may bristle at labeling this “AI Colonialism,” but the human and environmental costs that underlie the proliferation of LLMs should give us all pause.
There is, of course, the irony that many of these workers lost their jobs reviewing porn and other horrible material to moderate the social web on behalf of Facebook, Twitter, and many others when those companies dismantled their self-censorship efforts.
They probably welcome gainful employment tagging datasets for AI as orders of magnitude less disgusting than the work their previous employers required.
Alas, it appears that the working conditions are no better, and neither is the content they are asked to review.
Yes, there are workers who are tasked with data annotation, but they are paid little, have no job security, and no power to affect the terms of their employment.
But there are also workers who are tasked with precisely the kind of content review and moderation that’s applied to social media.
From the summary of a Stanford Graduate School of Business discussion between Karen Hao, author of Empire of AI and Stanford Professor Anat Admati:
To ensure that models don’t produce harmful or toxic content, companies rely on very low-paid contract workers to perform content moderation, data labeling, and cleanup—tasks that can be psychologically damaging due among other things to the horrific text and visual images they need to review.
“And just like in the era of social media workers who did content moderation, these workers became deeply traumatized. They suffered from PTSD, and so these AI companies are repeating those harms.”
(Both the full summary and the video of the discussion are available at the link.)
The second type of worker in the AI industry is the content moderator. Because OpenAI and other AI companies are scraping the detritus of the internet for data, a substantial portion is saturated with racism, sexism, child pornography, fascist views, and every other ugly thing one can think of. A version of AI that doesn’t have the horrors of the internet filtered out will develop these characteristics in its responses; indeed, earlier versions of what would become ChatGPT did produce neo-Nazi propaganda, alarming OpenAI’s compliance team.
The solution has been to turn to human content moderators to extract the filth out of the AI’s system, in the same way content moderators have been tasked for years now with policing social media content.
Like the click workers, they are completing small digital tasks by annotating data, but the data they are annotating consists of the vilest content humans can produce. Because they are training the AI, it’s necessary for content moderators to look closely at all the gory details that flash up on their screen in order to label each part correctly. Being exposed to this repeatedly and exhaustively is a mental health nightmare.
Hao follows the story of Mophat Okinyi, a Kenyan content moderator working for outsourcing firm Sama, moderating Meta and OpenAI content. The longer Okinyi worked for Sama, the more his behavior became erratic and his personality changed, destroying his relationship and leading to spiraling costs for mental health support.
(The review is in Jacobin, so it deals with AI’s implications for labor at some length. I’ve read Hao’s book, and the review’s summary of this issue does track with the text. The review does go on to discuss what’s not in Hao’s book: the advent of Deep Seek, which the reviewer sees as a challenge to AI scaling laws and “big solutions”—the reviewer’s term—to curb the concentration of market power and ensure the equitable distribution of AI’s promise.)
“Apple Intelligence” is not the result of a long-planned strategy, but a hectic reaction to months of sharply falling share prices in spring 2024.
The AI hype caused Microsoft’s share price to boom at the same time.
Falling behind competitors and losing its leading position on the stock market was the reason Apple first spread rumors about upcoming AI features and then officially announced Apple Intelligence in June 2024 (to be released later).
Since then, Apple has advertised every piece of hardware as “Built for Apple Intelligence”, even though they have been asleep for years and won’t deliver before 2026 or 2027.
For more than a decade, Apple’s management didn’t care that Siri was technically inferior to other voice assistants and that many users complained about it. This was a clear indicator that they didn’t care about AI.
You know this because you took part in Apple’s strategic planning meetings? Or because you watched some YouTube videos?
Yes, knee-jerk reactions to stock market trends are Apple’s modus operandi.
You continue to confuse LLMs with AI. And thus discount all the ways Apple has successfully built AI into its products.
And you have yet to address the fact that despite the high valuations, the major LLM companies have no clear path to profitability. A classic stock market bubble that Apple is correctly staying out of.
This is purely conjecture. Reverse-engineering what a company does based on its public posturing is a fool’s errand.
I didn’t work at Apple, but I have worked at other Fortune 500 companies and it was comical, inside the company, to see how our strategy was being dissected and analyzed by clueless people on the outside.
So I know all these so-called analysts, pro or amateur, have no clue about the real inner workings and strategy at most companies, not just Apple.
MevetS, your dismissal sidesteps the documented evidence entirely. I didn’t claim insider knowledge - I cited specific, verifiable financial data: Apple’s $77.5 billion in share buybacks versus minimal AI chip investment, sourced reporting about John Giannandrea’s budget requests being cut by CFO Luca Maestri, and the timeline of Apple’s AI announcements coinciding with stock pressure.
Your “YouTube videos” dismissal is intellectually lazy when the argument rests on publicly available financial statements and business reporting, not speculation.
Regarding your “knee-jerk reactions” sarcasm: Apple’s history actually supports reactive behavior. Remember their late entry into tablets (despite having tablet prototypes years earlier), their resistance to larger phones until market pressure forced the iPhone 6, and their delayed entry into streaming services. The pattern exists.
Your “LLMs vs AI” distinction misses the point entirely. Yes, Apple has narrow AI in photos and chips. But they’ve demonstrably fallen behind in conversational AI - the area driving current market valuations and user expectations. Siri’s continued limitations after a decade aren’t evidence of strategic AI success.
Your profitability argument, while valid, ignores strategic positioning. Companies often invest heavily in emerging technologies before profitability materializes. The question isn’t whether LLM companies are profitable today, but whether Apple’s resource allocation (prioritizing buybacks over AI infrastructure) positions them competitively for the next technology cycle.
Can you address the specific financial evidence rather than deflecting to insider knowledge claims?
SpivR, Your Fortune 500 experience doesn’t invalidate documented financial evidence. My analysis isn’t based on “public posturing” - it’s based on verifiable financial data: $77.5 billion in share buybacks, specific reporting about budget cuts to AI chip purchases, and observable timelines.
Your “outsiders can’t understand” argument is intellectually convenient but fundamentally flawed. It could be used to dismiss any corporate analysis, making it unfalsifiable. Should investors and financial analysts stop evaluating companies entirely?
Resource allocation is observable fact, not conjecture. When a company spends more than double its R&D budget on share buybacks while cutting AI infrastructure investments, that’s documentable behavior with strategic implications.
Your inside experience may give you perspective on strategy complexity, but it doesn’t erase the pattern of Apple’s reactive behavior: late smartphone screen size increases, delayed tablet launch despite early prototypes, and now AI infrastructure underinvestment followed by rushed announcements.
The timeline correlation between Apple’s stock decline, competitor AI advances, and Apple Intelligence announcements isn’t mysterious insider knowledge - it’s publicly observable sequence of events.
If you believe the financial evidence is wrong or misinterpreted, address that specifically. But dismissing documented analysis with appeals to insider mystique doesn’t strengthen your position.
Your underlying premise, that Apple needs to spend significantly to develop an LLM, is fundamentally flawed. And that invalidates your conspiracy-theory-like arguments.
The stock buyback versus R&D spending is not the smoking gun you think it is. It is prudent business management. But like other conspiracy theories, it can be contorted to fit the predetermined narrative.
Look at the thumbnail for the YouTube star you reference. No, not clickbait at all. Now, imagine he had instead presented something along the lines of what is really happening: “Apple charts prudent AI development course as OpenAI and others burn through billions with no profit in sight”. Fake controversy gets eyeballs, which gets ad revenue. Sadly, calm, reasoned analysis is left by the wayside.
And as to intellectual laziness, your “journalistic” sources clearly mischaracterized Apple’s $500 billion investment, when a bit of research easily shows what @SpivR and I both noted: it was an already underway program. But hey, that doesn’t fit the narrative.
And again you contort “Apple’s knee-jerk reaction” to fit the narrative. Apple lets markets develop and then presents a finished product. But there is a narrative to support, so ignore what really happens.
Is Apple perfect? No. Do they make mistakes? Yes. Is Siri as bad as bloggers claim or as good as it can be? No.
No, it does not. It shows that Apple understands what a mature AI technology looks like and that they can seamlessly integrate it into their products. Contrast that with Microsoft and their Recall project. And it shows sloppy thinking when one does not make that distinction.
And, as @Rob_Polding eloquently states above, it shows Apple understands the current LLM environment and is not buying the hype.
I have above, as have others, in this and other posts, but you’ve made up your mind. And as noted, the conclusions you draw from this ‘evidence’ are flawed. Apple has more than enough cash, on the order of $250 billion, to do the stock buyback and double the R&D budget. So another flawed argument.
Were you in the budget meetings? No? Then you have no idea how the chip discussions went. Maybe it was, “We can get a good deal if we buy N chips.” “Do we need that many?” “No we need half.” “Ok, then let’s buy half.” (And I’ve actually been in budget meetings with similar discussions.)
This “reporting” is like the annual “Apple cuts iPhone orders” stories.
Again, your fundamental premise is flawed, and your smoking gun is not smoking. Once upon a time, ‘analysts’ were all over Apple because they didn’t have a netbook. Or that large phone you went on about. Or their own search engine. Instead of a netbook we have the MacBook Air (and netbooks are no more). And the large phone thing worked out pretty well for Apple (of course they are doomed because they don’t have a folding phone). And why spend all that money on a search engine when you can just use somebody else’s (and get paid to do it!)?
Apple does not need to build an LLM. The deal with OpenAI to make ChatGPT available, at no cost to Apple (albeit not as good as the Google deal, but then, Google is profitable), shows this. Let others burn through the cash.
Apple doesn’t own chip foundries. But because they understand what it means to be profitable, they can buy the capacity they need, and get priority. By not wasting their money, they will be well positioned to make a deal with whichever LLM company they choose, or even buy one once the bubble bursts.
Apple’s $250 billion cash position means they could afford both buybacks and R&D increases. Your partnership strategy argument has merit - leveraging others’ LLM investments while maintaining financial discipline could be smart positioning.
Your “conspiracy theory” labeling is pure ad hominem - a rhetorical dodge that attempts to discredit documented financial analysis by associating it with irrational thinking. This strategy backfires because it signals you can’t address the evidence directly. Financial resource allocation analysis isn’t conspiracy theorizing; it’s standard business evaluation.
Your “fundamentally flawed premise” claim mischaracterizes the argument. This isn’t about Apple building LLMs specifically - it’s about AI infrastructure investment during a critical technology transition. The chip budget cuts affected foundational AI capabilities, not just LLM development.
Attacking the YouTube source over and over again while ignoring the CNBC and business reporting I cited compounds the ad hominem pattern. The financial data stands regardless of Greg Wyatt’s thumbnail design.
Your “prudent business management” defense of buybacks versus R&D misses the strategic context. Having cash doesn’t make every spending decision prudent. Kodak had cash when they delayed digital camera investment too.
The $500 billion announcement timing supports rather than refutes the reactive argument. If this was long-planned, why announce it precisely when AI competitors were gaining market value and Apple’s stock was under pressure?
Your “let markets develop” philosophy ignores that in rapidly evolving technologies, late entry carries compounding costs. Apple’s current Siri limitations and reliance on buying services from competitors demonstrate this risk materializing.
Most importantly, you haven’t addressed the core timeline evidence. Can you explain why Apple’s AI infrastructure investment decisions in 2023 weren’t strategically shortsighted?
I fear you are conflating very different things: technology research and product prototyping on the one hand, and market/product launch timing on the other.
It is well known (Steve Jobs’s authorized biography and other sources) that Apple developed the now-famous iPad before the iPhone, but chose to pursue launching the iPhone first.
Your analysis assumes that if a company does not launch the first product to market in a new genre, or does not reactively respond to a competitor’s launch, then it is falling behind the market and any future product launch is a knee-jerk reactive move rather than a planned and calculated strategy.
Of course, competitive moves and evolving consumer and market trends influence the go-to-market tactics of a company, but Apple has been somewhat unique in pioneering its own paths rather than following trends.
I apologize for botching the quote; Steve Jobs is famous for saying something to the effect of “great companies don’t give consumers what they want; they give them what they don’t even know they need”.
LLMs versus general AI is very much a replay. We are only at the beginning of this “technology versus feature versus company” AI evolution.
Nope. It fits the facts better than your arguments. But you be you.
No. There is no critical technology transition, despite what the snake oil salesmen might say. There is a bubble for a technology that will have niche applications at best.
As I and others have noted, the financial arguments are either incorrect or based on a false premise. And the thumbnail and his argument are flawed.
It was announced four years ago. Apple’s stock was not under pressure.