Revolutions are the DNA mutations of societies. Some are better than others, but all of them are necessary.
Except that what I have in mind is that there aren't separate apps as such. All the apps are essentially the same "app": AI. One "app" to rule them all.
I know what you are talking about, and I respect your opinion. I just disagree. The way I see it, right or wrong, the world made a choice. And it wasn't WebKit.
What's different is that Microsoft makes its own choices for Windows software and the PC hardware that supports it.
Apple makes its own choices for macOS, iOS and Mac, iPhone, and iPad hardware that supports it.
But Google uses its market position to herd and control as many web users as it can. That was not the early idealistic vision of, or hope for, the web. Jen Simmons and others involved with real web standards have been fighting the good fight for decades.
Google dictating web standards is a bad thing, of course, but I don't see much of a difference here: Microsoft used its market dominance decades ago to herd and control as many users as possible. And Apple would also gladly do it if it could -- I'd say it's succeeding, at least on the mobile battlefront.
I would even say that Google open-sourcing Chromium has been a powerful gift, not only because it gives its own competition and other startups an OS-independent rendering engine -- used even by its nemesis Microsoft to build its own browser! It has also enabled a plethora of companies to build and deploy their products around Electron, a good second-order effect that wasn't so obvious.
If anything, this goes to show that while in the 2010s it was thought that "open source has won," in reality all of this base software (OSes, databases, browsers…) is essentially free as in beer. Open source won, but that was no longer relevant: the money to be had was in the data that flowed through that software. This is the power move that Google and the darker Meta made to move beyond Microsoft or Apple: why build your dominance on the consumer's hardware choice? It's much more effective to base your dominance on how the consumer behaves!
Google will not push any change to a standard that doesn't benefit this strategy, and that is the key aspect here: what are standards for, and what do we need them for? We can't have freedom of choice without neutral standards, and that's the vector of Google's corporate evilness.
I don't want my carbon footprint to become a giant hoof! Running GenAI all the time while browsing the web isn't something I'm willing to do, as I believe we should all be doing our part to reduce energy usage. If everyone uses these tools, which constantly query AI, our children won't have a future, or at least it will be like the future described in Revelation. If Safari ever integrated AI like this, I'd switch to another browser.
I much prefer to use GenAI intentionally, when I really need it. Of course, this isn't what the AI companies want.
I agree. I use AI regularly, but for select tasks. I find it annoying that Google (which for me is still usually the best search) serves up often useless, and sometimes completely inaccurate, AI results for every search I make. What a waste of energy! Especially since it has an AI tab I could use if I wanted some interactive element.
AI seems to be peaking, at least for the moment. The productivity gains are not nearly as high as businesses were promised, given the need to check its results and the limits on its applicability in a corporate context. I have a Gemini subscription, but the integration with Docs/Sheets/Mail has not been particularly useful to me, except for summarising a particularly long-winded email.
Apple do need to up their game with Siri; releasing a poor AI product is worse than no product at all… but they don't need to replicate Google; simply provide easy access to third-party AI when people need it and include it where it makes sense -- like summarisation, photo manipulation, translation, and so on.
And I remember how people HATED Microsoftâs IE 6 and the way it enabled Microsoft to throw its weight around.
I would argue that the web is still hardware-agnostic, as (theoretically) anybody could implement the "standards" that Google is attempting to shove down our throats.
The fact that a standard is miserable doesnât mean that itâs not interoperable from vendor to vendor.
Interoperability seemed to be @WayneG's concern, not mine.
Miserable standards used as a competitive advantage are a concern.
Agreed 100%. And I've got concerns about the privacy aspects of AI in the browser as well. And then there's the issue of interaction with original content. In general, much of the current AI push seems to focus on reducing content to summaries and blurbs, at least on the consumption side of things.
We seem to increasingly be living in a world where people do not want to take the time to actually read anything more than a few sentences. As discussed countless times, going with the summary only opens up the door to accepting the wrong answer because verifying is too much work.
Yes! And this is a major problem.
The result will be an increasingly shallow, illiterate culture as people lose both the interest in and the attention span for heavy, substantive material, regardless of genre. At the more pedestrian level, it is becoming increasingly difficult to persuade people to read even a short two-paragraph email. It is a sad state of affairs. I read recently that we must ask two questions when using technology:
- What does this tool do FOR me?
- What does this tool do TO me?
Or write one. I exchange email every few months with one friend/former co-worker whom I haven't seen for 20+ years. The rest of my friends and family text.
The two paragraphs were more my focus than the medium.
I would add ability.
The more we outsource knowledge-based tasks to AI, the less likely we are to retain or develop the ability to perform those tasks ourselves over time. We trade capability for convenience. Vibe coding is a good example of this, but if you apply it more generally it gets dark.
You missed the scare quotes around "web standards". They are Google's proposals.
Nor perhaps even a business…
No wonder OpenAI is turning to sex, because as we all know, sex sells.
The French Revolution was pretty rough…
I am finding AI tools to be very useful in helping me with programming. Revolutions always make me nervous because so many of them go bad.
I was being somewhat tongue-in-cheek about revolutions, though some do produce good outcomes. I, too, find AI tools useful; I simply don't want them to deskill me, nor am I willing to present AI-generated text as my own.
So much of the "journalism" and resulting discussion around "AI" over the past four years has just been so sloppy, lacking both understanding and healthy skepticism. I think the sloppiness reflects the informality of the information ecosystem.
On the one hand, you've got the non-tech normal folk who get their tech news from non-tech news sources and whatever buzz breaks through in chats or on social media. It takes longer for this stuff to reach them, and they are often more cautious because they're happy with what they've been using.
But on this side of the fence, there is a well-developed ecosystem of "content producers," along with the specialty tech press that, in theory, covers tech professionally as journalists. I think it's this ecosystem that has largely failed to be properly critical in the face of well-funded hype.
The sloppiness is that we have this weird mix of influencers and non-journalist podcasters whose success is based not on accuracy or well-researched journalism but on viewership/listenership -- which is to say, popularity, clicks, etc. And even in the journalism sphere you sometimes have companies that sit in the middle between journalism and ad sales, with staff writing fluff-piece "top 10 best" posts and others regurgitating the latest corporate press release.
The outfits that do cross over into actual journalism often just publish a stream of half-formed analysis and speculation, especially about new technology like LLMs that they don't understand. And churning under it all are the billion-dollar investments and a constant attempt to strategically push the conversation in ways favorable to the tech and its investors.
Part of our problem is this mess of a system we've created for discussing social issues, whether tech or otherwise. I think tech provides a perfect focal point because it is so heavily weighted in modern life. And within this, the LLM/AI discussion is the perfect example of a failed public discourse.