Google AI Mode (Next Gen Search) is pretty good!

Been using it for the last few days under Labs, and it's replaced the need for perplexity.ai and ChatGPT search for me. Being in the Google ecosystem makes things easier in many ways too, with its integrations.

I like the cleaner UI as well; Perplexity was starting to feel a little cluttered.

https://www.google.com/aimode

Try it out and let me know what you think.


Google AI Mode hallucinates and lies as well as any of the AI models. I tested it with questions about characters and the actors who played them in an older highly regarded HBO series, from 20 years ago, and it had incorrect answers for everything.

Katie


Indeed, I have been exploring Google's AI Mode over the past few weeks & I must say, it's a remarkable shift in how we approach search. As someone who has been in IT services for over a decade, I have seen many search evolutions, but this one genuinely feels next-gen.

For example, last week I needed to compare React Native vs. Flutter for a client project. Normally, I would have had to open 4–5 articles and compare performance benchmarks, community support & tooling. With AI Mode, I simply typed "React Native vs Flutter for a fintech app in 2025", & the response included key differences, pros and cons tailored to the fintech industry, & even linked recent benchmarks from GitHub and StackOverflow trends.

It also has some features I like:

  • follow-up questions and helpful links to the web
  • it divides the question into subtopics and searches each one
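That second point, the subtopic fan-out, is basically a decompose-search-merge loop. Here's a minimal sketch of the idea; note the `search` backend is a stub and the subtopic list is hand-picked, whereas in AI Mode both presumably come from the model:

```python
# Rough sketch of "query fan-out": split a broad question into
# subtopics, run one search per subtopic, then merge the results.

def fan_out(question, subtopics):
    """Build one focused query per subtopic of the original question."""
    return [f"{question} {topic}" for topic in subtopics]

def search(query):
    """Stand-in for a real search backend; returns a fake result list."""
    return [f"result for: {query}"]

def answer(question, subtopics):
    """Fan the question out, search each subquery, and merge results."""
    results = []
    for query in fan_out(question, subtopics):
        results.extend(search(query))
    return results

hits = answer("React Native vs Flutter",
              ["performance", "community support", "tooling"])
for hit in hits:
    print(hit)
```

The interesting part is that each subquery is narrow enough to retrieve focused sources, and the synthesis step (omitted here) is where the LLM stitches them into one answer.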

This is a classic use case for GenAI in professional services but always keep in mind… your competition is doing the same :wink:


I tried some simple questions about a band I liked and it hallucinated album and band member names. It’s even worse than other AIs out there, at least ChatGPT gives the correct information for the same prompt before adding some random hallucinations!

I wouldn’t trust this and would recommend others carefully verify what it says before taking action on any information, or you could make bad decisions based on fake information.

100% Agree, this is a textbook GenAI use case in professional services, and you are spot on about the competition. That is why I believe the real differentiator now is not just using AI, but how you use it and the human layer you build around it.

For example, in our agency, we have started pairing GenAI with our domain expertise to accelerate early research, basically outlines, but the final outputs always go through a strong layer of expert validation. It is not just about speed, it's about trusted relevance.

Do you see AI adoption in your space becoming more of a race for output velocity, or do you think the emphasis will shift toward quality augmentation through human-AI collaboration?

It’s both. Output velocity is necessary because we are all in a race to the bottom to boost efficiency (and cut costs). Quality augmentation is also a must because at the end of the day professional services are about generating trust with the client: that you can understand the problem, envision a proper solution and execute it. A slide deck with bullet points generated by an LLM is nice as a starting point, but the last mile of the sales funnel is still human.


:sweat_smile: Nailed the paradox perfectly.

Velocity gets you in the game, but trust wins the client. & as you said, that last mile still depends on human insight, empathy & the ability to connect dots that are not always in the data. That’s the piece no LLM can truly automate — at least not yet.

In our case, we have started drawing clear lines between AI-accelerated deliverables & human-authored insight. Clients seem to appreciate that transparency. It actually strengthens trust rather than weakening it: they see we are not just using AI to save time, but to spend more time on the parts that really matter.

Have you noticed clients becoming more AI-savvy in the sales process itself? Any shifts in what they expect or how they evaluate expertise now that everyone knows GenAI is in the mix?

Expert validation is incredibly important.

I used ChatGPT to determine the size of the belt I needed for my car. ChatGPT enthusiastically told me that a certain belt size would definitely fit my car, even though it didn’t have AC. I went and purchased that belt—turns out, it was too big.

I queried ChatGPT again. This time, it suggested that another belt might be more appropriate.

I reviewed the associated websites myself. Eventually, I found a car forum with human support, which clearly spelled out the belt size I actually needed.

ChatGPT was excellent at summarizing information and at creating a comparative table of different belt manufacturers and the associated specs. However, it fell short when it came to sorting out which belts were recommended. Despite my instructions, the AI couldn’t effectively separate the belt sizes meant for vehicles with AC from those meant for vehicles without AC. This was not necessarily the fault of AI, as the sites were full of contradictory information.

This is the danger of AI: it’s agreeable, but it lacks discretion, because that’s not what it’s designed to do. Human eyes need to analyze the information—particularly when it’s outside your area of expertise—to ensure that it’s accurate.

Not sure what you mean here, but the times when you could simply copy and paste what ChatGPT said were brief and are now gone. Any unedited ChatGPT material will be easily spotted (hopefully by some senior colleague).

Regarding expertise evaluation, the purely technical aspect was never all that important beyond the fact that you needed to have it, otherwise you're out. In my experience the decisive win keeps being about success cases, experience in the industry, etc. Oh, and the boring scoping, planning and budget.


I can see preferring AI Mode over Perplexity if you're a Perplexity fan. As a search engine for Google's index, though, it feels myopic, with the wordy AI summary and only a few links in the right sidebar. The briefer search summary at the top of traditional SERPs feels like a better balance. Our eyes can process search engine results so quickly, even peripherally. I could only speculate on the neuroscience, but I think that makes for a better 'summary' than the couple hundred words generated by the LLM.