I’ve become less skeptical of the usefulness of AI in certain contexts, so I’m exploring potentially signing up for one of the more advanced (and potentially more private) services out there. But I don’t know whether I’m better off buying directly from one provider (e.g. ChatGPT, Claude, or Gemini), or through a separate service that gives me access to multiple models (Kagi, Raycast).
Any thoughts on this? Where do you get your AI? From one provider or many?
I’ve got Gemini, Claude, Raycast AI, ChatGPT and Grok … but I mostly use ChatGPT, without even thinking about the others. For various reasons, some of them to do with tax, I signed up for annual subscriptions to Gemini and Claude, and I get Grok because I have a paid Twitter account.
I would happily turn all of them off … and just stick with ChatGPT, which only offers a monthly subscription.
Why not try a month of ChatGPT?
(If I had to pick two, I’d have ChatGPT and Raycast, now that Raycast also has an iOS app. I love Raycast!)
I’d watch a few videos comparing the different models and their capabilities; they each have their use cases. I use Gemini with Google Workspace, and NotebookLM is worth checking out regardless of which model you choose. I’m currently taking a few classes and learning how to prompt and how to integrate AI into my workflow. It’s a big learning curve, but it is fun to dive in and see what’s possible.
I use OpenRouter, which gives me API access to everything out there. I use Obsidian all day, so I have connected it to OpenRouter, and I often try different models to get the best result. It costs me much less than subscribing, as I’m not a heavy AI user; it’s pay-as-you-go.
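For anyone curious what that looks like in practice, here’s a minimal sketch of a pay-as-you-go OpenRouter call. It leans on OpenRouter’s OpenAI-compatible endpoint, so the standard Python SDK works; the model slug and prompt below are placeholders, not anything specific to my setup.

```python
from openai import OpenAI

# OpenRouter exposes an OpenAI-compatible endpoint, so the standard SDK works.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_KEY",  # placeholder; billing is per token, pay as you go
)

response = client.chat.completions.create(
    model="anthropic/claude-3.7-sonnet",  # swap in whichever model slug you want to try
    messages=[{"role": "user", "content": "Suggest three titles for this note: ..."}],
)
print(response.choices[0].message.content)
```

Trying a different model is just a matter of changing that one `model` string, which is most of the appeal for light users.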
I also use Gemini 2.5 Pro directly via the Gemini API, as it’s free at the moment!
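Again, just a rough sketch of the direct route, assuming the `google-generativeai` Python package and a free AI Studio key; the exact model name is an assumption and may differ from what’s current.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")  # placeholder key from AI Studio's free tier
model = genai.GenerativeModel("gemini-2.5-pro")  # model name is an assumption; check what's current
response = model.generate_content("Summarise the trade-offs of pay-as-you-go API access.")
print(response.text)
```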
I have a Gemini subscription too, but I probably won’t renew it.
Likewise! I run https://openwebui.com locally on my computer and have it hooked up to OpenRouter (I found their website gets super laggy once you have a decent-sized chat with any model), primarily for use with Gemini 2.5 Pro and Claude 3.7. I also have Ollama connected, but I’ve realized the models I can run on my machine really aren’t sufficient for the coding problems I’ve started asking about.
Technically, I also have it included in my Kagi subscription, but have only used that once or twice.
Thanks for sharing about OpenWebUI. I like having an interface outside Obsidian for my various random queries through the day - part of the reason for a Gemini sub. That works great!
I tend to use Gemini 2.5 Pro Preview (despite its recent wobble), Claude 3.7 and ChatGPT 4.0 - all for assisting me in writing.
I usually default to Copilot. It feels “less creepy” than the other ones, but less advanced than, say, ChatGPT (particularly in image generation). My work deals with a lot of Microsoft services, so it was just a natural fallback.
Generally, I’d recommend exploring the models/modes on one hosted service first (OpenAI/Anthropic/etc.), unless you really like the entry point of the aggregated service (e.g. if it just seems wrong not to start in Raycast, build on that.)
If you have the machine(s) for it, it’s worth learning to run locally.
P.S. Getting an employer or client to pay for a subscription and/or the API credits is the best, if it’s an option.
I’ve tried paid plans for Claude, Gemini, and ChatGPT, but I usually stick with ChatGPT because I have the most history with it. My needs are light and primarily focused on ideation and brainstorming. Your decision may depend on whether you have specific requirements in certain areas. Additionally, unless you rely heavily on conversation history with a particular model, using a tool like Raycast or another external API-based option could be more sensible and save you money.
This was originally why I started using OpenRouter! You pay per token, in and out. I had loaded up $5 and it lasted me almost a year with how little I used AI.
I sampled a lot of different services and then just got a paid account with ChatGPT. I find their macOS app to have the least friction.
My usage is more focused on quickly iterating on some assistance, not a/b/c comparisons to see which tool is the most accurate for a specific prompt or task.
I think getting proficient with one AI tool, and only seeking out alternatives if you consistently bump up against limits or frustration, is a good way to walk before running.
I do find generative AI (LLMs, generative images) is really a small portion of my AI use. Much larger usage comes through more sophisticated and automated tools added to the existing productivity software I use.
Taking stock, my biggest gains have been the automated/improved algorithms in Photoshop (image editing, especially object removal/cleanup) and Premiere Pro (video editing and voice processing).
As an aside, I do find myself using ChatGPT as my “go to” first stop for all web searches now.
I don’t know how long it will last, but the clean text results, with only tiny backlinks/references to the source material, are much more refreshing than the ensh**tification of search results from Google and Microsoft.
I also love the observation from Dave Hamilton of Mac Geek Gab that he uses ChatGPT’s voice interface to have conversations while driving at night, staying awake and entertained by researching random topics or just following his curiosity.
I find ChatGPT superior to Claude, Perplexity, and Gemini for anything text/document related. The only other one I use regularly is JetBrains AI, because I find it light-years better than any other AI for coding. Luckily, my work pays for all of these premium subs for me, but I barely touch the others. If I were paying, I’d only have ChatGPT and JetBrains.
I had a ChatGPT Plus account for the last month, but today I switched to Gemini Advanced for the 30 day free trial. Thus far, I am still liking ChatGPT Plus more, but it could just be out of familiarity at this point.
BoltAI (via Setapp) lets me interact with Anthropic, Google, and OpenAI via API keys; I probably spend a few dollars a month.
LM Studio for local models running directly on my Mac; currently the Qwen3 models are very good. Cost is either free, or $6K Canadian to purchase an M3 Max with 64GB of RAM. (A quick sketch of calling its local server follows this list.)
Obsidian Copilot
Google Gemini via the web when I need search.
NotebookLM audio via the web
VSCode and Copilot
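Since I mentioned LM Studio above, here’s a minimal sketch of how the local setup gets used. LM Studio serves whatever model you’ve loaded over an OpenAI-compatible local API (port 1234 by default), so the same Python SDK works; the model identifier below is just an example of a Qwen3 build, not necessarily what you’d see.

```python
from openai import OpenAI

# LM Studio's local server speaks the OpenAI API; nothing leaves the Mac.
client = OpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",  # the local server ignores the key, but the SDK requires one
)

response = client.chat.completions.create(
    model="qwen3-14b",  # example identifier; use whatever LM Studio lists for the loaded model
    messages=[{"role": "user", "content": "Outline the pros and cons of running models locally."}],
)
print(response.choices[0].message.content)
```

The nice side effect is that tools like Obsidian Copilot and VSCode extensions that accept a custom OpenAI-style endpoint can point at the same local server.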
The irony is I’m deeply skeptical of the value outside narrow contexts.
My stack shifts pretty rapidly based on which models release what, but these days I’m on ChatGPT Pro and Gemini Advanced for chat/research, Wispr Flow for dictation, and Sonix.ai for transcription. I’m considering Kagi Ultimate (currently on Professional) for better models when I want AI to quickly answer my search query. I just started playing with Oniri.
I’m quite liking BoltAI as a frontend and chat application for the OpenAI / Gemini / Claude APIs. Cost is far better with the API than with a subscription, the features of a third-party frontend are quite nice (e.g. multiple providers’ models, chat grouping, and different system prompts per chat), and my workflow changed 0% when the latest GPT-4.1 came out; I just changed the model setting in a few key chats and in the default “new chats use” selector and carried on as I had been already.
I’m surprised Claude isn’t getting more love here. I subscribe to a few different ones, but Claude is my go-to writer. I find that if I give it samples of my own writing, it can really dial in. I have a project set up for sales-related emails, and another that takes a large archive of my own article writing and creates new articles based on the raw material I give it. Typically, after 2-3 tweaks (maybe a new lead paragraph or a couple of connections from context outside the raw material), I have something that is just as good as the straightforward thing I would have posted anyway, in much less time and with less effort.
ChatGPT is getting much better at almost everything, but I think its writing is still quite obviously “AI writing.” Even when I give it the same writing samples to use as models, it just doesn’t get there for me.