Kudos to @Bmosbacker for the thoughtful commentary!
So, Claude it shall be. Can anyone recommend resources for learning how to use Claude in addition to what is on the Anthropic web site?
How does NotebookLM fit into the conversation? Since the writer Steven Johnson was instrumental in its development, it seems it might be useful for Dr Mosbacker’s book-writing projects.
I am not among the “everyone else” who Dr Mosbacker mentions. My experience with AI has largely been limited to querying Perplexity. I have no idea what LLM it uses. I installed Dia a couple hours ago and asked it if there was experimental support for a hypothesis I read about last evening in a book published 16 years ago. The answer was impressive.
I would say my interest in AI agents (is that what they should be called?) would be as an assistant or co-pilot. (Is “co-pilot” now a generic term? Or does it imply Microsoft’s Copilot, which I presume runs on OpenAI models, since Microsoft invested heavily in that company? There’s also a plug-in for Obsidian called Copilot, which claims you can use any LLM you want with it.) I guess I could ask AI.
Apologies for my naïveté. I clearly have much to learn.
One way is to ask Claude itself. Note, however, that you won’t get much if you just type something along the lines of “How do I use you?”—you’ll get better results if you’re specific about what you want to accomplish.
So, think of something you’d like to get Claude’s help with and ask the best way to go about working on the problem together.
That being said, Anthropic Academy does have some useful tutorials. If you haven’t done so already, try the module on Claude for personal use. It’s built around the concept of “AI Fluency,” which is a good place to start if you’re totally new to LLMs.
The entire “AI Fluency” module is quite helpful, but the section on effective prompting techniques is something you can use right off the bat if you don’t have time to work through the whole thing.
I don’t know why people feel the need to choose: use OpenRouter.ai. You can query multiple LLMs at once, and its pay-as-you-go model is so much cheaper than any of the subscriptions. (It’s literally pennies.)
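If you’re curious what that looks like in practice, OpenRouter exposes an OpenAI-compatible API, so a minimal sketch in Python might look like this (the model IDs and prompt are purely illustrative; check OpenRouter’s model list for current names):

```python
# Minimal sketch: asking the same question of several models through
# OpenRouter's OpenAI-compatible endpoint. Model IDs are illustrative;
# see openrouter.ai/models for current names.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_KEY",  # generated in your OpenRouter account
)

prompt = "In three sentences, what is retrieval-augmented generation?"

for model in ["anthropic/claude-3.5-sonnet", "openai/gpt-4o"]:
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {model} ---")
    print(reply.choices[0].message.content)
```

You pay per token for each call, which is where the pennies-per-query economics come from.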
Perhaps, as Steve Jobs might have said, “You’re doing AI wrong.” Here’s why I have been using mainly one LLM tool (feedback welcome):
Avoiding cognitive complexity - I’d rather learn the quirks of prompting for one LLM tool than for many. I’m wary of becoming a “jack of all trades, expert at none.”
Not using LLMs 24x7x365 - I don’t need perfection or absolutely optimal results. Using one tool that is “good enough” is simpler.
Workspace environment - The “value-added” benefits of memory, workspace configuration, a default global prompt/setup, session history, and the other soft perks of a specific LLM tool’s interactive ecosystem are hard to get from a generic prompt router/handler.
Note: I use LLMs only for specific tasks, primarily idea creation, preliminary online research (replacing Google), and answering specific “how do I…” tech questions (again, a Google replacement).
I do all my image generation and AI-assisted image editing directly inside Photoshop (especially now that I can easily switch between the Firefly and Google Nano Banana models). Most of my AI image work is cleanup/removal, augmentation, enhancement, or compositing, so being right there with my regular image tools makes the most sense.
My other “embedded AI” use is primarily for video enhancement/editing inside Adobe Premiere Pro. Some of those tools are generative, some are older ML models, and some are probably there without even being visible/obvious to me (audio cleanup, audio crossfades, blending, etc.).
Nope, not there. Apple News without the “+” is rarely the solution, in my experience. I’ve hit a paywall so many times when I click on one of its “Top Stories” links that I don’t even bother with it anymore.
I have come to this very fork in the road over the past few weeks, so stumbling across this post has been a very interesting read.
One thing that has me (begrudgingly) leaning towards ChatGPT is the apparent ease with which one can create a custom GPT, compared to Claude.
What I know about coding and API integration can be written on the back of a postage stamp.
So, whilst I am far more inclined towards Claude for all the underlying reasons highlighted here, my wanting to subscribe to a tool of this nature is specifically to allow me to build and experiment with my own custom GPT bot/tutor/database, and to share it with others so they can interact with it. My understanding is that ChatGPT allows this for anyone with a free OpenAI account who accesses it via my shared link, whilst Claude apparently requires a paid account to interact with someone else’s equivalent. Never mind the fact that creating it inside Claude is far less intuitive and requires more advanced know-how; at least, this is what Claude itself has told me…
If anyone has created their own toolkit inside Claude, I would really appreciate some feedback on how easy it was, what the limits are, etc., compared to the custom GPT builder over at OpenAI!
NotebookLM is a funny beast. It is exceptionally good at one small thing.
When I want to interact with and/or ask questions of thousands of pages of notes at the same time, it is excellent. One example I’ve used it for recently: I wanted to understand all of the problems that people who work in Scrum teams have asked about online. I exported the contents of several forums and uploaded them into NotebookLM. The value was that it digested far more information than I was ever going to manage using my eyeballs.
Its weakness: you must spend time uploading documents before you start.
I like using it as a supporting/secondary tool because of that. Unlike the majority of tools, it isn’t just a “me-too” variation on the same thing.
NotebookLM is quirky, but fun to use and can be useful. Mostly, I appreciate Google being willing to let at least one group “think differently” and explore a novel approach to LLM-driven tools.
The nuts and bolts are behind the paywall, but the author of the linked post (Nate B. Jones) uses NotebookLM in a manner similar to what @mlevison and @SpivR appear to do: as a tool for information retrieval, specifically from information you load into it. Once he’s used NotebookLM to extract information from the materials he’s gathered, he passes that information on to an LLM, e.g., Claude or ChatGPT, for further analysis and exploration.
The details of his rationale for this structure and his workflow are paywalled, but here’s an excerpt that sums it up nicely:
NotebookLM is optimized for retrieval. It’s designed to find accurate information in your documents and surface it with citations. It does this extremely well. What it doesn’t do well is think about that information, synthesize across multiple concepts, or create new content from it.
That’s because retrieval and thinking are architecturally different. A retrieval system like NotebookLM grounds its responses in the documents you gave it. It pulls from those sources and cites them. It doesn’t rely on the language model’s parametric memory—the training data baked into the model. That’s why the hallucination rate is so low.
A thinking system like Claude or ChatGPT does the opposite. It uses its parametric memory to reason, synthesize, and create. It can search the web if you ask, but its primary mode is generating from what it knows. That makes it powerful for thinking through problems, but less reliable for fact retrieval unless you’re explicitly constraining it.
PS - Nate Jones has a brief video up on YouTube on this topic as well. It’s a good capsule intro if you haven’t used NotebookLM before.
Well played. Further to this point, NotebookLM is very good at this task because it relies on Google Gemini’s ridiculous context window of 1 million tokens. So, unlike most other tools, it can keep all of your documents in the context window at once.
Most other approaches to document retrieval use RAG (Retrieval-Augmented Generation). This means they use some other mechanism to find the 10-15 most likely documents and feed those to the LLM.
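At its core, that retrieval step is simple to picture. Here’s a toy sketch in Python; the embed() helper is a hypothetical stand-in (a real system would call an actual embedding model and chunk/index the documents), but the shape of the idea holds: embed everything, rank by similarity to the query, and paste only the top hits into the prompt.

```python
# Toy sketch of the RAG retrieval step. embed() is a hypothetical
# stand-in for a real embedding model; real systems also chunk
# documents and store their vectors in an index.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding: a pseudo-random unit vector per text."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

def retrieve(query: str, documents: list[str], k: int = 10) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    # Dot product of unit vectors == cosine similarity.
    scores = [float(q @ embed(d)) for d in documents]
    ranked = sorted(range(len(documents)), key=lambda i: scores[i], reverse=True)
    return [documents[i] for i in ranked[:k]]

# Only the retrieved snippets -- not the whole corpus -- go into the
# LLM's context window, which is how RAG fits big libraries into
# small windows.
docs = ["Scrum standups keep running long...", "Our retros feel stale...", "..."]
context = "\n\n".join(retrieve("common Scrum team problems", docs, k=2))
prompt = f"Using only the sources below, answer the question.\n\n{context}\n\nQuestion: ..."
```

As noted above, NotebookLM can often skip this retrieval juggling entirely because Gemini’s window is large enough to hold the whole corpus.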