What is it for?

I have the ChatGPT and Gemini apps on my phone. But aside from the odd request, can someone please tell me what this is for?

If I have a search request, there’s a very good chance I want a living, breathing human to have researched and written the response.

So what am I missing out on? Because aside from the hype, I’ve not seen anything.

1 Like

I’m strict with how I use AI, as I do not want it to replace my own thinking or writing. I don’t want what little reasoning or writing skill I have to atrophy.

That said, I’ve found several useful applications. But instead of sharing my list, try asking AI directly. Prompt it with your skepticism and your boundaries. Argue with it. See if it can surface anything that might be genuinely helpful to you.

If you want to go deep, consider subscribing for one month. The deeper thinking and research capabilities used to be a separate mode, but now they are integrated into the chat. You just need to ask for them. To conduct deep research or take a deeper dive into your question about AI, try prompting ChatGPT using something like this sample prompt:

Approach my questions and skepticism about AI with deep thinking. Take a multi-step, critical reasoning approach. Weigh trade-offs, evaluate from multiple perspectives (pros and cons, strengths and weaknesses, what is possible and the limitations), and give practical applications for the use of AI. Do not oversimplify or summarize. Go deep in your research and analysis. Consider all sides of this issue.

Warning: when AI engages in deep research or “reasoning,” it can take time and may be rather verbose. 🙂

AI is a tool. The quality of the outcome is often shaped by the clarity, precision, and intent of your prompt.

Also, do an old-fashioned, and still effective, web search.

If after that you still don’t find a compelling reason to use AI, then don’t. FOMO is not a good reason to use any tool. I learned that the hard way by thinking I was missing something important if I didn’t use markdown in my writing. I concluded I wasn’t. 🙂

3 Likes

My recent personal use cases for AI:

  1. Coming up with ideas for a birthday party based on a vague description from my son of what he wants. It’s a creative starting point.

  2. Given a manually curated recipe collection in Mela, I tell the AI what’s in my veg box for the week and it selects recipes that will use up the veg, including creative substitutions.

  3. My son has autism and ChatGPT is good at quickly creating social stories to get him prepared for new scenarios. I find them weirdly hard to write myself because they need to be very simple.

  4. I split my grocery shopping bill into categories for expense tracking. There’s enough variance in product names that AI does a better job than something more deterministic, although it does need some supervision on its maths.

  5. Researching purchases in product categories I’m not familiar with. Search results are so full of AI slop that ChatGPT gives me a better starting point of specific products to research.

  6. When my son gets stuck on a game level I can quickly ask it for a strategy instead of trawling forums.

The common theme is that it’s all either very low stakes or a starting point for more human research, but it’s definitely saving me time and improving general quality of life.

4 Likes

It’s a good idea to avoid the hype - sometimes I think people overhype it, and sometimes I think it’s not hyped enough.

I often go for a drive or bike ride, put my AirPods Pro in (the voice isolation is spectacular) and turn on the advanced voice mode, and chat with it. I’ve solved a lot of my business problems by chatting with it (usually I solve them, it just helps, like a coach or friend who is good at listening), and I use it to talk through my IP.

I have been playing Zelda: Breath of the Wild lately, and I often use it to get me unstuck. Sometimes I just turn on constant voice mode and leave it sitting there so I can ask it questions without having to pick up my hands. (Voice mode requires a paid account, I think.)

I find I’m using Google search far less now. On my Mac I have Raycast, and I use it to do quick AI searches (option-command-space, “What’s the difference between hiding and minimising a window on Mac, again?”, Tab, and I get the answer written out as an answer).

Hope those examples help, a little.

4 Likes

Wow. Never thought of that! Very clever.

3 Likes

I use AI to help me with research and learning, although I don’t generally use my phone.

  1. I use the Obsidian web clipper to port content from the web directly into one of my Obsidian vaults. One of my web clipper templates prompts Claude (via my Anthropic API key) to prepare a summary of the clipped material that I can use as a refresher later when I’m reviewing my notes. (I had Claude help me prepare the prompt so that I could get exactly the kind of summary I need.) I have another template that prompts Claude to create a five-to-seven-question short-answer quiz on the clipped material so I can test myself on how well I understand it and can talk about it in my own words. Both templates prompt Claude to extract a list of keywords as well as provide a list of related works or concepts should I want to probe deeper into the subject. (There’s a rough sketch of the underlying API call after this list.)

  2. I’m test-driving NotebookLM as a tool to help me digest my repository of notes and texts on a research topic. Since NotebookLM handles markdown really well, I can upload my Obsidian notes into the relevant notebook along with whatever other material I have. A couple of things I don’t like about NotebookLM: it doesn’t parse PDFs particularly well, and there’s no way to highlight or annotate the source material inside your notebook. On my to-do list: figuring out how to achieve more or less the same ends using an API key and DEVONthink.
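For anyone curious what a clipper template like mine is doing behind the scenes, here’s a minimal sketch of the same kind of call made directly with the Anthropic Python SDK. The model name and prompt wording are illustrative stand-ins of my own, not the actual template:

```python
# Minimal sketch: ask Claude to summarise a web clipping, roughly as the
# clipper template does. Model name and prompt are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def summarise_clip(clipped_text: str) -> str:
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model alias
        max_tokens=1000,
        system="You summarise web clippings for later review.",
        messages=[{
            "role": "user",
            "content": (
                "Summarise the following clipping. Then list five keywords, "
                "a five-to-seven-question short-answer quiz, and related "
                "works or concepts:\n\n" + clipped_text
            ),
        }],
    )
    return reply.content[0].text
```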

Could you share what plugins you use to do this? I have the web clipper.

(I asked ChatGPT and it made up a plugin.)

I just use the web clipper, no plug-ins required. It does take some tinkering to get the clipper templates set up to do what you want.

I used this guide to get me started: Obsidian's New Web Clipper - You'll Want to Try It • Stephan Miller

Happy to answer any questions tomorrow.

1 Like

Asking what AI is for is like asking what the internet is for. It’s so wide-reaching it’s almost impossible to comprehensively answer.

I deliver a beginners’ AI training course. Note that this covers only Large Language Models (LLMs), which are the ChatGPTs, Claudes, and Copilots of this world. There are other types of AI.

We focus on the following:

  1. Text generation - We walk people through writing a letter with a few short commands: “I need a cover letter for a job I wish to apply for. I have attached the job advert and my CV; please outline a cover letter to go with it.”

  2. Text refinement - Attach text you have already written yourself and critique it in an LLM. For example: “Check this text for plain English; make a list of suggestions for ways it could be improved and made easier for people to read.”

  3. Document summary and analysis - Attach a document to the LLM and say: “Summarise this document. Begin with a heading at the top that summarises the key facts about the document, like the date, length, and audience. Underneath, use headings and bullet points to summarise the contents. If there are any hyperlinks, summarise them at the bottom.”

  4. Assisted search - you don’t need it for simple searches, but for complex searches it can be excellent. For example say “I want to compare Ronaldo to Messi. Summarise the key facts for each player, such as goals, assists, trophies won across their entire careers. Summarise results in a table. Do the results season by season.” (Note that you need an LLM with a web search function to do this, but most do these days).

  5. Brainstorming - “Give me everything I need to take to the beach this weekend. I have 2 kids and the weather is hot and we’re going by train.”

  6. Creating How-Tos - “I want to build a patio. Please give me detailed instructions on everything I need to do from start to finish.”

I would urge you to note, however, that if you just put in a single-line question like the ones I’ve outlined above, you’ll get decent but limited answers. Part of learning AI is understanding how to get better answers out of it, mostly through multi-stage prompting: building on the model’s first response with follow-up instructions rather than stopping at one sentence. There’s a rough sketch of what that looks like below.
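To make “multi-stage prompting” concrete, here’s a minimal sketch using the Anthropic Python SDK, where the second prompt builds on the first answer. The model name and the prompts are illustrative placeholders; any chat LLM or app works the same way in its conversation view:

```python
# Minimal sketch of multi-stage prompting: feed the model's first answer
# back into the conversation with a narrower follow-up instruction.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(history):
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model alias
        max_tokens=1000,
        messages=history,
    )
    return reply.content[0].text

# Stage 1: a broad first pass.
history = [{"role": "user",
            "content": "Outline a cover letter for the attached job advert."}]
draft = ask(history)

# Stage 2: refine the first answer instead of starting over.
history += [
    {"role": "assistant", "content": draft},
    {"role": "user",
     "content": "Rewrite that outline in plain English, cut to 150 words."},
]
print(ask(history))
```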

2 Likes

I feel the need to say that what you get in response to a prompt to an LLM is not in any way an answer (i.e. one arrived at by evaluating the question, assessing evidence, and retrieving facts). Rather, the model, built on vast training data, is used to predict what similar inputs should produce as a response. The “AI” is producing language that is in line with its training data and the correlations and patterns in language associated with it, based on the prompt you have given.

Where the “question” is mainly about language (e.g. proof-reading, or producing a new text in a consistent, perhaps new, tone), this can be very effective. Where the question is “real” (e.g. the answer depends on evaluation of circumstances, weighting of factors, discerning emotions or ideas, or seeking a new synthesis), you risk getting something that looks appropriate but may well not be, especially in an area where the training data did not include vast amounts of neutral material.

I see a place for the current AIs in helping us wrangle text, or possibly even generating text as a stepping-off point for writing (e.g. helping us come up with plausible names for characters or places, or to hit an intended tone), and in generating illustrations and similar images. There’s limited but perhaps some value in using them to “brainstorm”. Those are all examples where you are seeking something which is “in line” with the patterns and correlations that make up the model (e.g. “draw on the model to generate examples of birthday activities for pre-teen girls” - and that list might include some you personally do not know or have forgotten). It can’t “reality check” that the “answer” would be a good or helpful one, or even that what it is saying exists or is true.

That kind of thing can be useful, but it can’t match the hype (which is needed to sustain investment as very little of this is in any way sustainable otherwise) and you might or might not find it personally useful.

It’s also been extremely well known and documented since at least the 1950s that humans have a deeply embedded tendency to perceive and interact with anything that gives a hint of human characteristics as if it were actually human, and so to ascribe to it human motivations, emotional states, understanding, perception, etc. I find myself thanking ATMs sometimes.

Something you can reasonably “chat” with triggers that innate response very strongly, and that makes it very hard for us to evaluate these apps: it’s very interesting that people are so willing to adapt themselves to the foibles of AI apps, when similar levels of unreliability or unpredictability would make us instantly reject a non-AI app.

FWIW I am quite comfortable not paying extra for any AI: I just don’t think they will give me my money’s worth, and I don’t trust them with anything that matters. I am also quite comfortable using AI as a component within a system or other software (e.g. photo processing, transcription, or translation with Apple systems) where the model has been focused on specific tasks.

2 Likes

This is, at best, out of date. If you ask a commercial AI a question, it will not answer only from its training data; it will do a web search (or run other activities) and synthesise the answer from that. This is easily tested by asking it for today’s news.

Strictly speaking, the LLM will answer from whatever data you give it. If it has no data, it will answer solely from its training data. If you only give it access to your company wiki, it will answer from that. If you give it access to the web and a search function, it will use that.

And on top of that, most of them support MCP, which permits them to integrate with other services. This allows them not just to search the web but to access other services online or on your machine. For example, you can get Claude to search your computer for files, your inbox for messages, or your online Dropbox files for content. So if you ask it a question, it can check any or all of those sources.
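If you’re curious what such an integration looks like under the hood, here’s a minimal sketch of an MCP server using the official Python SDK (the mcp package on PyPI). The server name and tool are made-up examples for illustration; real connectors like the filesystem or Dropbox ones mentioned above are separate, existing projects:

```python
# Minimal sketch of an MCP server exposing one tool. An MCP-capable client
# (e.g. Claude Desktop) can launch this and call find_files during a chat.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("local-files")  # illustrative server name

@mcp.tool()
def find_files(pattern: str) -> list[str]:
    """Return home-directory file paths matching a glob pattern."""
    return [str(p) for p in Path.home().glob(pattern)]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for the client to call
```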

The answer you gave was true for the earliest iterations of ChatGPT, but this is a very fast-moving field and it really isn’t true for any of the commercial providers any more.

1 Like

The point is that current AIs, including those with web access or web search, are fundamentally incapable of answering a question - there is no “understanding” or “knowledge” with which to answer.

Web searches are used to update the currency of the model after initial training, as are all the prompts fed to it and the feedback associated with the results, and the model can be used to generate material that was never in the training data. But it’s still the case that prompts are being used to generate plausible outputs based on patterns and correlations in the model: there is little or no reality checking, or even an evaluation of the confidence that should be placed in the model’s output for a particular prompt.

To many AI researchers outside the AI companies themselves, this seems a fundamental and maybe insoluble problem. At the least, the modelling needs to include, or be placed within, a framework of “fundamental common sense”: such things as how to recognise and deal with people, objects, animals, etc., and how deep concepts like location, mass, direction, and even intention work. People do this confidently from an early age, and it allows us to distinguish fantasy, comedy, speculation, news, propaganda, etc., to navigate our way through complex environments and circumstances, and to recognise how new experiences and concepts are like and unlike those we already know. Humans do all this reasonably well, almost instantly, and without the need to be trained on terabytes of examples. LLMs don’t do any of it, except in so far as their training data contains patterns and correlations from our human experience of it.

2 Likes

I have no idea where you’re going with that response. There are a couple of factual errors - modern LLMs do fact checking and confidence evaluations (again, old ones didn’t), and typical large LLMs don’t train on their search results.

But most importantly, I don’t see why you’re arguing about an AI not knowing anything in the first place. I mean, neither does my computer, or a calculator for that matter, but that doesn’t mean they don’t have incredibly helpful functions.

We can write public health research queries in 30 minutes that would take days to produce manually, at comparable quality. So even if the AI doesn’t really “understand” what it’s writing and works in a different way to a human… what’s the problem?

2 Likes

There’s not a problem with the AI working as it does. There are things a large model can do that are helpful, responsible, and productive, and they are what they are.

The problem is with people ascribing characteristics to AI that are delusional or incorrect and not recognising just how pervasive that human tendency is.

As OpenAI states: “ChatGPT predicts text based on patterns learned during training. It can generate plausible-sounding information, but sometimes produces incorrect or fabricated statements”. Incorporating web search and user feedback about errors into ChatGPT and “deep research” algorithms can help, but it simply doesn’t change the fundamental nature of predicting new text by applying a model: you are “in the model”, not in reality the way any human will be.

You don’t have to look very far to see examples where people are relying on fundamentally unreliable AI systems for important purposes.

To get back to the original question of this thread: it would be reasonable to cautiously use AI apps and systems where you have tasks or problems that the app helps you to solve. It’s equally fine not to worry at all if you don’t, and to recognise that there are quite a number of people heavily invested in making you think you have to.

1 Like

Can you give me an example?

Take any question/issue you have researched.

Go to Perplexity or Claude 4 and tell it: “These are my thoughts - can you add any suggestions?”

If you do not routinely get back some good ideas from doing this then I will be stunned.

That applies to anything from your choice of hotel on your next trip to academic nuances in your PhD thesis.

This was a spectacular response.

I work in environmental, health, and safety, which requires ingesting thousands of data points. AI has been especially helpful in breaking down data for me and analyzing hot spots I need to be aware of.

For example, I fed 6 models a spreadsheet with incident details (no personal data) and asked what their insights were. For fun, I fed them responses from the other models to see what happens!

So it really depends on your use case and needs. I will never trust one model, but if I can get a consensus from 6, it’s a good place to start.

Me too, Chris! Love that 🙂

Though sometimes, when I read all about the built-in biases that we humans have, I wonder if we might be better off if the ATMs were in charge.

One of the things I love about using ChatGPT is that it can argue different points of view - if I ask it to. Often there is no right answer.

In fact, there are often two right answers that are exact opposites, and we are totally blind to one of them, so having a tool to help us broaden our minds is useful.

(A quantitative example of two opposites: the square root of 49. Most people are only aware of one answer, 7. The other answer, its opposite, −7, is very important too.)

2 Likes

Something of an aside: For anyone interested in probing the differences between human thinking and what LLMs do, I highly recommend listening to the Santa Fe Institute’s podcast series on intelligence. Here’s the description from the Institute’s website:

Right now, AI is having a moment — and it’s not the first time grand predictions about the potential of machines are being made. But, what does it really mean to say something like ChatGPT is “intelligent”? What exactly is intelligence?

In this season of the Complexity podcast, The Nature of Intelligence, we’ll explore this question through conversations with cognitive and neuroscientists, animal cognition researchers, and AI experts in six episodes. Together, we’ll investigate the complexities of human intelligence, how it compares to that of other species, and where AI fits in. We’ll dive into the relationship between language and thought, examine AI’s limitations, and ask: Could machines ever truly be like us?

I think about what I learned from this series every time I say “please” and “thank you” to an LLM—something I would never say to my AI-powered image processing tools. 😉

1 Like