AI podcast recommendations?

Are there any AI-related podcasts that are accessible like MPU and the other Relay FM podcasts?

One that doesn’t get too deep in the weeds whilst providing practical uses and tools like MPU does?

Thanks

I’m not aware of any, but I’ve been enjoying a new podcast that discusses AI in general. The first two episodes were about one of IBM’s early attempts, “Watson,” the computer that played Jeopardy. I’ve not listened to the third yet, but it’s about self-driving cars.

Lex Fridman usually talks about AI. He can get really technical on anything computer-science related, but he is basically a pop-sci technologist.

Just reviving this thread to ask if anyone has found a podcast that covers the general ideas of what AI is and how to get better at using it.

Hard Fork from The NY Times is a technology podcast that follows stories on AI. And there’s all kinds of info on YouTube.

Just search for: AI How To

Depending on your politics, you might enjoy Your Undivided Attention (which largely talks about AI regulation, societal impacts, and so on): Your Undivided Attention Podcast — Apple Podcasts

FWIW, I subscribed to Hard Fork for the better part of a year. I just unsubscribed. They swallow all of the hype without making any effort to counterbalance it.

I know that in the software field, GenAI is creating a large body of poorly written code: basically, large-scale duplication. The damage done in 2025 will take years to undo. Hard Fork won’t cover this.

@MevetS - Gary Marcus is doing a good job.

Two things are really standing out for me:

As a result, I won’t read or listen to anything generated by AI. I will only consume human-created material where I trust that the human is acting thoughtfully.

It’s no longer on my podcast app either. I’m keeping up with the headlines with podcasts like Techmeme Ride Home, then researching any items that interest me.

I’m finding discussions about how the legal, medical, and other fields are starting to use AI more interesting than the nuts and bolts of the latest tech.

Likely not what the OP was interested in, but if anyone is looking for a thorough analysis of weekly AI news (and its implications for society and our future), I find The Artificial Intelligence Show to be really insightful and often eye-opening.

The hosts do a great job of putting things in perspective and reading between the lines of the press release hype cycles. They had an interesting discussion about Apple Intelligence in a recent episode.

It can be a bit dry sometimes, but honestly I appreciate calm and measured commentary on this particular topic.

If the goal is to learn about LLMs, then, while these aren’t podcasts, I suggest:

To cut through the hype.

To learn how to use LLMs effectively and locally.

A daily “news” podcast about AI, presented by two bots.

Funny thing: I use these tools daily, and I tend to assume they’re wrong. The errors come from the training data (mistakes on the internet) and from the built-in randomness. Net result: I wouldn’t trust a bot-created podcast.

I use these LLMs in places where I can spot the errors, where I’m willing to take the time to spot the errors, or where the errors don’t matter.

A podcast misses on all three of those points.
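The “built-in randomness” mentioned above is, concretely, temperature sampling: the model draws each next token from a probability distribution rather than always taking the top pick. Here is a toy sketch of that mechanism; the token scores are made up for illustration and not from any real model:

```python
import math
import random

def sample(logits, temperature=1.0, seed=None):
    """Draw one token from softmax(logits / temperature).

    Higher temperature flattens the distribution, so less likely
    (and more error-prone) tokens get picked more often; as the
    temperature approaches 0 this becomes plain argmax.
    """
    rng = random.Random(seed)
    scaled = [score / temperature for score in logits.values()]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    return rng.choices(list(logits), weights=weights, k=1)[0]

# Made-up next-token scores after a prompt like "The capital of France is"
logits = {"Paris": 5.0, "Lyon": 2.0, "Berlin": 1.0}

print(sample(logits, temperature=0.1, seed=0))  # near-greedy: picks "Paris"
print(sample(logits, temperature=5.0, seed=0))  # flattened: any token possible
```

Even with everything else held fixed, two runs at nonzero temperature can give different answers, which is part of why verifying output matters.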

“The podcast, if you hadn’t already guessed, was AI generated bottom-to-top by Google’s new AI app NotebookLM.

The Google Labs product lets you import a wide range of content, press a button labelled “Audio Overview”, and hey presto, out pops a magic Deep Dive episode with your content transformed into an animated podcast discussion. . . ."

Yes, that was my working assumption.

For the same reasons I already mentioned: human content over generated content.

The way I understand it, NotebookLM works from my content, with the AI’s presentation.

No matter, I’m finding the podcast a good source of info for further research. I thought others might find it interesting.

I understand it is NotebookLM. The risk of errors still exists. NotebookLM is just fancy RAG (retrieval-augmented generation). I’ve used it on occasion, and sometimes it made mistakes. In one case it broke out of the sandbox of RAG and answered with random stuff from the internet.

I use these tools on a daily basis. They’re in the low-trust sandbox.
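For anyone curious what “fancy RAG” means in practice: retrieval-augmented generation ranks your uploaded sources against the question and pastes the best matches into the prompt as context. A toy sketch follows; real systems use embedding similarity rather than word overlap, and the sources here are invented:

```python
def overlap(query, passage):
    # Toy relevance score: the number of words the query and passage
    # share. Real RAG systems use embedding similarity instead.
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query, sources, k=2):
    # Rank the uploaded sources by relevance and keep the top k.
    return sorted(sources, key=lambda s: overlap(query, s), reverse=True)[:k]

def build_prompt(query, sources):
    context = "\n".join(retrieve(query, sources))
    # The "sandbox": the model is instructed to stay inside the
    # retrieved context, but nothing physically prevents it from
    # answering out of its training data instead -- which is the
    # escape described above.
    return ("Answer using ONLY the context below.\n"
            f"Context:\n{context}\n\nQuestion: {query}")

sources = [
    "Pair programming improves code review quality.",
    "Watson was IBM's Jeopardy-playing system.",
]
print(build_prompt("Does pair programming help code quality?", sources))
```

The key point: the grounding is a prompt instruction, not a hard guarantee, so hallucinations can still slip through.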

Just to make sure I understood correctly: are you saying that even if I’m only using content that I have put into NotebookLM, it can still make mistakes? That it may pull in other stuff from the internet?

I’ve only looked at it a couple of times. All I know is what I’ve read:

“Information Not in Sources: NotebookLM is designed to answer questions based on the information provided in your uploaded sources. If the answer isn’t in the source material, it won’t be able to provide a response.” Frequently Asked Questions

But if I upload data that includes a URL, “the text content of the given webpage will be scraped for use as a source;”

Please remember that all LLMs hallucinate at times; it’s built into the model.

Last year, I documented an experiment: NotebookLLM is not yet a miracle worker | Agile Pain Reliefs Experimental Blog. Then I did further work with it, loading 20+ sources on pair programming.

More sources were better, but even then I could tell some answers weren’t based on my articles.

@WayneG of course the FAQ sounds good. By the way I have a used car for sale. FAQ says low mileage. It doesn’t mention sources of error.

Be careful what you trust.

Use NotebookLM by all means; however, if it suggests something interesting, then you need to verify its claim.

Understood. I’ve seen and taught about hallucinations.

I’m loading only Google Docs into my notebook, no videos or URLs. I will run some more tests to see if I can get it to hallucinate, but it still surprises me if it’s only(?) supposed to pull from the content in the notebook, which doesn’t even reference outside sources.

To be clear, I JUST started playing with it yesterday and haven’t read about it at all, so my knowledge is very limited.