Robot Assistant Field Guide

My guess is he was using dictation. :joy:

1 Like

haha … I really need to slow down with that dictation button. Sorry, guys.

6 Likes

Don’t apologise! That was the best thing I’ve seen on the internet this week! I promise you that I will write a character named Amanda Fish into my next book.

Nice work @MacSparky !

3 Likes

Amanda Fish should be our mascot.

9 Likes

I will add my vote for buying the RAFG. If you have no interest in using AI beyond prompting, then of course don’t spend your money on the field guide. I am interested in taking it further and having it help with my “donkey work,” as David eloquently calls it. I am not a programmer, but Cowork is a game changer, and clearly Anthropic is pushing a lot of resources into this area of Claude. David’s guide is written by someone I trust, someone who shares my views on security and privacy and who has not let me down in the past. I have bookmarked more Claude guides and social media posts than I will ever read, but they are from unverified sources.

I bought this because I have a need and desire to learn about agentic work and I like learning it from a source I trust.

Nope, still don’t know. Or get the reference. Any help?

Teach a man to fish.

It’s a proverb in multiple languages and cultures.

“Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime.”

I’m semi-LLM-hostile, and I think it’s reasonable and well done, though I’m only partway through.

It’s a bit like learning to cook: you learn basic techniques, then some recipes where you could use them, and then some possible variations.

2 Likes

Oh right, yes I know that reference. :grinning:

My mind got stuck on the “who is Amanda?”

Thank you.

2 Likes

You weren’t the only one - so thank you both!

1 Like

I am starting to think I am in an abusive relationship with Claude Code. It keeps doing stupid things, then promising me it will never do them again, and then it does them again. I say that I don’t trust it, and it promises to do better and never do the bad things again.

(I’m not making light of this, because I know people who have been in abusive relationships, and it’s not funny. This does feel similar, and I can’t think of a better way to describe it.)

2 Likes

We are a long way from Artificial General Intelligence (AGI). I’ve never been worried about AI taking over and destroying mankind. :slightly_smiling_face:

I’m curious, when this happens, do you observe Claude updating its CLAUDE.md file or some other “memory” file that it ought to consult between sessions? If not, you might want to tell it to do that.
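For anyone who hasn’t tried this: Claude Code reads a CLAUDE.md file in the project root at the start of each session, so rules recorded there persist between sessions. A minimal sketch might look like the following (the specific rules and dates are hypothetical examples, not from David’s guide):

```markdown
# CLAUDE.md — project memory, read at the start of every session

## Hard rules
- Ask before running any destructive command (deleting files, force-pushing, dropping tables).
- Run the test suite before claiming a fix works.

## Lessons learned
- 2025-03-02: Deleted a file instead of moving it. Use `git mv` for renames.
- 2025-03-10: Rewrote a config file from scratch instead of editing it. Edit in place.
```

When it misbehaves, you can also tell it to append the incident to the “Lessons learned” section itself, so the promise to do better actually survives into the next session instead of evaporating when the conversation ends.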

Unless of course humans deliberately give autonomous AI access to lethal weapons of war.

No chance of that happening?

1 Like

In which case that is a human problem, not an intrinsic machine one. I’m far more concerned with what humans will do to each other, with or without machines, than I am with what machines will do. :wink:

2 Likes

Give this a listen and see if it changes your mind… Why the AI Race Is Leaving Hum… - On with Kara Swisher - Apple Podcasts

I read a book of fiction that dealt with that possibility. Good adventure story, dealing with technology that exists today.

2 Likes

Thanks for the link to the podcast. It is far more thoughtful and thorough than most I have encountered.

That said, I believe my original point holds. Harris’s most serious concerns (the concentration of wealth, the failure of governments to invest in their citizens, the collapse of labor markets, and the absence of meaningful regulation), along with the predicted consequences, are all the products of human choices and institutional failures, not evidence of machines acting against human interests on their own initiative. That distinction is central to my argument.

Even the examples of models that exhibit apparent “self-preservation” behavior are better understood as the result of human-designed and human-deployed systems than as evidence of genuine machine agency.

Artificial intelligence has no self. It is not conscious and therefore possesses neither volition nor moral framework. It has programming. It has no will, and it has no independent motive.

What threatens humanity is what has always threatened it: humanity itself, with its immoral appetites (greed, for example), the concentration of power in the hands of the few, and the persistent tendency of institutions to serve the powerful rather than the common good. Social media algorithms did not choose to harm society. People designed the systems, wrote the algorithms, prioritized profit over people, and declined to govern what they had created. The AI story is following the same pattern, for the same reasons.

The danger, in other words, is not AGI. It is thoroughly human.

That said, I did not intend to start a debate on the issue. :slightly_smiling_face: So I’ll leave it at that. Thanks again for the link to the podcast.

2 Likes

The lesson about teaching Amanda Fish is to be careful about trusting AI (in this case speech recognition).

Whelp, I just bought this and started watching it. I’m not sure how much I’ll get out of it, but I’m hoping it will spark some ideas at least.
My own setup is a good bit different from David’s, with some areas of overlap. I do use Obsidian and Claude Code & Cowork already, so that’s good. But I split my time between Windows and macOS, so that introduces some extra complexity. And the apps/services I use are generally not the ones that work best with Claude. (I use Firefox rather than Chrome, Fastmail rather than Gmail, and so on.)
I have a lot of ideas in my head about how I could use Cowork, and I’ve tried some of them already, but I feel like I could be doing a lot more.