“Anthropic is giving Claude the ability to use your Mac for you”

“Anthropic is introducing the ability for Claude to control your Mac. The feature arrives today.”

In an unlisted YouTube video, Anthropic shows off Claude computer usage with Cowork and Code:

“In Claude Cowork and Claude Code, you can now let Claude use your computer to handle tasks. It can point, click, and navigate like you would to do everything from opening and editing files to handling complex software tasks. And with Dispatch, you can instruct Claude from your phone.”


I don’t like the fact that one’s computer is being controlled remotely. There is enough power on the Mac to do this fully locally, and I would hope that “Apple Intelligence” will finally head there. This, which seems to be what MacSparky is excited about, is just a stopgap measure. We’ve got all those idling cores; why not use them? Why pay to use someone else’s?

2 Likes

I agree completely. I will be waiting for local AI processing before I allow AI access to my files and computer. McKinsey, which has some of the best security experts, has been vulnerable for over two years to a simple hack that allows its AI to be taken over and all its data leaked through the agent (including the data it accesses via APIs from external sources). If that happens to me, I lose my job!

1 Like

The whole idea of letting an AI agent change anything on your computer unsupervised, when AI is known to have a consistent error rate, seems rather ‘brave’ to me.

That’s ‘brave’ in the ‘suicidally reckless’ sense (or ‘Yes, Prime Minister’ sense, if you’re from the UK…). It’s bad enough when an operating system does something you don’t expect, never mind giving an over-confident pattern-matcher the keys to your data.

I suppose I just don’t understand why anybody would want to do this badly enough to risk the inevitable consequences.

3 Likes

I am in the same camp, but from what I see, non-technical folks don’t seem to understand the risk. Luckily I come from a technical background, and it is obvious to me that deploying a statistical prediction model on sensitive or important data is asking for data loss and inconsistency. Even with careful backups, do folks really want to risk all their personal data being leaked or corrupted?

For example, lots of my undergrad students have managed to completely wreck their laptops by installing OpenClaw (requiring a nuke and pave). They didn’t even consider the risks and only considered the benefits.

1 Like

If Claude wants to use my computer, I’m all for it.

Of course, Claude runs in leased data centers, on computers it either leased or purchased. It completely understands you must pay for compute cycles.

I figure I’ll set up a bunch of virtualbox systems and get rich off of what Claude pays me for the resources it’s using.

I’ll be RICH!!!

3 Likes

I’m wondering what businesses will choose to do. Will they allow their data to be processed on a device that they haven’t locked down? Or will they process it on their own servers or with a trusted cloud provider?

I would think that they would treat AI agents as employees with their own authentication, etc. rather than that of the user.

Hah, that’s great. We could solve the data-center energy crisis if we all let AI rent our cycles in a sandboxed space. You’ve saved the world!

2 Likes