Someone in the Robot Builders Club mentioned that occasionally they’ll say “give me five ideas to improve this skill.” Which I thought was brilliant and started using, a lot. Like, pretty much every task. It’s not uncommon for me to adopt every recommendation.
The other day I tried “what would you do with this skill/system (whatever) if I let you do whatever you wanted to make it more efficient, more accurate, and just work better and more easily for me?” And the results of this were even better.
Then I tried a new one: “Check the Skills and skill index for relevant content. Then analyze the skills and tell me what you would do to make them more efficient, more accurate, and easier and better for me to use. What would you do if you could do whatever you wanted? Don’t do it, just tell me.”
“Give me five ideas to improve this skill” is the most focused. For my morning briefing, for example, it suggested clustering tasks by tag for rescheduling, so I could move a set of three tasks related to a project to Wednesday because I knew I wasn’t going to work on it today. Or creating a “weekend” mode that would skip certain work domains entirely.
“What would you do with this skill/system…” got me ideas with a larger scope. More rethinking it than improving what’s there. It made suggestions about how the whole thing was structured.
“Check the Skills and skill index…” got me a wealth of suggestions, some off base, but a few that were ambitious yet could be genuinely helpful. Like automating document triage when it encounters a new folder. Or even suggesting I get a UPS in case the power goes out, plus an off-site backup.
I've been experimenting with a regular skill that improves other skills. It runs daily at the end of the working day, reviews all my sessions where a skill was used, finds issues, and creates improvements for the skill. For example, this is one from today:
“The skill’s Step 5 says ‘Use the Bash tool to write the file’ but the Bash sandbox can’t access the user’s home directory. The session found the working approach via osascript, but it took 7 attempts before it tried. Let me edit the skill and package it.”
It’s really interesting to watch. It turns out quite a few of my skills weren’t working 100% first time round, but Claude was able to silently fix the problem on the fly. The catch is that the next time the skill runs, it tries the faulty approach again, because that’s what the skill instructs.
As a user you often don’t know about this stuff because it’s hidden in the messages of the session. It’s also more useful than reviewing the skill directly, because there are some issues Claude doesn’t know about until it actually tries to run the skill.
Just a little experiment. I’m going to leave it running for a few weeks and see what happens.
I'd like you to install a skill called session-skill-review. It runs at the end of each working day, reviews all the Cowork sessions from that day, spots any friction in how skills performed, and — where it's confident about the fix — packages an updated version of the skill and surfaces a "Save skill" button directly in this chat for you to install with one click. It also keeps a running JSONL log of observations.
Please do the following:
Before packaging, ask me two things:
Which of my skills should be treated as personal (safe to auto-improve)? These are skills you own and maintain yourself, as opposed to shared org-wide skills. List any you're aware of from my installed skills as a starting point.
Where should the log file be saved? It will be a .jsonl file — a path on your Mac or a cloud-synced folder works well. Suggest a sensible default if I'm not sure.
Take the SKILL.md below, update the personal skills list and log file path with my answers, then package it and present it to me. The "Save skill" button will appear here in this chat — I'll click it to install.
Once I've installed it, help me set it up as a scheduled task to run automatically at around 5:30pm on weekdays.
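For anyone wiring this up outside a built-in scheduler, the timing above maps onto an ordinary cron entry. The `run-skill` command and its path are purely illustrative placeholders, not a real CLI; whatever actually triggers the skill on your machine goes there:

```shell
# crontab fields: minute hour day-of-month month day-of-week
# 5:30pm, Monday through Friday (1-5)
30 17 * * 1-5 /usr/local/bin/run-skill session-skill-review
```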
The full SKILL.md is linked here; it's a bit too long to paste.
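As a footnote on the JSONL log of observations: JSONL is just one JSON object per line, appended over time, which is why it suits a daily skill like this. Here's a minimal sketch of writing and reading such a log in Python; the field names and values are illustrative, not the skill's actual schema:

```python
import json
import os
import tempfile

# Hypothetical shape of one observation entry (fields are made up).
observation = {
    "date": "2025-01-15",
    "skill": "morning-briefing",
    "issue": "Step 5 says to use the Bash tool, but the sandbox can't reach the home directory",
    "fix": "switched to osascript",
}

path = os.path.join(tempfile.gettempdir(), "skill-observations.jsonl")

# Appending keeps the log running across days: one JSON object per line.
with open(path, "a") as f:
    f.write(json.dumps(observation) + "\n")

# Reading it back is one json.loads per line.
with open(path) as f:
    entries = [json.loads(line) for line in f]
```

Because each line is self-contained, the review skill can append a new observation without parsing or rewriting anything already in the file.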