A quick note: comments about ‘doing AI’ can be about either local LLMs (see below) or Apple’s cloud AI, which will soon be Gemini-backed. For the latter, I can’t imagine the M5 matters much. For local LLMs, it’s early to buy an M5 for AI: the processor may well be better designed for AI workloads, but RAM is going to be the limiting factor.
I do a large chunk of my work with local LLMs, and they eat RAM for breakfast. I settled on an M3 Max with 64GB of RAM (see: M3 Max Memory and Bandwidth), and for current-generation models it’s good, but only barely.
I can run Qwen3 30B locally, but I can’t give it enough context length for it to be useful for a lot of tasks.
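To see why RAM, not compute, is the wall, here is a rough back-of-envelope estimate of what a quantized 30B model plus its KV cache costs in memory. All the config numbers (layer count, KV heads, head dimension) are illustrative assumptions, not Qwen3’s actual architecture:

```python
# Back-of-envelope RAM estimate for running an LLM locally.
# Config values below are illustrative, not exact Qwen3 specs.

def weights_gb(params_billions: float, bits_per_weight: int) -> float:
    """Size of the quantized weights in GB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                context_len: int, bytes_per_elem: int = 2) -> float:
    """KV cache size: 2 (K and V) * layers * kv_heads * head_dim * tokens."""
    return 2 * layers * kv_heads * head_dim * context_len * bytes_per_elem / 1e9

# Assumed: a 30B model at 4-bit quantization with a hypothetical
# 48 layers, 8 KV heads, head_dim 128, fp16 cache, 128k context.
w = weights_gb(30, 4)                    # ~15 GB of weights
kv = kv_cache_gb(48, 8, 128, 128_000)    # ~25 GB of KV cache

print(f"weights ~{w:.0f} GB + KV cache ~{kv:.0f} GB = ~{w + kv:.0f} GB")
```

Under these assumptions the model alone is fine, but a long context pushes the total toward 40 GB before the OS and apps get anything, which is why 64GB ends up feeling tight and why shrinking the context window is often the only lever left.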
Most of my use is via Claude and Claude Code. Since those are cloud-based, my overprovisioned M3 Max isn’t all that important.