This is an excellent summary of where things stand. Having researched these tools and their underlying concepts, and having worked with them a lot recently, I don’t think the catastrophizing happening elsewhere in this thread is warranted.
However, I disagree with this take:
Sure, LLMs won’t replace high-quality original thought anytime soon (or, more specifically, abduction and inference to the best explanation). But they do make it much easier and faster to create certain things, and to create them better. Someone skilled at working with LLMs can manage, delegate to, and supervise them as assistants to do some really cool things. For an example of this, listen to the AI segment in the recent episode of MPU with Jeff Richardson.
As a result, support people will likely be able to help more people in less time, illustrators will be able to iterate on concepts at a greater scale, and writers will be able to write and edit more. In turn, organizations will likely need fewer of each of these people. Of course, the flip side is that it will also be easier to build smaller, more focused organizations.
This kind of democratization tends to increase both volatility and capacity: capacity, because we can collectively do more; volatility, because anyone can do more. Hopefully the capacity gains will be enough to counter the more dangerous kinds of volatility.
Note that I’m not talking about the democratization of knowledge here, but knowledge work. I think it’s a subtle but important distinction.