LLMs as posters on this forum

I so wish that were the case. In theory, an illustrator can generate more work. In practice, someone with no skills will type into a little box and get something “good enough”, a hundred times over, before they’d hire that illustrator. And between that potential client and the illustrator will be a surplus of hustlers offering to type into that box for them, for a fraction of what the illustrator would charge.

I know a fashion photographer who was just replaced with AI by a good client, even with the weird hands and other artifacts. Photographer, models, stylists, makeup artists, assistants… all replaced with words in a box.

What can gently be called democratization might better be described as value extraction.

The problem is the scale. AI will cut across all knowledge work, to varying degrees and on varying timelines, and the number of new roles it creates will pale in comparison. There will be many, just not enough.

Truck driver is the most common job in ~30 states. That’s not going away tomorrow, but over time, you betcha. Warehouse work? Ditto. The trades will take a drubbing too, maybe not as much from AI directly as from being flooded with displaced workers. This is fundamentally different from previous technological advances.

I’m jaded; my experience with tech, VC, and PE firms doesn’t afford one much optimism. It just feels like there’s a whole lot of whistling in the dark going on. I do hope I’m wrong.

4 Likes

Exactly. It no longer requires a special caste for the task. Seems like democratization to me.

Once upon a time, photography was a skill possessed by few. A complicated, arcane art. I think most folks are happy that is no longer the case.

And that’s just one example of new tools disrupting the status quo. And the then-current ‘experts’ (I need a better word here) in the field almost always rail against the hoi polloi joining the ranks of the privileged few.

3 Likes

It’s both/and. The work will be democratized, and because of that democratization all but the most exceptional versions of that work will be significantly devalued.

Jobs of all sorts exist because it’s worth paying somebody X dollars for Y value, in hopes of realizing a return on that investment. At the point where a person’s effort of an hour or two can be replaced by $0.10 in AI compute cycles, the odds of that job disappearing go up significantly.

This isn’t a huge problem when it happens to a small number of people, and there’s plenty of work to reskill them into. The concern with AI is that it’s less likely to be a small number, and we’re not sure what work those people would do.

The question is one of what we do in that situation.

2 Likes

Protests against, and fears of, new technology have been going on for at least 300 years, and yet you would be hard-pressed to find a large portion of the populace who cannot find work. Technology displaces, but it does not render people unemployable, nor does it prevent them from finding new work. Uncomfortable, yes. Dire catastrophe, no.

Unless, of course, it truly is different this time. I mean, hey, there’s a first time for everything, right?

2 Likes

This is why I said above that I am more concerned about humans using the tech for evil, not machines operating on their own. :blush:

Indeed, this is precisely my point. I’m far less concerned about a “Terminator” scenario than I am about the already existing, and likely to be accelerated, evil use of AI by evil people.

2 Likes

I think you miss my point, which is that AI looks to be a force multiplier, on a scale previously unknown, where typing text into a little box on a computer screen will send an AI-enhanced machine to deal out death and destruction. Previously people had to both give the order and deliver the death and destruction.

Although, as some will point out, that has already started to change.

I grant you that, and I claim no expertise on this matter. That said, I have more immediate concerns about the human condition and predicament than a hypothetical singularity. :slightly_smiling_face:

And here I am, after being off forum for many, many months — and knee-jerk clicking on this post wondering “what do they have against postgraduate law students posting on MPU”… Been a long day. :sunglasses:

7 Likes

We so need a laugh reaction here!

1 Like

This is already happening. One can find video online where a “loitering munition” drone engages, pursues, and finally hits a single desperate Russian soldier. This is way below the level of intelligence that @Bmosbacker fears, but the sheer horror of the scene made me think that these types of weapons should be the subject of international treaties, like nuclear weapons, which I think is in line with your argument of AI as a force multiplier.

Not to mention the ethics angle. Not sure if there is such a thing as “the ethics of wartime”, but who is immediately responsible for that loss of life? Nobody hit a button to detonate the explosive payload. Nobody entered a set of coordinates. Some operator just “deployed” the device. Some engineer just “coded” the software (and given what we are seeing, soon the software will have been coded with the help of another piece of software).

2 Likes

Indeed. Since the U.S. invasions in the Middle East post-9/11, there have been many reports of pilotless drones going beyond what a human might have intended, and it would be naive to think we hear about most of this stuff, given that military matters are mostly secret. That isn’t to say a human wouldn’t have made the decision to kill civilians/children/whoever, but the key point is that no human actually made a choice, and drones aren’t given specific commands to do that (I hope). They are simply “following code”, whether or not that was the intent behind the instruction.

In any case, you don’t have to look only to the theatre of war to find this stuff. Computers already run code every day that ruins people’s lives, and humans don’t intercede to stop it. We have no idea how many lives are ruined because automated applications for, e.g., state support, insurance, or medical help are refused. The UK alone has several ongoing scandals, which only come to light because humans fight to oppose those letting the computers make decisions, and I seriously doubt it’s the only country facing these problems.

The U.K. Post Office scandal alone ruined the lives of over 900 families and still isn’t resolved. Whilst blame for this rests entirely with the U.K. government (they decided to prosecute, not the computer software), the direction of travel with LLMs is that we’ll reach a point where the entire process is left to the software, and we know this is already the case sometimes.

I’d also like to remind you all that we’re only sitting here today able to debate this stuff because every time there was a computer error at NORAD during the Cold War (which happened multiple times), a human interceded and stopped preparations for retaliation on a nuclear strike that hadn’t happened. Had the technology existed that didn’t require a human to check the outputs from the monitoring systems, we’d all be eating rats in a cave right now. It is quite naive to think humans won’t just blindly hand over control to technology, given that we do this many times each day already.

Being completely clear, the Royal Mail (the entity which owns the Post Office and franchises its branches; they also deliver mail in the UK) decided to prosecute the affected postmasters, not the UK government. They are officially separate entities (i.e. the Royal Mail is not a public utility anymore).

For the avoidance of doubt, as this thread is about LLMs: between the Royal Mail and its outsourced software providers, the coding was wrong. This had nothing to do with LLMs; it seems to have been just a really poor software system.

2 Likes

Your clarification is actually incorrect. Royal Mail only went private in 2013, and the majority of prosecutions predate this (prosecutions started in 1999 and continued until 2015). Royal Mail at that time was a public service, and as such it was civil servants (Royal Mail staff, but they worked for a public body) that made the decision to seek criminal charges. In any case, you’re actually doubly incorrect because the decision whether to take a criminal case to court rests with the Crown Prosecution Service, which was and is a government department. It is the Crown Prosecution Service’s job to determine whether criminal charges should be issued (for any crime). So there were at least two public bodies here that made a choice not to question if the software had a fault (even though people were already flagging concerns).

Your second clarification about LLMs is correct, but it wasn’t the point of my post, and I didn’t say that the Royal Mail scandal was due to LLMs. My point was (and still is) that blindly following software and assuming it is infallible already ruins lives, and the situation will only get worse as that software becomes more sophisticated. As a society we’ve already demonstrated that we’re not very good at questioning the veracity of what a computer tells us, and there’s little evidence this is improving.

1 Like

I stand corrected on the Royal Mail.