Excuses, excuses...
A common line I hear when I bemoan the unending and unwarranted hype for Large Language Models (LLMs) falsely labelled as ‘AI’ is: “well, yeah, it’s shitty, but it’s already happened now, so you may as well embrace it or you’ll get left behind”.
The ‘well, it’s already happened now’ retort isn’t so much an argument as it is a wonky self-justification for continued use of a tool that hurts human beings and damages the environment. It’s self-given permission to engage with and encourage the large-scale theft and laundering of data because it benefits you.
Imagine ‘well, it’s already happened now’ applied to any other shitty behaviour we see in the world. What if you applied that reasoning to fraud, violent crime, human trafficking, or sexual assault?
I wonder what the world would look like if, when Nazi Germany invaded Poland in 1939, the rest of Europe stood back and said ‘well, that’s not good, but it’s already happened now, so we may as well just let them get on with it, hey?’
Okay, I’m making my point with an example of extreme whataboutism, but what I’m getting at is that, by and large (a few current affairs aside…), the human race will stand up and say ‘hold the fuck up’ when obviously shitty things happen to groups of people.
Make no mistake, LinkedIn is a cesspool - a social network full of immoral clout chasers, conmen, and middle managers who believe themselves philosophers - but many of my connections are good people who will take action and strive to make changes when bad things happen to others.
And yet, when creators, artists, writers, musicians, and other victims of Silicon Valley’s widespread data laundering say ‘please don’t use Generative AI, it hurts us’, those same good people are more than happy to whip out the ‘well, it’s already happened now’ excuse and keep on using it.
When challenged, the excuse is always propped up with water-thin explanations of how it’s not actually plagiarism, really - reasoning trotted out despite the fact that LLMs use other people’s commercial work without permission for financial profit, and that 60% of their responses include actual plagiarism. Hell, the New York Times is suing OpenAI and Microsoft over generated content that rips them off verbatim, and even that fails to make people wary of using the same technology.
These aren’t people without moral convictions. Some are the kind of people who will harangue me for an entire year over sometimes getting recyclable disposable cups from coffee shops, yet are happy to burn through the world’s drinkable water supply by proxy so they don’t have to hire a professional or spend a few minutes coming up with their own ideas.
Those same people are happy to keep using LLMs despite the many, many instances of providers admitting in plain sight that their business model is built on shitty practices that hurt people.
I imagine some people I know and respect will see themselves in this piece and might be annoyed at me. If that’s you, well, firstly: I’ve already done it now, so you may as well let me get on with it. Secondly, I want you to ask yourself some questions…
How many more times:
- Do you need to hear that OpenAI wouldn’t be able to operate if they had to pay their dues?
- Does an image generator need to create an image almost identical to one drawn by a real artist?
- Do alarm bells not ring when you read that OpenAI destroyed its training sources?
- Do LLM hallucinations have to publicly embarrass or financially impact a company?
- Do you need to find out that LLM operators lie to you, as OpenAI did by heavily editing their Sora text-to-video output because they were embarrassed by the results?
Look, I think I’ve become addicted to using questions as my medium, so I’m gonna keep that rolling for the finish.
Why won’t you believe LLM operators when they tell you who they are? Why are you not angry about the smoke and mirrors and the potential damage to your reputation? Why are you complicit in hurting people?
Why is this where you draw your line on shitty things happening to people? Why is this where you resort to ‘well, it’s already happened’?