
Why blog in an (AI) apocalypse?

On AI and how it’s been conjuring existential dread in a lot of creatives.
Photo by Igor Omilaev on Unsplash

2023 is over, my Dutch neighbors have been throwing firecrackers like we're nearing the end times, and all I want (for Christmas) is to be able to say that I have a blog up and running. The Web has changed so much since blogging's heyday, 20 or 30 years ago, and there's no way you could ignore the electric elephant in the room - ChatGPT, which took off, accelerating content creation and the making of excuses to not pay writers their worth.

I’ve been absolutely tantalized by the possibilities of AI, but I've since sobered up to the sociopolitical effects of how the technology is used. I've also deployed and maintained a few ML models at my work as a Data Engineer, so I have some insight beyond the hype.

This post is not explicitly about blogging. It’s about AI and how it’s been conjuring existential dread in a lot of creatives, as well as the misconceptions people have about this new technology. Maybe that gets some clicks.

Disclaimer: for better or worse, I've successfully restrained myself from consulting ChatGPT or other LLMs while writing this (Large Language Models - I'll be using the terms "AI" and "LLM" interchangeably here, sorry).

State of the (Con)Art

AI has been on everyone's mind this year (along with all the world powers rattling sabers, with disastrous results), which has brought a lot of charlatans and grifters out of the woodwork. I get the feeling that they're probably pivoting over from crypto.

My YouTube feed circa April 2023.

I'd been meaning to write an article titled "There is no AI" (but The New Yorker beat me to it), trying to work out whether an LLM has a model of the world or not, in order to evaluate if it's "the real thing".

Since then, I've come to believe that it's a "por qué no los dos" ("why not both") situation, and one that still doesn't answer the question of sapience, nor whether it would ever have a will of its own. Outside, of course, of the corporate interests that trained it, but I digress.

The sincerest form of flattery

Whether or not AI is about to become sentient (or if that story is just successful marketing) doesn't matter. Anyone with basic computer literacy can create a ChatGPT account and have their mind blown by having pretty realistic conversations with it. The tool can then whip up whole essays, or at least outlines, to help them kickstart their work. It gets people excited immediately, while somehow also creating a feeling of unease.

It makes you question what exactly "creativity" is, since the LLM is basically a very powerful text prediction engine, trained on a massive but undisclosed corpus, which then magically (to mere mortals) produces its output one word, or token, at a time.
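For the technically curious, here's a minimal sketch of that one-word-at-a-time loop, using the Hugging Face transformers library and the small, openly available GPT-2 model as a stand-in. The models behind ChatGPT are vastly bigger and closed, and the prompt and greedy decoding here are purely illustrative.

```python
# A toy sketch of autoregressive generation: predict one token, append it, repeat.
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Why blog in an AI apocalypse?"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                           # generate 20 tokens, one at a time
        logits = model(input_ids).logits          # scores for every token in the vocabulary
        next_id = torch.argmax(logits[0, -1])     # greedily take the single most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```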

If you squint really hard, that is in a way how human creativity works, at least in terms of producing new ideas (don't worry, humans are still better-ish). We take time to absorb experiences and learn, so that we can produce something that is not a mere copy, but novel.

Feeling creative yet? Photo by Igor Omilaev on Unsplash

So, if by blogging we mean a human putting their thoughts on (virtual) paper and making them coherent so that others may read them - why do that yourself when you could just check what's trending on the Web and ask ChatGPT to write about it for you?

In 2023, I stumbled onto so many videos coaxing viewers to do just that, and then MoNeTiZe. You could contribute to the ever-growing number of content farms that have gobbled up the first page of Google search results. This makes it harder to find actual information, pushing Google to invest in its own AI efforts to improve Search, thus creating more powerful AI tools, which enable more effective content farms, and the whole process starts all over again, in a yucky spiral of stinking up the Web. It might even make the Dead Internet theory come true. Now look at what you've done!

Which pill are we on now?

Then again, this might eventually drive people offline and back to their local libraries, scouring them for books and (gasp!) interacting with people once more. Just don't mention that you've just laid off armies of hungry and very unhappy creatives.

Disclaimer: I am not suggesting that we should end the existence of independent websites and replace the whole Web with chatbots - after all, those LLMs and their AI agents need information to scrape in order to be useful! Unless we centralize the Web and reorganize it around them. Ew.

Effective AI-true-ism

The first intent behind writing something online is usually to inform. If so, there is no guarantee that the output of an AI is "true" or correct. In fact, testing and verifying the outputs of an LLM is an interesting, unsolved undertaking right now: it usually requires humans, though sometimes you can also use other LLMs to verify LLMs.

Additionally, LLMs don't guarantee the same output for the same input. That's because they use the connections they've learned between the words they were trained on to give what they "think" is the most likely output, not the most correct or the one true verified answer.
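To make that concrete, here's a hedged little experiment - again just a sketch with the transformers library and GPT-2, with a made-up prompt. Sample a continuation for the same prompt twice and you'll almost always get two different answers, because the model is drawing from a probability distribution over tokens.

```python
# Same prompt, sampled twice - expect two different continuations.
# Requires: pip install torch transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("The best reason to keep a blog is", return_tensors="pt").input_ids

for attempt in range(2):
    output = model.generate(
        input_ids,
        max_new_tokens=25,
        do_sample=True,                       # sample from the distribution instead of always taking the top token
        temperature=0.9,                      # higher temperature = more randomness
        pad_token_id=tokenizer.eos_token_id,  # silence the "no pad token" warning for GPT-2
    )
    print(f"Attempt {attempt + 1}:", tokenizer.decode(output[0], skip_special_tokens=True))
```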

Putting aside the validity or freshness of the training data, the connections they make are not of the deterministic "if-then-else" kind. Most LLM vendors don't reveal what data their models were trained on either, which makes fact-checking difficult. I've personally been working on putting ML projects into production for most of this year, and the quality assurance part is not a solved problem, at all.

We don't actually know how AI arrives at its outputs, and that's kinda scary when you think about it - not in the AI-apocalypse-cult sense emanating out of Silicon Valley, though. It might reproduce a copyrighted (gasp!) work word-for-word, because it overfitted on its training data. It might just glitch out on us, with no rhyme or reason, though that would probably be amusing and/or inspiring. More mundanely, we can be persuaded to believe whatever comes out of an LLM, regardless of its input data or its owner's interests, because it sounds plausible or because the magical wizard machine said so.
Screencap from the Wizard of Oz movie, where the gang peeks behind the curtain
“Wait, it’s just crypto all over again?” “Always has been.”

On the flip side, there's no guarantee that what a human wrote is correct or original either. At least in that case, the author is clearly the sole party responsible for the output, which makes it easier to judge their credibility. And we expect humans to have biases, shaped by their lived experiences.

Some perspective

Which brings me to my second point: AI-generated content doesn't have a perspective of its own. Hopefully your writing does. Putting your perspective out there to find its audience is probably one of the reasons you're writing a blog in the first place.

Me spreading peace and love to the world by creating content.

An AI can construct a plausible-sounding piece based on what was written before, by humans and other AIs (dogfooding AI with AI-generated content is a whole other can of worms). While it definitely has biases from its training data and the manual overrides by its owners, it doesn't have its own agenda. It cannot participate in discourse. It does not possess personal experiences. It can only regurgitate its training data or query a database.

My point is, if you are going to use these tools to help make content for the Web, you should build on their output and infuse it with your own perspective.

If you don't, you're parroting what was in the LLM's training data, and likely reinforcing the status quo. Which is boring. Don't be boring.

I must admit that I enjoy asking ChatGPT to come up with clever phrases, titles or character names, but I don't think I ever take a suggestion as is - I use it as a jumping-off point instead. Kind of like when writers and other creatives use ancient divination tools, such as the tarot, to spice up their creative projects in the 21st century. Only now the tool is supercharged with the content of most of the current Web.

Beyond blogs and bots

There are benefits I've noticed when I make an effort to write my thoughts down. It forces me to distill the usual whirlwind of thoughts, impressions and memories into a narrative. That process makes them clearer to me, first of all.

Writing is a communication skill that I personally value, and one I believe is seriously lacking in the IT world. Putting the words out there means they just might find someone they resonate with and help them find clarity too.

My two target audiences. Photo by Ga on Unsplash

But what if you're afraid of putting your writing out there because you don't want it scraped and then fed into an AI?

There are ways to signal that you don't want your content used like that, by saying so in your site's robots.txt file. And before you say "if an AI scrapes my own work - good! The idea will be disseminated. Information wants to be free! Y2K!" - there's a difference between a human reading something and automated processes doing it at massive scale, to extract profit.
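Here's a sketch of what that can look like: a robots.txt that asks a few of the better-known AI crawlers (OpenAI's GPTBot, Google's Google-Extended token for AI training, and Common Crawl's CCBot) to stay away, while leaving ordinary search crawlers alone. Keep in mind this is a polite request rather than an enforcement mechanism, and the list of crawler names keeps growing.

```
# robots.txt - an illustrative, non-exhaustive opt-out

# OpenAI's crawler
User-agent: GPTBot
Disallow: /

# Google's token for AI training (doesn't affect regular Google Search)
User-agent: Google-Extended
Disallow: /

# Common Crawl, whose dumps are widely used as LLM training data
User-agent: CCBot
Disallow: /

# Everyone else, including normal search engines, is still welcome
User-agent: *
Allow: /
```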

Which only serves to remind me that I should've invested in Bitcoin while I had the chance. But then again, you could say that about any MLM.