When I started my current administration job in 2012, “Typist” was a title still held by some people, literally typing up instructions and details received from businesses via fax machine, to be manipulated in a database later. This harked back to when typing was a specialised role that involved training for speed and accuracy.
Typing is not a lost skill, but it is lost as a role, for everyone now types: it is as much a part of everyday life as typewriters once were, and computers now are.
I hope this trajectory eventually happens to artificial intelligence, at least in the way we currently use it – once its introductory period is over, and once tech companies are done foisting it on us, it will recede into the background. The useful tools of A.I. will remain, like accelerating scientific discoveries or managing your calendar, and the more frivolous and novel uses will fall away, like generating a picture of yourself sat on a horse, or asking a chatbot what you should do next with your life.
It would be a lovely future, one I am hopeful for, because it means we will have stopped talking about A.I., some consensus having been reached on ethical and moral boundaries: from the use of the technology to the collection of the data, from how to power data centres to where to place them, from who can access the technology to who is allowed to control it, and just who is meant to pay for all this, and how. Yes, it would be a lovely future.
As much as I do not use A.I. by choice – I can write, manipulate pictures, make videos and find answers well enough for my own purposes, thank you – generative A.I. systems now form, or are additions to, programs I encounter every day, and that is not by choice. I am my generation’s equivalent of someone who doesn’t, or won’t, have the internet at home, or a computer, just as the generation before them had to contend with whether to let a television into their house, or electricity.
My refusenik nature with A.I. is also informed by the daily view count of this website. I have seen it grow exponentially during 2025, with thousands of views a day, but a nagging thought tells me that many of these are not from people. For all I can do to prevent my work from being skimmed, I am left to contemplate whether this attention should be taken as a compliment, when it really shouldn’t be.
Copyright infringement and data harvesting are still areas waiting on government legislation, but plagiarism, impersonation and fraud are existing problems, and once more people acknowledge that generating funny pictures and videos involves these in ways they don’t see, they may reconsider using it.
What I need to do now is enjoy the freedoms of not using A.I.: I know what I want to do or make, so I should be able to do that without fear of what will happen to it, or without having to acknowledge the slop that it can compete with. The only reason “slop” has arrived so quickly as a name for mechanically recovered content is because it is too obviously so – for as much as you can rail against people for consuming slop, they most often know it is slop, and know not to substitute it for what is real. You can denigrate the tools, you can overestimate the provider, but the biggest trick to making a good piece of media is not to underestimate your audience.
In short, I am bored of talking about A.I., for my refusal to use it willingly means I have already taken a stand against it.