
The Russian "Aidol" robot collapsing at its unveiling, November 2025
When I ask “what comes after A.I.?”, what I mean is that I need to know the ultimate outcome of the upheaval caused by the application of artificial intelligence to everyday life, especially as someone who, as far as they are concerned, has done their best to avoid it so far.
Am I thinking about a “post-A.I.” world, one where the technology has become an endemic, integral part of how life is lived, or a world where people grew bored of waiting for the future it promised, too conscious of the monetary and environmental costs of maintaining mainframes and data centres? Or am I thinking of a future beyond both these options, one mapped out by the machines that outstripped us? This was science fiction once.
Right now, the A.I. programs we use remain “mechanical”, weighing user prompts against available data to generate the next outcome. That step went wrong when I asked Microsoft Copilot to explain “prompt engineering” in the style of Jack Kerouac, but because there was no actual thinking involved, that was to be expected. Already able to write confidently, draw with some ability, and manipulate photographs on a computer exactly how I need, I really have no use for the way A.I. is being pushed right now - I write this in the week Apple added generative text, equation and image functions to its Pages, Numbers and Keynote productivity apps, so long as you take out an “Apple Creator Studio” subscription to avoid missing out.
In February 2023, I wrote that I had already concluded I would not have a use for any creative generative A.I. program: “[If] you want help, or you simply want to cheat time and process, then you have now created a marketplace, and the producers want paying. Not only is it more rewarding to write that essay yourself, but it is also cheaper.” Three years of using A.I. programs have since followed, with a wealth of data collected from users and subscribers, who consented to the collection of that data simply by using them. It is not enough data to replace the plagiarism of copyrighted material, but it is enough for me to consider whether the addition of A.I. functions to every program imaginable means tech companies will soon have “enough” data for whatever their next step turns out to be.
“Artificial general intelligence” is, as I understand it, the current goal: going beyond mechanical prompts to match the human ability to think, well, intelligently. Could this then develop into an artificial superintelligence, extending beyond the capacity of human thought? If we achieve that, we need to think now about legislating against equipping it with arms and legs.
It appears I am thinking about what comes after A.I. because there needs to be a world that still needs us. The current implementation of artificial intelligence relies on our being told what is good for us in the long run, as if anything that doesn’t involve it is a backwards step, yet a lot of the possibilities don’t appear to involve us at all. My job is in administration, but for how long will that last? What kind of world would we have if A.I. stopped being foisted on us and simply became a tool, just as the computer itself did?
Yes, this was an article about finding the right question. Next time, I will try to answer it.