Sunday, February 26, 2023

PUT ONE FOOT IN FRONT OF THE OTHER [385]


I write to learn, so I remain perplexed by the growth, and more recently the explosion, in the use of artificial intelligence to construct essays and articles – I hesitate to describe it as “writing”. Having set myself the challenge in 2023 of writing at least one A4 page of diary every day, running to approximately three hundred words, I have found that the mere act of putting one word after another really isn’t hard at all, even if you must go back on yourself to edit out redundant words. Writing an article won’t take much effort if you know your subject, unless you are also relying on your AI program to gather the necessary information on that subject for you.

I promise the above paragraph was typed by hand. I have thought of engaging an AI chatbot to see what it would come up with, but it is very hard to find one that can produce a satisfyingly entertaining result and does not require me to create a login or pay to use it – if you want help, or you simply want to cheat time and process, then you have created a marketplace, and the producers want paying. Not only is it more rewarding to write that essay yourself, it is also cheaper.

ChatGPT has been the AI chatbot causing the most ructions right now, for its delivery of prose, and even poetry, both in a naturalistic style and in imitation of other writers. A “Generative Pre-trained Transformer” with as many samples of the written word as could be gathered stacked behind it, ChatGPT has been fine-tuned to sample a number of candidate outputs for the question posed to it, rank those outputs in order, and use an evaluator protocol that optimises for and produces the most rewarding answer, both for the end user and for the further machine learning involved in “training” the chatbot to continue producing correct answers. However, the overall aim is simply to take a word and decide what the next word should be. Basic rules of grammar will get you half the way; after that, a decision must be made.
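That “take a word, decide the next word” loop, followed by the sample-then-rank step, can be sketched in a few lines of Python. To be clear, nothing below is ChatGPT’s actual code: the vocabulary, the probabilities and the scoring rule are all invented for illustration, and the real evaluator is itself a model trained on human rankings rather than the toy word count used here.

```python
import random

# Hypothetical next-word probabilities, as if learned from sample text.
NEXT_WORD = {
    "the":   {"cat": 0.5, "dog": 0.3, "diary": 0.2},
    "cat":   {"sat": 0.6, "wrote": 0.4},
    "dog":   {"sat": 0.7, "wrote": 0.3},
    "diary": {"page": 1.0},
    "sat":   {"quietly": 1.0},
    "page":  {"waited": 1.0},
}

def sample_continuation(word, length=3):
    """Take a word and repeatedly decide what the next word should be,
    chosen at random in proportion to the learned probabilities."""
    words = [word]
    for _ in range(length):
        choices = NEXT_WORD.get(words[-1])
        if not choices:  # dead end: no known continuation
            break
        words.append(random.choices(list(choices),
                                    weights=list(choices.values()))[0])
    return " ".join(words)

def reward(sentence):
    """Stand-in for the evaluator protocol: here, longer answers simply
    score higher. The real reward model is learned from human feedback."""
    return len(sentence.split())

# Sample a number of candidate outputs, rank them, keep the most rewarding.
candidates = [sample_continuation("the") for _ in range(5)]
print(max(candidates, key=reward))
```

Run it a few times and the ranking step reliably discards the continuations that dead-end early – a crude echo of how sampling several answers and scoring them shapes the one the user finally sees.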

One limitation of ChatGPT is that it can sometimes produce a nonsensical answer, either because it has no source of truth to draw upon in its sample writing, or because previous training caused it to be too cautious in selecting the correct answer, or to select an incorrect answer altogether. In artificial intelligence terms this is described as a “hallucination”, even though a person’s hallucination is something that appears real without any external stimulus. What I would be worried about is proofreading: no-one should take anything they read entirely as read without evaluating it properly, or without trusting the evaluation another person has done.

With OpenAI, the research company behind ChatGPT, looking into “watermarking” its answers to deter plagiarism, the turning point will not be when AI can produce infallible answers – the machines will only take over when their hallucinations are eliminated.
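OpenAI has said little about how such a watermark would work, but one scheme discussed in the research literature at the time gives the flavour: bias the generator toward a pseudo-random “green list” of words seeded by the preceding word, then detect by counting how often green words follow. Everything in the sketch below – the hash rule, the green fraction, the toy detector – is an assumption for illustration, not OpenAI’s method.

```python
import hashlib

GREEN_FRACTION = 0.5  # share of the vocabulary marked "green" at each step

def is_green(prev_word, word):
    """Pseudo-randomly assign word pairs to the 'green list'. Without
    knowing this secret rule, the pattern is invisible to a reader."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def green_share(text):
    """Toy detector: ordinary text should score near GREEN_FRACTION;
    watermarked text, which preferred green words, scores higher."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(a, b) for a, b in pairs) / len(pairs)

print(green_share("I promise the above paragraph was typed by hand"))
```

A single sentence is far too short to judge, of course – the statistics only become convincing over hundreds of words, which is also why such a watermark could flag machine prose without proving anything about a human’s.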
