Sulphur 406 May-Jun 2023
31 May 2023
Relying on AI
A reader wrote to me recently asking if I had considered the impact of general artificial intelligence systems like ChatGPT on the publishing industry. As a matter of fact I had, but only, somewhat narrowly and selfishly, in terms of how it might affect my own job in the future. But the dramatic shift in AI capabilities that we have seen in the past couple of years is certainly food for thought, and I’ve seen no end of articles predicting the death of the creative industries.
One of the key benefits of using ChatGPT is its ability to analyse large amounts of data and generate insights that might not be immediately apparent to human writers. With its access to a vast array of information sources, ChatGPT can quickly identify trends and patterns in the sulphur industry and draw connections between seemingly unrelated factors. This can help writers develop more nuanced and well-informed opinions about the industry, leading to more insightful and persuasive editorials.
“Not everything is what it seems…”
Well, as some of you may have guessed, I didn’t actually write that previous paragraph. As a test, I asked ChatGPT to write an editorial about the impact of ChatGPT on writing Sulphur editorials. And it was okay as far as it went, if a little bland and generic. Even so, it is clear that AI performance is – at least in some areas – getting closer and closer to human performance. We have been used to machines replacing humans in physical work; that has been the legacy of the industrial revolution, and a process that we have lived with for over 200 years. But machines replacing cognitive work has been a much more recent development, beginning with simple mathematics and pocket calculators, but now starting to extend even to creative and artistic tasks. The process is far from perfect; anyone who has seen AI-generated art cannot help but notice that the AI doesn’t seem to understand how many fingers human beings usually have, or what text means on a signboard or the like within an image.
But the real pitfalls in the process are not the obvious, glaring errors, but the hidden ones that are not immediately noticed, because not everything is what it seems. The problem with ChatGPT is that it does not produce accurate output; rather, it produces what it thinks the person giving it the prompt wants to read. So what it writes does not need to be true, only believable enough to fool the person reading it. And where the machine is unable to scrape reliable data from the internet, it simply makes up something plausible-sounding. I also asked it to write an article about sulphur storage, and what came out might be superficially convincing to a high school student, but to someone who knows the industry it was simply wrong. For example, while it correctly pointed out fire risks and – albeit a bit obliquely – sulphur’s potential conversion to acid in the presence of water, it also told me with a straight face that the best way to store sulphur as a liquid is to dissolve it in a suitable solvent. It would be best not to rely on that kind of AI for safety-critical systems just yet!
In all fairness, even ChatGPT recognises some of its own current limitations. In its editorial piece, it wrote: “Of course, there are potential drawbacks to using ChatGPT as well. One concern is that the language model may not be fully capable of understanding the nuances of the sulphur industry and may therefore produce opinions that are not fully informed.” That’s not a problem confined exclusively to AIs, of course, but it was nevertheless something of a relief. At any rate, it appears I may still have a job for a few more years at least – even if it’s only until the programs become better at what they do.