The world seems to be spinning faster and faster, literally out of control. And I am not so much referring to the geo-political situation or climate change, or the fact that the Conservatives are no longer in control of the UK economy, but to developments in AI. It seems that not only are they out of control, but no-one, not even the clever minds behind them, can understand how advanced AI works, or predict the outcomes. Instead we are faced with chaos, a lack of regulation and a huge number of unknowns.
Elon Musk and Others call for Pause on A.I., Citing ‚Profound Risks to Society‘ is a headline in The New York Times. But can you really click on the Pause or the Off button, as these tech leaders and researchers are demanding? In their view, the race to develop more advanced AI is out of control. The suggestion they are putting forward is therefore to put the development of AI on hold, to give themselves and researchers the opportunity to clear their heads, and think. The signatories of the letter asking for a pause express their conviction that the development of powerful AI systems must „advance only once we are confident that their effects will be positive and their risks will be manageable“. And it goes on to say: „“Humanity can enjoy a flourishing future with A.I. … Having succeeded in creating powerful A.I. systems, we can now enjoy an ‘A.I. summer’ in which we reap the rewards, engineer these systems for the clear benefit of all and give society a chance to adapt.”
It is unthinkable what will happen if ChatGPT and its rivals are let loose on society. Worry about the dangers that this will bring, and the adverse effects it might have on people and their critical judgement or lack thereof, and on society. Just try to imagine the jobs that might disappear, more or less from one day to the next. Think about the effects on schools and universities, and academic excellence, which we are already seeing now. The main concern is that we have reached the point where the machines can outwit us, and where we have no means of distinguishing what’s true from what’s fake. It is also the time when the term plagiarism seems to lose all meaning.
Imagine also what happens when these new chatbots get things wrong, let their imagination run riot, fabricate stories and spread misinformation. Again, the signs are here for all to see.
If tech pioneers are worried, what should the rest of us think? As the initial excitement dies down, we are now seeing a kind of morning-after effect, a blurry-eyed vision of things going badly wrong. What we might have seen as a fun toy even six months ago, is now turning into a hydra, a monster that might engulf us all.
Musk et al. are keen to emphasize that their „moratorium“ is not meant to put a stop to AI development. They are simply asking for a pause to take a breath, mostly, they claim, for the benefit of humanity. So we can all get used to the changes that AI might bring to our professional and private lives and to the way we go about research and work.
Among the aspects to worry about are „digital poisons“. This is the title of an article in a recent issue of the Economist. In it we learn that while in the past, AI used to be trained on closed data sets which were curated by humans, the new generation of AI tools, such as ChatGPT feed on LLMs. These are large language models, essential neural networks connecting huge data repositories. Mining is done indiscriminately from the Internet, making the systems vulnerable to „poisons“ injected by anyone who is connected to the web. A computer scientist at the ETH in Zurich has carried out experiments in order to establish how such a poisonous scheme might actually work in the real world. He bought some defunct web pages and replaced a 1000 images of apples with randomly selected pictures, thus mis-training an AI engine to systematically interpret pictures as containing apples. He and his group also showed that it was possible to poison certain areas of the web, pertinently for example Wikipedia, which is regularly downloaded to act as a source of text data sets for feeding into LLMS.
Clearly, this could be mis-used to present unsuspecting users with false „truths“, and bias of various kinds. Other potential horror scenarios include AI chatbots who are directly connected to the Internet and who therefore ingest more and more unvetted data. Such bots could be instructed by hidden prompts to for example disclose shoppers credit-card details. The researchers point out that deciding what is and what is not legitimate training material for an AI bot is not straightforward. „One party’s poisoned content is, for others, a savvy marketing campaign“, is their conclusion.
LLMs learn from billions of patterns they gather on the web and then generate texts, ranging from school essays via emails, blogs to scientific papers. The more they learn the more (it is expected) they can improve their performance. At the same time, they are not infallible, in fact they are quite likely to make mistakes. They have been known to make up information, a phenomenon that goes under the name of ‚hallucination‘.
And herein lies the problem: it can be extremely difficult for us humans to distinguish what is true and what is false, what is right and what is wrong. Because AI delivers is findings and opinions with great confidence. For, unlike most humans, it does not go in for self-doubt.
Click button bellow to apply to work as a freelancer.
Apply as a freelancer