Non-Military Security Technology and Innovations

The Potential for Misuse of Artificial Intelligence in the Spreading of Misinformation

Katarína Ďurďovičová

When asked “Can AI become a threat to humanity?”, ChatGPT responds: “AI has the potential to become a threat to humanity if it is not developed and controlled responsibly.”

The development of Artificial Intelligence (AI) is advancing at breakneck speed, and AI is already present in many of the devices we use every day. AI can predict what we will do and influence what we see and hear. Its ability to write code, create art, and learn from human conversation holds enormous promise for reshaping many areas of society. However, these advances also raise serious concerns about misuse, including the creation and spreading of misinformation.

Systems such as GPT-4 sometimes produce false facts and fabricated information, a phenomenon referred to as “hallucination”. In AI, hallucination is the generation of output that seems convincing but is in fact false. It usually stems from biases in the model, its lack of grounding in the real world, or flawed training data. The AI system thus “hallucinates” content it was never explicitly trained on and produces inaccurate or misleading results. There are many documented cases of the ChatGPT chatbot giving incorrect answers to factual questions. The danger is that when AI systems provide inaccurate information, users may become disillusioned with the technology, which could hamper its future adoption in sensitive sectors such as healthcare or legal work.
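One partial mitigation is to ask a model to re-examine its own answer in a second pass. Below is a minimal sketch, assuming the OpenAI Python client (openai >= 1.0); the model name, prompts, and self-verification step are illustrative assumptions, not a method described in this article, and self-checking reduces but does not eliminate hallucinated answers.

```python
# Minimal sketch: probing a chat model for hallucinated citations,
# assuming the OpenAI Python client. Model name and prompts are
# illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "List three peer-reviewed papers on deepfake detection published in 2021."

answer = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": question}],
).choices[0].message.content

# Second, independent pass: ask the model to flag claims it cannot verify.
# This catches some fabrications, but human fact-checking is still required.
check = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "Which of these references are you unable to verify actually exist?"},
    ],
).choices[0].message.content

print(answer)
print(check)
```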

Another issue is how, and from which sources, AI models learn. While AI developers draw on many trusted and official websites, they also often take content from sources that do not distinguish between information and misinformation. A Google engineer recently resigned after claiming that the company’s AI model was being trained on ChatGPT output, which would reproduce the same kinds of flawed answers. So even though many AI systems, such as ChatGPT or OpenAI Codex, can write text or code, a skilled expert still needs to review, approve, or correct the output, because errors can quietly propagate through the information it provides (a hypothetical example follows below). If AI developers continue to produce low-quality content and train their systems on other AI systems’ output, we will soon be unable to tell whether an answer is correct without independent research, defeating the very purpose of using AI.
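To make the review point concrete, here is a hypothetical example, not taken from any real AI system’s output, of the kind of plausible-looking but subtly wrong code an AI assistant can generate; the bug is exactly the sort an expert reviewer is expected to catch.

```python
# Hypothetical AI-generated function: looks correct, but silently drops
# the final window of data (an off-by-one error).
def moving_average(values, window):
    """Return the moving average of `values` over `window` elements."""
    averages = []
    for i in range(len(values) - window):        # BUG: should be len(values) - window + 1
        averages.append(sum(values[i:i + window]) / window)
    return averages

# Corrected version after human review:
def moving_average_fixed(values, window):
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

assert moving_average(      [1, 2, 3, 4], 2) == [1.5, 2.5]       # last window missing
assert moving_average_fixed([1, 2, 3, 4], 2) == [1.5, 2.5, 3.5]  # complete result
```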

With artificial intelligence, another form of misinformation has emerged and is becoming a huge problem: deepfakes. Deepfakes are algorithmically generated, realistic fake videos, audio, and images (such as the AI-generated photos depicting the arrest of former President Donald Trump). Their impact varies widely, ranging from humorous images to political campaigns and sexual content. Deepfake videos are becoming the bigger threat, as well-crafted ones are usually harder to detect. This technology is currently being used to spread Russian propaganda about the war in Ukraine. In one case, a deepfake video of Ukrainian President Volodymyr Zelensky urging the public and the army to surrender to Russia was widely shared on social media and briefly broadcast on a hacked Ukrainian news website. Such campaigns use highly personalised messages to target or criticise particular individuals or groups. Weaponised disinformation of this kind poses a significant risk to democracy, with negative consequences for political dialogue and participation.

Generative AI language models such as ChatGPT have been trained to express themselves fluently and authoritatively. The problem is that, unlike us, they were designed to produce plausible-sounding text, not to genuinely “think”. Deepfakes, meanwhile, are now so well crafted and so easily produced that they can damage the reputations of many leaders, with serious consequences to follow. The world therefore needs better tools that can detect misinformation or warn users of potentially false claims in articles. Funding efforts such as the Vera.ai project aim to develop and spread AI algorithms that find false information and verify the accuracy of such claims. However, this will not succeed unless a wider campaign is launched to make the public aware of the damage misinformation can cause.

Photo credit: Midjourney / Pablo Xavier