Non-Military Security

Artificial intelligence as a security threat. It helps terrorists and extremists spread propaganda

Lucia Kobzová

The surge in the use of artificial intelligence in recent months has brought several risks associated with the new technology. In addition to ethical concerns, experts also warn of multiple security threats. Most recently, these have materialised in the use of AI by terrorists and violent extremists to spread propaganda. Artificial intelligence can create far more sophisticated malicious content than a human can produce, and, when given the right prompts, it can generate materials capable of bypassing the systems digital platforms use to block unwanted content. Although artificial intelligence in many instances helps cybercriminals create malware or child pornography, it can also be a useful tool for prevention.

The active use of artificial intelligence by terrorists and violent extremists has been highlighted in a report by the Tech Against Terrorism initiative. Its researchers review over 5,000 AI-generated posts every week. Recently, the primary focus has been on content disseminated by various groups sympathetic to the Lebanese Hezbollah and Palestinian Hamas, with the intention of influencing the narrative around the ongoing Israeli-Palestinian conflict. Other terrorist organisations also leverage AI to attain their goals. The Islamic State, for instance, has published a guide on how to use various AI tools safely when creating propaganda materials. Al-Qaeda is not lagging behind in the misuse of modern technologies either, focusing particularly on generating fake images. Artificial intelligence assists malicious actors in several ways. Fast translation into almost any language proves useful, and because the translations are generally of high quality, they increase the credibility of the false information these groups disseminate. Writing personalised messages with the help of AI further improves the efficiency of recruiting new members. The technology also simplifies the recycling of propaganda, that is, the reuse of modified material from the past. However, Islamic terrorists are not the only ones exploiting artificial intelligence to generate dangerous content. The report also points to neo-Nazi groups abusing a mainstream application available on the Google Play store; antisemitic, racist, and Nazi images created with this AI tool were subsequently shared on digital platforms. Far-right figures have also joined in, spreading instructions for creating extremist memes with the new technology.

Over the past years, technology companies have invested millions in building extensive databases intended to help prevent extremist and terrorist content. Platforms established so-called hashing databases, which enable the automatic detection and subsequent removal of similar types of posts. The challenge, however, is that sophisticated AI-generated materials can bypass these measures. Experts therefore warn that groups using such content pose significant problems for companies' current security mechanisms. Extremists focus their campaigns primarily on smaller platforms, which have disproportionately limited resources to combat extremism. The good news is that Tech Against Terrorism, in collaboration with Google, recently introduced a new tool called Altitude. It could significantly help smaller platforms prevent dangerous content, since it is free and allows monitoring of disinformation and other hazardous content in the online space. The solution will help companies detect terrorist, antisemitic, or extremist posts. After integration, the system will be linked to Tech Against Terrorism's central database, so platforms can check whether a piece of content has been flagged as extremist. Instead of automatically deleting posts, the tool will provide platforms with useful information for combating terrorism. This can greatly help social networks such as Telegram in preventing the spread of dangerous content.
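The article does not describe how the hashing databases or Altitude work internally. As a rough illustration of the general idea of hash-based matching, the minimal sketch below (a hypothetical example, not Tech Against Terrorism's or Google's implementation) compares the fingerprint of an uploaded file against a set of fingerprints of previously flagged material and flags matches for human review rather than deleting them automatically. Real systems typically rely on perceptual hashes that tolerate small edits; a plain cryptographic hash, used here only for simplicity, catches exact copies.

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Return a hex digest serving as a fingerprint of a piece of content."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical shared database of fingerprints of known extremist material.
known_extremist_hashes = {
    content_hash(b"<bytes of previously flagged propaganda file>"),
}

def check_upload(data: bytes) -> str:
    """Flag uploads that match known material instead of deleting them,
    mirroring the review-oriented workflow the article describes."""
    if content_hash(data) in known_extremist_hashes:
        return "flagged: matches known extremist content, route to moderators"
    return "no match: publish normally"

if __name__ == "__main__":
    print(check_upload(b"<bytes of previously flagged propaganda file>"))  # flagged
    print(check_upload(b"harmless holiday photo"))                         # no match
```

The limitation noted in the report follows directly from this design: freshly AI-generated propaganda produces fingerprints that are not yet in any shared database, so it slips past matching until it has been identified and added.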
