AI and Counterterrorism

Technology, it is said, progresses at an exponential rate. Recent developments such as the Internet of Things (IoT) and blockchain have kept pace with this general rule, and the Coronavirus pandemic has further catalyzed digitization in crime, in law enforcement and in public life generally. Criminal and extremist elements seeking to avoid detection must therefore adapt to these new technologies at a similar, if not faster, rate. While technology has to date been crucial in fighting crime and terror, in particular cybercrime and cyberterror, it has not yet reached maturity. Maturity would mean not only the automation of investigative processes that exists today, but also deep-learning artificial intelligence capabilities that empower holistic, integrative systems to learn from past investigations and apply those lessons and trends to further investigation and intelligence gathering.

Counter-Terror and Law Enforcement agencies must harness emerging technologies and advancements in the private sector, particularly in the field of Artificial Intelligence, to enhance their intelligence-gathering capabilities. This relationship must be a two-way street: the systems and tools on the market today can greatly enhance and automate agencies' existing practices, but several missing factors still prevent them from achieving complete automation.

The first factor is comprehensive source coverage. Terrorists and criminals are known to use a wide variety of open-, deep- and dark-web sources, including social media. This holds for all terror and criminal organizations; for the Far-Right alone, relevant sources include mainstream platforms such as Facebook, Twitter, Instagram, TikTok and VKontakte, as well as niche platforms such as Hoop, Telegram, Discord, IRC servers, Pastebin, dark-web sites and more. A surprising amount of information is available on these platforms that could readily be transformed into a finished intelligence product. Intelligence agencies must stay abreast of relevant sources and develop in-house capabilities where possible; where that is not possible, they should find relevant private-sector partnerships. These capabilities must incorporate mass scraping and storage for further investigation, along the lines of the sketch below.
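
To make the scraping-and-storage requirement concrete, the following is a minimal Python sketch of a multi-source collection pipeline. It is illustrative only: the source registry, database schema and polling logic are assumptions, and real platforms require authenticated APIs, rate limiting and legal authorization rather than naive HTTP fetches.

```python
# A minimal sketch of multi-source collection into a common store.
# Source names and URLs are placeholders, not real collection targets.
import sqlite3
from datetime import datetime, timezone

import requests

# Hypothetical source registry: platform name -> endpoint to poll.
SOURCES = {
    "pastebin_archive": "https://pastebin.com/archive",
    "example_forum": "https://example.org/forum/recent",
}

def init_store(path: str = "collection.db") -> sqlite3.Connection:
    """Create a raw-capture table keyed by platform and capture time."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS captures (
                    platform TEXT, url TEXT, fetched_at TEXT, body TEXT)""")
    return db

def collect(db: sqlite3.Connection) -> None:
    """Fetch each source once and persist the raw response for later analysis."""
    for platform, url in SOURCES.items():
        try:
            resp = requests.get(url, timeout=10)
            resp.raise_for_status()
        except requests.RequestException as exc:
            print(f"[{platform}] fetch failed: {exc}")
            continue
        db.execute(
            "INSERT INTO captures VALUES (?, ?, ?, ?)",
            (platform, url, datetime.now(timezone.utc).isoformat(), resp.text),
        )
    db.commit()

if __name__ == "__main__":
    collect(init_store())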

The second factor is the development of competent AI capabilities to analyze information and turn it into intelligence. Scraping and storing information from the aforementioned platforms is not enough; it must be analyzed and turned into a finished product. This can be done with AI, primarily via Natural Language Processing (NLP), optical character recognition (OCR) and object recognition. Many social media platforms, such as Facebook, already employ powerful AI processes to moderate content on their platforms.[1] Such moderation relies not only on recognizing harmful image or text content, but also on uncovering hidden connections and combinations of the two that are innocuous in isolation but express some form of extremism when combined.[2] This technology can and should be improved through partnerships across the public sector, private sector and law enforcement; the sketch below illustrates the basic combined analysis step.
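
As a rough illustration of the text-plus-image analysis described above, the sketch below runs OCR over any attached image and scores the combined text with an off-the-shelf classifier. The packages (transformers, pytesseract, Pillow) and the generic toxicity model are stand-ins; an agency would substitute classifiers trained on curated counter-terror datasets.

```python
# A minimal sketch of combined NLP + OCR analysis. Assumes the
# `transformers`, `pytesseract` and `Pillow` packages plus a local
# Tesseract install; the toxicity model below is a generic stand-in
# for purpose-trained extremism classifiers.
from typing import Optional

import pytesseract
from PIL import Image
from transformers import pipeline

# Off-the-shelf text classifier (an assumption, for illustration only).
classifier = pipeline("text-classification", model="unitary/toxic-bert")

def score_post(caption: str, image_path: Optional[str] = None) -> dict:
    """Score a post by analyzing its caption together with any text
    embedded in an attached image (memes, flyers, screenshots)."""
    embedded = ""
    if image_path:
        # OCR recovers text hidden inside images, which text-only
        # moderation would otherwise miss.
        embedded = pytesseract.image_to_string(Image.open(image_path))
    combined = f"{caption}\n{embedded}".strip()
    result = classifier(combined[:512])[0]  # truncate to model context
    return {"text": combined, "label": result["label"], "score": result["score"]}
```

The key point is the combination step: caption text and imagery that are each innocuous on their own may only become significant when read together.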

Law Enforcement and Intelligence Agencies can and should cooperate with private-sector companies that provide investigative software solutions, to develop more relevant and targeted tools that draw on the experience and knowledge of counter-terror experts and bodies. This expertise includes, but should not be limited to, the creation of datasets: dictionaries for NLP/OCR analysis, a repository of extremist and criminal images and icons for image analysis, and more. Civil-society organizations such as the ADL and the SPLC, among others tracking the Far-Right, maintain text and image databases that can and should be integrated into these solutions.[3] Direct mass cooperation with social media platforms would be even more effective, but is limited by the potential infringement of civil rights. Such datasets should be developed by law enforcement and partner organizations and utilized by machine-learning algorithms. Correctly utilized, these algorithms would not only improve investigative efficacy but also learn to identify new terms, images and other artifacts that may signal criminal or terrorist activity, as sketched below.
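
A short sketch of how such expert-curated datasets might be operationalized: seed terms from a lexicon are embedded, and incoming phrases that land close to them in embedding space are flagged as possible new coded language for analyst review. The lexicon entries below are illustrative, drawn from publicly documented far-right vocabulary of the kind the ADL catalogs; the model choice and threshold are likewise assumptions.

```python
# A minimal sketch of lexicon-seeded detection of new coded terms.
# Assumes the `sentence-transformers` package; lexicon entries,
# model choice and threshold are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

# Seed dictionary supplied by domain experts (illustrative entries
# drawn from publicly documented extremist vocabulary).
LEXICON = ["day of the rope", "boogaloo", "accelerationism"]

model = SentenceTransformer("all-MiniLM-L6-v2")
lexicon_vecs = model.encode(LEXICON, convert_to_tensor=True)

def flag_candidate_terms(phrases: list[str], threshold: float = 0.6):
    """Return (phrase, similarity) pairs for phrases semantically close
    to known lexicon entries: candidates for analyst review, not
    automatic determinations."""
    vecs = model.encode(phrases, convert_to_tensor=True)
    sims = util.cos_sim(vecs, lexicon_vecs)  # |phrases| x |lexicon| matrix
    flagged = []
    for i, phrase in enumerate(phrases):
        best = float(sims[i].max())
        if best >= threshold:
            flagged.append((phrase, round(best, 2)))
    return flagged
```

Embedding similarity is one plausible mechanism for the "learning new terms" capability described above; in practice it would be one signal among several, with human analysts making the final call.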

AI tools are key to fighting terrorism and crime online, yet they do not come without risk. Much has been written about biases inherent in training datasets that surface in AI applications such as facial recognition,[4] and about infringements of privacy rights, as in the case of controversial tools like Clearview AI.[5] Proper legislation is sorely needed. The GDPR in the European Union is a step in the right direction for protecting individual privacy rights online, but in certain cases, primarily investigations of suspects on social media, it unduly limits investigative efforts. Further legislation must be passed in both the United States and the EU to delineate clear red lines for Law Enforcement while simultaneously granting greater investigative freedom where required.

Encouraging direct partnerships between public- and private-sector organizations that harness the capabilities of both parties could revolutionize the fight against crime and terror. Effective systems need to be designed and developed by private and public sector bodies working in collaboration, playing to the strengths of each side: the private sector providing the requisite development capability and efficiency, the public sector contributing expertise and knowledge in the form of datasets. This collaboration should also be transparent and effectively monitored by legislators and regulatory bodies to ensure compliance with extant legislation on privacy and data rights. Properly developed and implemented, Artificial Intelligence can make these efforts more effective and preventative while minimizing the risk of exposing or violating individuals' privacy or data.

[1] https://ai.facebook.com/

[2] https://spectrum.ieee.org/computing/software/qa-facebooks-cto-is-at-war-with-bad-content-and-ai-is-his-best-weapon

[3] https://www.adl.org/hate-symbols

[4] https://www.nytimes.com/2019/12/19/technology/facial-recognition-bias.html

[5] https://www.cnbc.com/2020/06/11/clearview-ai-facial-recognition-europe.html