Adversarial Attacks on AI Chatbots: Disinformation Campaign and Response of Democracies ( http://opendata.mofa.go.kr/mofapub/resource/Publication/14212 ) at Linked Data

Property Value
rdf:type
rdfs:label
  • Adversarial Attacks on AI Chatbots: Disinformation Campaign and Response of Democracies
skos:prefLabel
  • Adversarial Attacks on AI Chatbots: Disinformation Campaign and Response of Democracies
skos:altLabel
  • Adversarial Attacks on AI Chatbots: Disinformation Campaign and Response of Democracies
mofadocu:relatedCountry
bibo:abstract
  • Ⅰ. AI’s Humanistic Intelligence
    Ⅱ. The Challenges of Adversarial Machine Learning
    Ⅲ. Dissemination of Disinformation: Democracy and Sovereignty in Peril
    Ⅳ. Cyber Threats Posed by AI Chatbots  
    Ⅴ. Liberal Democracies’ Responses to Disinformation and Fake News
    Ⅵ. Policy Recommendations: Responses to Disinformation Campaign and Data Misuse
    
    Ⅰ. AI’s Humanistic Intelligence
    
    Since the release of ChatGPT, OpenAI’s text-generating AI chatbot for public use, in November 2022, entrepreneurs have started innovative businesses employing artificial intelligence in various fields like customer service and education. AI’s narrative generation and natural language processing (NLP) have advanced significantly, and their applications have become more widespread, unleashing fierce competition among the world’s AI powerhouses racing to take the lead in these technologies. The Generative Pre-trained Transformer (GPT) is a type of large language model (LLM) trained on vast amounts of text data to learn human language and formulate human-like responses. AI’s ability to learn the patterns of human language is a powerful skill that will lead AI toward its greatest potential: realizing humanistic intelligence.
    Humanistic intelligence aims to create a psychologically, socially, and technologically integrated system that binds humans and artificial intelligence together as one team. At the current pace of growth, AI chatbots like ChatGPT and natural language processing technologies, which are capable of generating human-like text, will likely determine the winner of the global technological race, as GPT and NLP are bound to have a profound impact on human thoughts, decision-making processes, feelings, and values. While AI and neuroscience are driving each other forward, AI has yet to think like humans; a chatbot or AI speaker learning a human language does not mean AI can think like a human being.
    
    Ⅱ. The Challenges of Adversarial Machine Learning
    
    While AI chatbots’ language skills are expected to dramatically improve human knowledge, concerns have been raised about their potential pitfalls since the release of ChatGPT. In 2016, Microsoft had to shut down its artificial intelligence chatbot Tay only 16 hours after its launch, as the bot started posting inflammatory and offensive tweets through its Twitter account following an adversarial attack by trolls. An adversarial AI attack is a malicious attempt to corrupt the data that AI models learn from and cause these models to generate inaccurate or distorted outputs. To make matters worse, adversarial attacks on AI-powered self-driving vehicles or AI-driven medical care algorithms could put the lives of drivers and patients at risk. The fact that chatbot algorithms are susceptible to malicious attacks foreshadows an intensified disinformation campaign in many countries, as AI-generated language and narratives are already affecting public opinion in many parts of the world. New York Times columnist Ezra Klein called ChatGPT a weapon for information warfare among countries, and this is not an overstatement given what we have seen in recent years. Since 2016, Twitter bots have affected election campaigns across the U.S. and Europe almost every year. Bots sponsored by authoritarian regimes were also mobilized to spread fake news when the world’s two rival blocs with competing political systems each claimed that its quarantine policies were superior to the other’s. The world’s major powers will continue to make ceaseless efforts to shape a global order more favorable to them by using AI-powered storytelling capabilities that directly affect both domestic and foreign public opinion.
    
    Ⅲ. Dissemination of Disinformation: Democracy and Sovereignty in Peril
    
    In his address to a joint session of U.S. Congress on April 27, 2023, South Korean President Yoon Suk Yeol touched upon the issue of disinformation, saying “Today in many parts of the world, false propaganda and disinformation are distorting the truth and public opinion - they are threatening democracy.” President Yoon added that totalitarian forces may conceal and disguise themselves as defenders of democracy or human rights, but in reality, they deny freedom and democracy. He also stressed that democracies must work together and fight the forces of falsehood and deception that seek to destroy democracy and the rule of law. 
    The rapid spread of disinformation on a large scale, mainly on social media platforms such as Twitter, Facebook, and YouTube, is not simply a matter of malicious actors seeking profits through the distortion of facts in cyberspace or creating and fueling political bias. Amid the intensifying U.S.-China rivalry and competition among rival blocs, a country’s narratives and discourse are increasingly being weaponized as tools to gain the upper hand in a global battle over values, ideology, and political systems. And in this battle of words and ideas, AI algorithms have become digital bullets attacking competitors’ national institutions and their political legitimacy.
    Since the 2016 presidential election, the U.S. has repeatedly seen its public opinion deeply affected by AI-powered computational propaganda originating in authoritarian states such as Russia and Iran. Europe has also been affected by disinformation campaigns from outside, including foreign interference in the 2016 Brexit referendum and the spread of disinformation across France and Germany during electoral periods. Countries in the West suspect that Russia and Iran are behind such campaigns. The act of distributing disinformation online during election cycles is nothing short of launching influence operations in peacetime.
    Using chatbots and bot armies to distort public opinion, amplify social conflicts, interfere in elections, and undermine the political legitimacy of foreign governments is an act of subversion. The West considers the state-sponsored distribution of disinformation cyber terrorism and interference in sovereignty, which explains why a range of multinational cyber military exercises jointly or individually conducted by the U.S. and Europe feature a simulated spread of disinformation. 
    It is in this context that President Yoon highlighted the seriousness of the disinformation issue in his April speech. He tried to send a compelling message that Korea shares America’s concern over disinformation as its ally and is fully committed to addressing the issue alongside democratic nations. 
    
    Ⅳ. Cyber Threats Posed by AI Chatbots  
    
    AI chatbots were originally designed to improve human knowledge, but they are prone to adversarial attacks and could be fed malicious data for the creation of disinformation. Chatbots could also be abused to launch a set of cyber threats that we are already well aware of. One of the most common issues users will likely face is chatbots leaking personal data. Chatbots are more “data-hungry” than traditional search engines, so they ask users a broad range of questions to engage them in various conversations. A smooth, intimate conversation with a chatbot could make users let their guard down and voluntarily share their sensitive personal data.
    Aside from the data privacy concerns triggered by users giving away their information, there is the issue of hackers attacking chatbot service providers to steal personal information. Chatbots collect a vast trove of data through which they can profile a specific user, such as IP address, voice, device information, location, social media activity, phone number, and email address. Chatbot providers have access to users’ conversations with the bot, and even if users hide their real names or use a VPN to conceal their IP addresses, their identity could be revealed as bots accumulate data. And if criminal hacker groups gain access to chatbot users’ data, this could open up ways for more destructive attacks against individuals or groups. Actors with malicious intentions can even learn various ways to launch cyberattacks from ChatGPT.
    It is also forecast that cyber malign influence operations using AI chatbots, which can occur on various social media platforms and in the metaverse, will complicate the current picture further. AI chatbots will likely facilitate peacetime cyber operations aimed at affecting a target country’s public opinion and domestic political process, as well as wartime cyber operations and psychological warfare. Waging interactive psychological warfare or committing organized crimes tailored to the targets’ personal ideology, values, and political orientation could also become possible.
    
    Ⅴ. Liberal Democracies’ Responses to Disinformation and Fake News
    
    Despite the advancement of tracking and identification technologies for detecting individual users’ various activities or hackers’ attacks in cyberspace, government agencies still struggle more with regulating the flow of information during peacetime than during wartime. In general, most liberal democracies, including the U.S., tend not to regulate the flow of external information by its content because, unlike their authoritarian counterparts, they support the “free cross-border movement of information” in principle.
    However, the U.S. has implemented military and security responses to data misuse, disinformation attempts by foreign perpetrators, and the flow of misleading information into its domestic online space around elections. In the run-up to the 2022 U.S. midterm elections, the U.S. Cyber Command and the National Security Agency aligned their efforts to establish the Joint USCYBERCOM-NSA Election Security Group (ESG) to disrupt and deter foreign adversaries’ ability to interfere with the election results and to thwart cyberattacks from Russia, China, and Iran. Interagency partners such as the U.S. Air Force, the FBI, and the DHS have joined the ESG’s efforts.
    In the case of Korea, there is no legal framework for regulating the spread of fake news and disinformation activities on portal sites and social media platforms. Although the Korean government can crack down on disinformation activities and ads harmful to teenagers based on the Broadcasting Act, the Act does not provide a legal basis for sanctioning content in cyberspace.
    
    Ⅵ. Policy Recommendations: Responses to Disinformation Campaign and Data Misuse
    
    In today’s complex and dynamic communication environment, which we and AI algorithms have created together, what efforts should government agencies make to improve the public’s discernment in identifying false information, foster a secure information space, and boost AI’s reliability?
    Establishing a legal and self-regulatory framework for false information will inevitably take time and effort through various debates and discussions, since the National Assembly must lead the relevant efforts. So, one of the most effective responses against disinformation activities the Korean government can formulate and implement along the way would be monitoring malicious disinformation activities in cyberspace around the clock and providing the public with easy access to accurate, fact-based information. In addition, public consensus on the need for strong responses to false information could accelerate the process of establishing the legal framework, which is why fact-checking and the provision of accurate information matter.
    To this end, it is advised to operate an exclusive platform designed to collect and integrate information gathered by each government agency to detect false information and implement countermeasures. It is also recommended to establish and operate open platforms, situation rooms, or centers that help debunk disinformation content through cross-checking, to raise public awareness of disinformation actors’ main tactics and narratives. It is worth noting that the EU established an Information Sharing and Analysis Centre (ISAC) under the European External Action Service (EEAS) in February 2023 to block state-sponsored disinformation campaigns against Europe. The center is in charge of orchestrating the real-time responses of the EU’s various cyber security agencies and private-sector organizations to the deliberate dissemination of hostile and false information masterminded mostly by authoritarian adversaries, including China and Russia.
    European countries had already been operating “Setting the Record Straight,” a website set up to quickly detect and combat state-sponsored disinformation campaigns in one place. The website provides various speeches, names, interviews, videos, and images from around the world in multiple languages to identify and disclose disinformation activities, aggressively and continuously tracking false information and spreading NATO’s discourse. The website “EU vs. Disinfo” (euvsdisinfo.eu) also has a separate tab focused on refuting the major narratives and disinformation efforts coordinated by Russia and China to defame liberal democracies.
    National intelligence agencies and related ministries should also be able to respond to systematic disinformation campaigns or cyber psychological warfare activities led by foreign adversaries and designed to incite and stoke social fear and confusion in the event of a national crisis. South Korea has the “Basic Guidelines for National Crisis Management (Presidential Order No. 299),” which articulate how to respond “in the event of a crisis,” meaning that the flow of hostile and misleading information could itself be defined as a “crisis.”
    While building response capabilities in advance to identify, deter, and degrade adversaries’ ability to carry out disinformation campaigns tops government agencies’ priority list, it is equally important for private-sector actors and civil society to foster relevant capabilities. Delegating some of the government agencies’ authority and capabilities for identifying and responding to false information during peacetime could be an effective and viable option, and would help promote public-private cooperation in fighting state-sponsored disinformation campaigns in the coming years.
    To effectively mitigate or face down state-sponsored malign influence campaigns and malicious cyber disinformation activities orchestrated by foreign adversaries, including North Korea, the Korean government needs to gain a legal and institutional foothold to make clear-eyed decisions in allocating roles among government agencies and aligning interagency partners’ strategies and efforts, just as the U.S. government has done to promote election security. It is also recommended to consider military approaches to dealing with malicious state-sponsored disinformation campaigns that undermine our sovereignty and security. Foreign adversaries’ state-sponsored disinformation campaigns or homegrown threats aimed at undermining state sovereignty, upending the social system, or even overthrowing democratic governments should be met with stronger security measures. Incorporating the distribution of manipulated information into the cyber simulation training conducted by the Korean government would therefore also be a useful way to prepare responses for emergencies and wartime.
    Another factor worth noting is that the Korean government needs to provide the public, private-sector actors, and civil society with easy access to accurate information and guidelines through education, training, and advice at all times, to foster a level of sensitivity and situational awareness of cyber security threats equivalent to that of governments and the international community. And as new platforms such as the metaverse and ChatGPT could open up new avenues for a wide spectrum of organized crimes - hacking, pornography, gambling, drug trafficking, fraud, leaks of state secrets, espionage, malign influence operations, and terrorist plots - how the public perceives cyber security is closely and directly linked to a state’s national security interests.
    Therefore, government agencies engineered and operated to advance cyber security interests need to work continuously to promote the public’s understanding of the values and principles shared by the international community, including the UN - “trustworthy AI,” “securing the safety and reliability of cyberspace,” and “norms of responsible state behavior in cyberspace” - through the various open training programs and discussions organized and led by IT professionals.
    
    * Attached File
mofadocu:relatedCity
mofadocu:category
  • IFANS Focus
mofa:relatedPerson
mofa:relatedOrg
mofa:relatedEvent
foaf:isPrimaryTopicOf
mofa:yearOfData
  • "2023"^^xsd:integer
mofapub:dataURL
  • "https://www.ifans.go.kr/knda/ifans/eng/pblct/PblctView.do?csrfPreventionSalt=null&pblctDtaSn=14212&menuCl=P11&clCode=P11&koreanEngSe=ENG"^^xsd:anyURI
mofapub:hasAuthor
  • SONG Tae Eun
mofapub:hasProfessor
mofapub:pubDate
  • "20230522"^^xsd:integer
mofapub:pubNumber
  • 2023-08E
dcterms:language
  • ENG

This page publishes ontology data as Linked Data.