Artificial intelligence and international psychological security
The development of artificial intelligence and machine learning, embedded systems and devices, the Internet of Things, augmented and virtual reality, big data analysis (data science), cloud computing, blockchain and related technologies is stimulating the transition to a new technological order. The attention of the academic community to this range of problems is evidenced by the active discussion that took place at the panel discussion “Malicious use of artificial intelligence and international psychological security” at the UNESCO conference in Khanty-Mansiysk and continued at a research seminar on the same topic at the Diplomatic Academy of the Ministry of Foreign Affairs of the Russian Federation.
The II International Conference “Tangible and Intangible Impact of Information and Communication in the Digital Age” was held within the framework of the UNESCO Intergovernmental Information for All Programme (IFAP) and the XI International IT Forum with the participation of BRICS and SCO countries in Khanty-Mansiysk on June 9-12, 2019. Academic support for the event was provided by the International Centre for Social and Political Studies and Consulting (ICSPSC), the European-Russian Communication Management Network (EU-RU-CM Network) and the Russian-Latin American Strategic Studies Association (RLASSA).
The conference was supported by the publishing house “Ugra News”, the Institute for Political, Social and Economic Studies – EURISPES (Rome), the Association of Studies, Research and Internationalization in Eurasia and Africa – ASRIE (Rome), the Geopolitics of the East Association (Bucharest), the International Association “Eurocontinent” (Brussels) and the International Institute for Scientific Research – IIRS (Marrakech).
Governor of Ugra Natalia KOMAROVA took part in the conference. Opening the panel discussion “Malicious use of artificial intelligence and international information and psychological security”, the head of the region recalled that the Ugra Declaration, adopted a year earlier at the first UNESCO conference, had included proposals for the preparation of a world report on socio-cultural transformations in the digital age and for the development of educational programs on the ethical, legal, cultural and social aspects of life.
The conference, held with the academic support of the EU-RU-CM Network, was attended by the network's coordinators and members: Darya Bazarkina (Russia), Evgeny Pashentsev (Russia), Olga Polunina (Russia), Marco Ricceri (Italy), Gregory Simons (Latvia/New Zealand/Sweden), Pierre-Emmanuel Thomann (Belgium) and Marius Vacarelu (Romania).
At the opening of the conference Evgeny PASHENTSEV, Leading Researcher, Diplomatic Academy of the Ministry of Foreign Affairs of the Russian Federation; Director, International Centre for Social and Political Studies and Consulting (Moscow, Russia); Coordinator of the European-Russian Communication Management Network (EU-RU-CM Network); and Senior Researcher, St. Petersburg State University, presented a paper on “Artificial Intelligence: Current and Promising Threats to International Psychological Security”. He stressed that the low level of civil self-organization of society and the lack of established progressive counter-elites testify to a crisis not only of the “top” but also of the “bottom” strata of society, that is, to a civilizational crisis. The only way to solve the problems facing humanity is to secure access to information and to modern means of processing, analysing and disseminating it. On this basis it is possible to propose scientifically grounded models of the progressive development of humankind and to discuss them both in the professional environment and at public forums. Artificial intelligence can be of great help in the processing, analysis and verification of research results and in the implementation of relevant social development programs. “Weak AI” will not replace human beings’ intellectual and creative abilities, will, aspirations and feelings, while “Strong AI”, equal or superior to the modern human mind, remains a matter of the future. Unfortunately, the rapidly growing practice of using AI to manipulate public consciousness at the international level once again testifies to the large and dangerous potential for negative consequences from the use of new technologies.
International psychological security (IPS) means protecting the system of international relations from negative information and psychological influences associated with various factors of international development. The latter include targeted efforts by various state, non-state and supranational actors to achieve partial/complete, local/global, short-term/long-term, and latent/open destabilization of the international situation in order to gain competitive advantages, even through the physical elimination of the enemy. In fact, the modern global world is witnessing hybrid warfare in the system of international relations, which has never completely stopped throughout history; rather it has had natural periods of exacerbation. We have clearly entered a long-term transition period in the development of humanity and the system of international relations in particular, which is accompanied by irregularly growing psychological warfare.
Darya BAZARKINA, Professor, Russian Presidential Academy of National Economy and Public Administration; Senior Researcher, Saint Petersburg State University (Moscow, Russia), noted that the threats posed by the use of artificial intelligence by terrorist organizations can be divided into two groups:
1) Use of AI for the destruction of physical objects and for killing or harming citizens;
2) Use of AI in the propaganda activities of terrorist groups.
It is no accident that the organization “Islamic State” (IS) is actively recruiting specialists in the field of high technologies. Even now, terrorists are experimenting with cryptocurrencies, which allow funds to be transferred across borders while avoiding bank controls. It is already clear that machine learning technologies are becoming increasingly available. Drones are already equipped with AI, and the use of military equipment that can operate without human help has become the subject of lively discussion. Unfortunately, the documented use of social media, encryption and drones by terrorists suggests that once new technologies become widely available to the consumer, terrorists will also be able to use them.
Fatima ROUMATE, Associate Professor, Mohamed V University; President, Institut International de la Recherche Scientifique (Marrakech, Morocco), considers that international actors are using AI to achieve their own beneficial goals while investing ever more effort in limiting their vulnerabilities. The consequence is that international society faces the psychological impact of untrusted information, which influences policymakers' decisions and political change in global affairs.
The malicious use of AI leads us to think about one of its most important negative impacts: attacks on democracy. In fact, AI is not only expanding existing threats, it is creating new ones. Spear phishing attacks, for example, have increased significantly since 2016 in several countries, such as Canada, France, Italy and the USA, where attacks against specific targets account for more than 86% of all phishing attacks.
The malicious use of AI creates new challenges for the state as the original actor in international relations. This invites researchers and policymakers to rethink many concepts linked to the notion of the state, such as sovereignty, diplomacy and security, in light of the appearance of new notions such as artificial diplomacy, cybersecurity and cyberwar.
The malicious use of AI also imposes new challenges related to international law and human rights, especially with regard to the Charter of Human Rights and Principles for the Internet, which recognizes access to the internet as a fundamental right. The AI age is a new phase in the development of international law, which remains heavily traditional. In the same context, the appearance of Lethal Autonomous Weapons (LAWs) has created a controversial discussion among states and requires an urgent review of the use of force as set out in the UN Charter. Competition among states over LAWs leads us to think that the current trade crisis between China and the USA could escalate into an open military conflict involving AI weapons.
Aleksandr RAIKOV, Leading Researcher, Institute of Control Sciences, Russian Academy of Sciences (Moscow, Russia), stressed that AI is currently developing within the digital economy. It increasingly penetrates the socio-humanitarian and industrial spheres and helps to resolve issues of state and municipal government.
AI is a technology that enhances a person's creative possibilities and helps them in their work. AI makes it possible to understand and use the power of the human mind and to come closer to the mystery of the human spirit. However, whereas AI used to be a harmless helper with routine human tasks, it has already become a dangerous competitor for any employee.
At the same time, AI capabilities are expanding and deepening. AI penetrates ever deeper into the secrets of the sensual and emotional levels of human perception, human meditative abilities and the collective unconscious. With this, features of the next generation of AI, Artificial Super-Intelligence (ASI), begin to appear: “an intellect which is much smarter than the best human mind in almost all areas, including scientific creativity, wisdom and social skills”. With the advent of ASI, danger to society cannot be excluded, and on the way to its creation there are traps in which it is capable of causing irreparable damage to society.
These traps are due to stereotypes in the conduct of scientific research, insufficient disciplinary coverage and the lack of relevant international collaboration. But sooner or later these limitations in the development of AI will be removed and ASI will enter the arena. This requires the development of interdisciplinary basic research, a more critical attitude to digitalization, the immersion of information models in infinite-dimensional spaces, the removal of contradictions between quantum mechanics and the theory of relativity, an appeal to the potential of space and much more. It is also necessary to start teaching people about the future!
Pierre-Emmanuel THOMANN, President/Founder, Eurocontinent (Brussels, Belgium), considers that AI will contribute to changing the power hierarchy and the international order in the 21st century, accelerating the dynamics in which new technology and power mutually reinforce each other. AI has the potential to transform the paradigms of geopolitics through new relationships between territories, spatio-temporal dimensions and immateriality. Geopolitics today is characterized by rivalry between states, alliances of states or private actors for control of different spaces: ground, sea, air and cyberspace. The emergence of AI adds a new dimension, namely space-time dominance. Alliances of states able to exert full-spectrum dominance across the different spaces of ground, sea, air, cyberspace and space-time (AI) will gain a decisive geopolitical advantage, because the mastery of territory and time in the service of a political objective is a central element of sovereignty.
AI will influence geopolitics not only at a tactical level but also at a more strategic, long-term level. At the tactical level, the malicious use of AI can have direct effects on the balance of power in a conflict for geopolitical influence between rival states.
On the more strategic level, the introduction of AI might lead to reinforced competition between actors for full-spectrum dominance, a combination of ground, sea, air, cyberspace and space-time (AI) dominance, and result in the transformation of the global geopolitical configuration. AI research programmes require the accumulation of data in order to develop; big data is therefore the fuel of AI. The geopolitical balance will probably shift between the actors and states that possess AI capacity and big data sovereignty and those that lack technological sovereignty and are dependent on other states or private actors.
Erik VLAEMINCK, Researcher, University of Edinburgh; Research Associate, International Cultural Relations Ltd (London, UK), noted that throughout history changes in technology have impacted societies and peoples all over the world in the most thorough ways, often for the better but also for the worse. The latest developments in the sphere of technology and communication are no exception to this pattern. From the digitalisation of our economies and the rise of social media to advances in the field of machine learning and AI, the impact on people's daily lives is tremendous and most probably still in its initial phase.
Besides the many benefits, among them interconnectivity and the partial erasure of (geographical) boundaries between people, societies and economies, this new technological paradigm has also brought various challenges and threats to our societies and democratic institutions, well exemplified by the dissemination of propaganda and fake news, the hacking of elections and the manipulation of political identities on a global scale. Future advances in the field of AI might worsen these threats considerably, as state and non-state actors with bad intentions might turn them against society in the pursuit of political interests. In order to counter these potential threats, it will be important to conduct more research and to advocate for international cooperation.
Discussion of the problems of the malicious use of AI continued on June 14 at the research seminar “Artificial Intelligence and Challenges to International Psychological Security”. The seminar was organized by the Centre for Euro-Atlantic Studies and International Security at the Diplomatic Academy of the MFA of Russia and the International Centre for Social and Political Studies and Consulting, with the academic support of the European-Russian Communication Management Network and the Department of International Security and Foreign Policy of Russia, Russian Presidential Academy of National Economy and Public Administration.
The participants of the seminar adopted a final document aimed at explaining to the authorities and civil society institutions the threats associated with AI tools falling into the hands of criminal actors. (Also published here: https://www.alainet.org/en/articulo/200955).
Darya Bazarkina, DSc, Professor at the Chair of International Security and Foreign Policy of Russia, RANEPA; Research Coordinator on Communication Management and Strategic Communication, International Centre for Social and Political Studies and Consulting.
Mark Smirnov, Research Intern of the International Centre for Social and Political Studies and Consulting.