Legal aspects of artificial intelligence in ensuring national security: challenges and threats
https://doi.org/10.52468/2542-1514.2025.9(4).36-46
Abstract
The subject. Artificial intelligence (AI) opens up wide opportunities for strengthening national security, but also raises a number of legal and ethical issues that need to be carefully analyzed and resolved. The implementation of a government strategy aimed at improving tools and approaches to information protection using AI is a key aspect of ensuring national security and maintaining law and order.
The aim of the article was to confirm the hypothesis that ensuring national security depends on algorithms implemented in artificial intelligence (AI) technologies.
Methodology. The study rests on the criteria formulated in Decree of the President of the Russian Federation No. 124, which serve as the basis on which trusted AI systems are developed. Legal acts, acts of technical regulation and scholarly literature were analyzed.
Main results. In areas where national security is at stake, the use of trusted AI technologies is becoming mandatory. Under Presidential Decree No. 124, trusted AI technologies are defined as technologies that meet safety requirements and are developed on the principles of objectivity, non-discrimination and ethics. Their use must exclude the possibility of harming a person, violating human rights and freedoms, or damaging the interests of society and the state. Only two of the principles formulated in the Decree are actually disclosed: non-discrimination and objectivity. Although the terms "ethical norms" and "ethics" are used in AI legislation, ethics is not defined as an independent principle. Ethics in the field of AI can be understood as a system of views and principles aimed at addressing the moral and social consequences associated with the use of the various data processing models embedded in AI technologies. The principle of objectivity, or impartiality, concerns the ability of a system to inspire confidence and to exclude unjustified bias in the estimates it produces; it is closely intertwined with the principles of transparency of AI systems and of data protection and security. To achieve objectivity in data processing by AI systems, high-quality and representative datasets obtained from reliable sources must be used. The implementation of the principle of non-discrimination contains certain contradictions that manifest themselves in the context of national security: although the principle is intended to exclude the use of AI algorithms for processing and analyzing data on the basis of discriminatory criteria, the state may, where necessary, apply other criteria in order to ensure national security.
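For illustration only (this sketch is not part of the article or of Decree No. 124): a minimal, hypothetical Python example of how the two disclosed principles might be operationalized in a data-preparation step, dropping attributes that would serve as discriminatory criteria (non-discrimination) and rejecting datasets that are incomplete or unrepresentative (objectivity). The function name, the protected-attribute list, the column names and the representativeness threshold are all assumptions made for the example.

```python
# Hypothetical illustration only; not taken from the article or from Decree No. 124.
import pandas as pd

# Attributes assumed here to count as "discriminatory criteria".
PROTECTED_ATTRIBUTES = ["ethnicity", "religion", "gender"]

def prepare_trusted_dataset(df: pd.DataFrame,
                            group_column: str = "gender",
                            min_group_share: float = 0.05) -> pd.DataFrame:
    """Drop discriminatory criteria (non-discrimination) and check data quality (objectivity)."""
    # Objectivity: reject incomplete data, since unreliable sources undermine unbiased estimates.
    if df.isna().any().any():
        raise ValueError("Dataset contains missing values and cannot be considered reliable.")

    # Objectivity: reject unrepresentative data in which some group is barely present.
    group_shares = df[group_column].value_counts(normalize=True)
    if (group_shares < min_group_share).any():
        raise ValueError("Dataset is not representative: some groups are under-represented.")

    # Non-discrimination: remove protected attributes so a model cannot condition on them.
    return df.drop(columns=[c for c in PROTECTED_ATTRIBUTES if c in df.columns])
```

A real trusted-AI pipeline would, of course, rest on the requirements of Decree No. 124 and applicable technical regulation rather than on such ad hoc checks.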
Conclusions. The implementation of the above principles, the creation of a trusted environment for the use of AI technologies, and the development of trusted AI technologies together ensure information security and the effectiveness of the system that protects the interests of citizens and society, information, and the information infrastructure, thereby ensuring national security.
About the Author
A. K. Zharova, Russia
Anna K. Zharova – Doctor of Law, Associate Professor, Leading Researcher
10, Znamenka ul., Moscow, 119019
ResearcherID: H-4012-2015
References
1. Bachilo I.L. Legal provision of information security at a new stage in the development of the information society. Available at: http://igpran.ru/public/articles/BachiloIL.2014.1..pdf. (In Russ.).
2. Strakhov A.A., Dubinina N.M. About data leakage and DLP systems. Kriminologicheskii zhurnal = Criminological journal, 2022, no. 4, pp. 226–232. DOI: 10.24412/2687-0185-2022-4-226-232. (In Russ.).
3. Gromov Yu.Yu., Drachev V.O., Vojtjuk V.V., Rodin V.V., Samkharadzce T.G. Analysis of existing methods of information protection in modern information systems. Inzhenernaya fizika = Engineering physics, 2008, no. 2, pp. 67–69. (In Russ.).
4. Zharova A.K., Elin V.M., Avetisyan B.R. Prevention of computer attacks such as man in the middle, committed using generative artificial intelligence. Voprosy kiberbezopasnosti = Cybersecurity issues, 2024, no. 6 (64), pp. 28–41. DOI: 10.21681/2311-3456-2024-6-28-41. (In Russ.).
5. Denning P.J., Arquilla J. The context problem in artificial intelligence. Communications of the ACM, 2022, vol. 65, iss. 12, pp. 18–21. DOI: 10.1145/3567605.
6. Cheruvalath R. Artificial Intelligent Systems and Ethical Agency. Journal of Human Values, 2023, vol. 29, no. 1, pp. 33–47. DOI: 10.1177/09716858221119546.
7. Zharova A.K. The intelligent image and meaning recognition systems in the crime prevention system. Trudy po intellektual'noi sobstvennosti = Works on Intellectual Property, 2024, vol. 49, no. 2, pp. 16–23. DOI: 10.17323/tis.2024.21708. (In Russ.).
8. Zharova A.K. Intellectual systems of pattern and meaning recognition in the system of prevention of crimes committed using the Internet. Russian Journal of Economics and Law, 2024, vol. 18, no. 2, pp. 469–480. DOI: 10.21202/2782-2923.2024.2.469-480. (In Russ.).
9. Tsygichko V.N., Alekseeva I.Yu. Information challenges to national and international security, ed. by A.V. Fedorov and V.N. Tsygichko. Moscow, PIR Center for Political Studies Publ., 2001. 328 p. (In Russ.).
10. Gulabyan A.O., Smirnov V.M. The main directions of information protection and information systems resources. Tendentsii razvitiya nauki i obrazovaniya, 2022, no. 88-1, pp. 45–46. DOI: 10.18411/trnio-08-2022-13. (In Russ.).
11. Ruzanova E.A. Means of information protection in LAN, in: Theory and practice of modern science: the view of youth, Proceedings of the III All-Russian Scientific and Practical Conference in English (St. Petersburg, November 30, 2023), in 2 parts, St. Petersburg, Saint Petersburg State University of Industrial Technologies and Design Publ., 2024, pt. 2, pp. 166–169.
12. Roizenson G.V. Modern approaches of formalization of the concept of ethics in artificial intelligence, in: Proceedings of the International Conference on Computational and Cognitive Linguistics TEL-2018 (Kazan, October 31 – November 03, 2018), in 2 volumes, Kazan, Academy of Sciences of the Republic of Tatarstan Publ., 2018, vol. 1, pp. 306–331. (In Russ.).
13. Gallese Nobile C. Regulating Smart Robots and Artificial Intelligence in the European Union. Journal of Digital Technologies and Law, 2023, vol. 1, no. 1, pp. 33–61. DOI: 10.21202/jdtl.2023.2.
14. Kharitonova Yu.S. Legal Means of Providing the Principle of Transparency of the Artificial Intelligence. Journal of Digital Technologies and Law, 2023, vol. 1, no. 2, pp. 337–358. DOI: 10.21202/jdtl.2023.14.
15. Vasilevskaya L.Yu. The code of ethics for artificial intelligence: a legal myth and reality. Grazhdanskoe pravo = Civil law, 2023, no. 2, pp. 19–22. (In Russ.).
16. Laruel F. Two ethical principles in the technological world, transl. by E. Rudneva. Filosofskaya antropologiya, 2015, vol. 1, no. 1, pp. 49–61. (In Russ.).
17. Zharova A.K. Achieving Algorithmic Transparency and Managing Risks of Data Security when Making Decisions without Human Interference: Legal Approaches. Journal of Digital Technologies and Law, 2023, vol. 1, no. 4, pp. 973–993. DOI: 10.21202/jdtl.2023.42.
18. Begishev I.R., Zharova A.K., Zaloilo M.V., Filipova I.A., Shutova A.A. Digital and Nature-like Technologies: Features of Legal Regulation. Journal of Digital Technologies and Law, 2024, vol. 2, no. 3, pp. 493–499. DOI: 10.21202/jdtl.2024.25.
19. Begishev I.R., Khisamova Z.I. Artificial intelligence and criminal law. Moscow, Prospekt Publ., 2024. 192 p. (In Russ.).
20. Zharova A.K. Informational and psychological violence in criminal law. Informatsionnoe obshchestvo, 2024, no. 2, pp. 103–111. (In Russ.).
21. Halezov S.A. Ethics of artificial intelligence, in: Nauka. Tekhnika. Chelovek: istoricheskie, mirovozzrencheskie i metodologicheskie problemy, Interuniversity collection of scientific papers, iss. 13, Moscow, Moscow State Technical University of Civil Aviation Publ., pp. 240–246. (In Russ.).
22. Sushchin M.A. Taddeo M. Three ethical challenges of artificial intelligence applications in the field of cybersecurity (Abstract), in: Bulavinov M.P. (ed. & comp.). Etika nauki, Collection of reviews and abstracts, Moscow, Institute of Scientific Information on Social Sciences of the Russian Academy of Sciences Publ., 2022, pp. 175–178. (In Russ.).
23. Perevozchikova D.A., Zaripova M.Y. Ethics of artificial intelligence. Nanotekhnologii: nauka i proizvodstvo, 2024, no. 3, pp. 26–30. (In Russ.).
24. Lizikova M.S. Ethical and legal issues of artificial intelligence development. Trudy Instituta gosudarstva i prava Rossiiskoi akademii nauk = Proceedings of the Institute of State and Law of the RAS, 2022, vol. 17, no. 1, pp. 177–194. DOI: 10.35427/2073-4522-2022-17-1-lizikova. (In Russ.).
25. Dodonova V., Dodonov R., Gorbenko K. Ethical Aspects of Artificial Intelligence Functioning in the XXIst Century. Studia Universitatis Babeş-Bolyai. Philosophia, 2023, vol. 68, no. 1, pp. 161–173.
26. Giarmoleo F.V., Ferrero I., Rocchi M., Pellegrini M.M. What ethics can say on artificial intelligence: Insights from a systematic literature review. Business and Society Review, 2024, vol. 129, iss. 2, pp. 258–292. DOI: 10.1111/basr.12336.
For citations:
Zharova A.K. Legal aspects of artificial intelligence in ensuring national security: challenges and threats. Law Enforcement Review. 2025;9(4):36-46. https://doi.org/10.52468/2542-1514.2025.9(4).36-46