
Law Enforcement Review


Criminological classification of robots: risk-based approach

https://doi.org/10.52468/2542-1514.2021.5(1).185-201

Abstract

The subject of the research is the key criminal risks in robotics. The purpose of the article is to confirm or refute the hypothesis that the key criminal risks of using robots can be identified and classified. The author seeks to describe the key aspects of applying a risk-based approach to the assessment of robotic activities, to identify the key risks of using robots, and to propose their criminological classification. The methodology includes the formal logical method, the systematic approach, formal legal interpretation of legal acts and academic literature, and SWOT analysis. The main results of the study. The author applies the main provisions of criminal riskology to the assessment of encroachments involving robots. The key risks and challenges of using robots are identified. The severity of the consequences of harm caused by robots (from minor to critical) is assessed, and a matrix of the probability of its occurrence is provided. The author's criminological classification of robots is based on the risk-based approach and is substantiated on two grounds: the category of public danger and the potential severity of the consequences of harm caused by robots. The causal complex that can give rise to criminal risks in robotics is identified; the grounds of such risks are divided into those related to the mechanical subsystem, the digital subsystem and the power supply subsystem of robots. Conclusions. The risk-based approach is the most progressive and effective basis for regulating criminal law relations in robotics. The author demonstrates the existence of real risks posed by the use of robots to the peace and security of mankind, human life and health, wildlife and inanimate material objects. It is necessary to recognize robotics as a source of increased potential criminal danger and to adopt appropriate regulation as soon as possible. The necessity and expediency of applying a risk-based approach to robotics are theoretically substantiated, and the characteristics of robots that matter for assessing the criminal potential of their exploitation are evaluated. The conclusions and recommendations of this paper may become a basis for implementing the risk-based approach in the legal regulation of robotics. The risk matrix presented in the article can be used to establish a framework for regulatory impact on robotics, to assess the consequences of potential harm and to minimize it.
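As an illustration only, the following minimal sketch (in Python) shows one way a severity-by-probability risk matrix of the kind described in the abstract could be encoded. The category names, the multiplicative scoring and the thresholds are assumptions made for the example; they are not taken from the article's actual matrix.

# Illustrative sketch: a generic severity-by-likelihood risk matrix.
# Category names, scoring and thresholds are assumptions for demonstration,
# not the matrix proposed in the article.
from enum import IntEnum

class Severity(IntEnum):
    MINOR = 1
    MODERATE = 2
    MAJOR = 3
    CRITICAL = 4

class Likelihood(IntEnum):
    RARE = 1
    POSSIBLE = 2
    LIKELY = 3
    ALMOST_CERTAIN = 4

def risk_level(severity: Severity, likelihood: Likelihood) -> str:
    """Map a (severity, likelihood) pair to a coarse risk level."""
    score = severity * likelihood  # simple multiplicative scoring
    if score >= 12:
        return "critical"
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# Hypothetical example: an autonomous delivery robot in a pedestrian zone.
print(risk_level(Severity.MAJOR, Likelihood.POSSIBLE))  # -> "high"

In practice the multiplicative score would be replaced by an explicit lookup table, which mirrors how regulatory risk matrices are usually drawn.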

About the Author

I. R. Begishev
Kazan Innovative University named after V.G. Timiryasov (IEML)
Russian Federation

Ildar R. Begishev – PhD in Law, Honoured Lawyer of the Republic of Tatarstan, senior researcher

42, Moskovskaya ul., Kazan, 420111

Scopus AuthorID: 57205305394

ResearcherID: T-2409-2019

RSCI SPIN code: 8859-9395; AuthorID: 595003



For citations:


Begishev I.R. Criminological classification of robots: risk-based approach. Law Enforcement Review. 2021;5(1):185-201. https://doi.org/10.52468/2542-1514.2021.5(1).185-201



This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2542-1514 (Print)
ISSN 2658-4050 (Online)