
Cybersecurity in AI: balancing innovation and risks

Dmitry Fonarev, Senior Public Affairs Manager, Kaspersky

The technology landscape has witnessed the emergence of systems enabled by Artificial Intelligence (AI) on an unprecedented scale. According to Kaspersky research, more than 50% of companies have implemented AI and the Internet of Things (IoT) in their infrastructure, with a further 33% planning to adopt these interconnected technologies within two years.

However, nascent technologies are accompanied by new cybersecurity risks and attack vectors. In its latest study, “Cyber defense & AI: Are you ready to protect your organization?”, Kaspersky gathered the opinions of IT Security and Information Security professionals working for SMEs and enterprise-level companies on the new challenges of protecting their organizations against cyberattacks involving the use of AI. Most respondents (76%) said that the number of cyberattacks on their companies had increased over the last 12 months, and 46% believe that the majority of those attacks involved the use of AI.

The concept of security in the development of AI systems has been thrust to the forefront of various regulatory initiatives, such as the EU AI Act or the Singapore Model AI Governance Framework for Generative AI, which aim to minimize the associated cyber risks. Despite these regulatory advances, there remains a gap between the general frameworks and their practical implementation at a more technical level. The workshop session “Cybersecurity in AI: balancing innovation and risks”, hosted by Kaspersky at the 19th Annual Meeting of the Internet Governance Forum (IGF) in Riyadh, Kingdom of Saudi Arabia, focused on the essential cybersecurity requirements that should be considered throughout the design, deployment and operation of AI systems.

The session began with a debate on the concept of trust in AI. Allison Wylde, a team member (interoperability) of the UN IGF Policy Network on AI (PNAI), emphasized that trust in AI is subjective and depends largely on cultural and individual factors. She stressed the importance of defining and better understanding this concept to ensure proper transparency and reliability, and called for more quantified and measurable indicators. Ms. Wylde also argued that it is imperative to adopt a zero-trust approach to AI systems, highlighting the need for continuous verification of both models and their data prior to deployment.
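
The verification step Ms. Wylde describes can be made concrete. Below is a minimal, hypothetical Python sketch of a pre-deployment check: model and dataset artifacts are hashed and compared against a trusted manifest before anything is loaded. The file names and the `manifest.json` layout are assumptions made purely for illustration, not part of any framework discussed at the session.

```python
# Hypothetical pre-deployment check: verify model and data artifacts
# against known-good SHA-256 digests recorded in a trusted manifest.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifacts(manifest_path: Path) -> bool:
    """Compare every artifact listed in the manifest with its on-disk hash."""
    manifest = json.loads(manifest_path.read_text())
    ok = True
    for name, expected in manifest.items():
        actual = sha256_of(Path(name))
        if actual != expected:
            print(f"MISMATCH: {name} (expected {expected[:12]}..., got {actual[:12]}...)")
            ok = False
    return ok

if __name__ == "__main__":
    # manifest.json (hypothetical) maps artifact paths to expected digests,
    # e.g. {"model.onnx": "ab12...", "train_data.csv": "cd34..."}
    if not verify_artifacts(Path("manifest.json")):
        raise SystemExit("Refusing to deploy: artifact verification failed")
```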

Yuliya Shlychkova, Vice President of Government Affairs and Public Policy at Kaspersky, provided a brief overview of the current cyberthreat landscape in relation to AI, which – like any software – is not completely immune to attack. Notably, AI is increasingly being used by cybercriminals to automate their intrusions. In addition, AI systems themselves can be exploited through data poisoning, prompt manipulation or backdoors. She noted that cybersecurity awareness within organizations is particularly important, as many employees unknowingly expose sensitive information when using AI models.
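
The point about employees exposing sensitive information can be illustrated with a simple guardrail. The hypothetical Python sketch below redacts strings that look like secrets from a prompt before it is sent to an external AI service; the patterns and the example prompt are assumptions for illustration and are no substitute for a proper data loss prevention policy.

```python
# Hypothetical guardrail: scan outgoing prompts for strings that look like
# secrets and redact them before the prompt leaves the organization.
import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API-key-like token": re.compile(r"\b[A-Za-z0-9_\-]{32,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = ("Summarize this: contact jane.doe@example.com, "
           "key api_key_0123456789abcdef0123456789abcdef")
    print(redact_prompt(raw))
```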

Sergio Mayo Macías, Innovation Programmes Manager at the Technological Institute of Aragon (ITA), Spain, reflected on the challenges of relying on datasets to train AI models. He highlighted vulnerabilities such as poor data quality and data bias, where stereotypes and incorrect societal assumptions about gender, ethnicity or geographical location infiltrate the algorithm’s dataset, leading AI systems to make unfair or discriminatory decisions and produce inaccurate outputs. Individuals designing and operating AI models therefore need to be aware of these biases and take steps to mitigate them in order to ensure fairness and reliability. Mr. Mayo also pointed out the need to create safe spaces to ensure data sovereignty and secure data sharing for AI training across different states and regions.
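
As a rough illustration of the kind of bias check Mr. Mayo alluded to, the following hypothetical Python sketch measures how each demographic group is represented in a toy training set and how often it receives a positive label; the group names and records are invented for the example.

```python
# Toy bias check: per-group representation and positive-label rate,
# plus a crude min/max ratio to flag gross imbalances before training.
from collections import Counter

# Invented (group, label) pairs standing in for a real training set.
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1),
]

counts = Counter(group for group, _ in records)
positives = Counter(group for group, label in records if label == 1)

rates = {}
print("Representation and positive-label rate per group:")
for group, n in counts.items():
    rates[group] = positives[group] / n
    print(f"  {group}: {n} samples, positive rate {rates[group]:.2f}")

# Values far below 1.0 suggest the data may encode a bias worth investigating.
ratio = min(rates.values()) / max(rates.values())
print(f"Min/max positive-rate ratio: {ratio:.2f}")
```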

Dr. Melodena Stephens, Professor of Innovation & Technology Governance at the Mohammed Bin Rashid School of Government, UAE, underscored the differences between digital literacy and AI literacy, as the latter is far more complex and requires constant updating to keep pace with rapid technological advances. In this context, she advocated for comprehensive societal education on AI, including training for engineers, policymakers and the general public. Dr. Stephens also questioned whether different cybersecurity policies can realistically be aligned in the short term, given geopolitical fragmentation and differing views on human rights and privacy, although such harmonization would be highly desirable and productive. Instead, she advocated for better adaptation of existing frameworks and standards, such as those developed by the International Organization for Standardization (ISO) or the US National Institute of Standards and Technology (NIST), to make them more understandable and actionable for individuals and organizations at different levels of expertise.

As a step towards the practical implementation of general regulatory frameworks, Yuliya Shlychkova presented the “Guidelines for Secure Development and Deployment of AI Systems” developed by Kaspersky in collaboration with leading academic experts. This document is particularly useful for companies that rely on third-party AI components to build their own solutions and covers key aspects of developing, deploying and operating AI systems, including:

  1. Cybersecurity awareness and training
  2. Threat modelling and risk assessment
  3. Infrastructure security
  4. Supply chain and data security
  5. Testing and validation
  6. Vulnerability reporting
  7. Defense against ML-specific attacks
  8. Regular security updates and maintenance
  9. Compliance with international standards     

The paper is a critical resource for developers, administrators and AI DevOps teams, and provides detailed, practical advice to address technical gaps and operational risks. It is available in English, Chinese, French, Portuguese, Russian and Spanish.
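
To give a flavour of how two of the listed areas – testing and validation, and defense against ML-specific attacks – might look in practice, here is a small, hypothetical Python smoke test that checks whether a model’s predictions stay stable under slight input perturbations. It uses a synthetic dataset and scikit-learn purely for illustration and is not taken from the guidelines themselves.

```python
# Toy robustness smoke test: train a stand-in model on synthetic data and
# measure how often its predictions flip under small random input noise.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data and a simple stand-in model.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Perturb each test point slightly and count prediction flips.
X_test = X[:100]
noise = rng.normal(scale=0.05, size=X_test.shape)
flips = np.mean(model.predict(X_test) != model.predict(X_test + noise))

print(f"Prediction flip rate under small noise: {flips:.1%}")
# A high flip rate under tiny perturbations hints the model may be fragile
# and worth a closer adversarial-robustness review.
```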

The debate also highlighted the need to follow security-by-design principles in the development of AI models, to treat the cybersecurity of AI as an integral, ongoing process, and to consider the human factor as crucial to the sustainability of these systems. Panelists and audience members also touched on ethical dilemmas in the use of AI, particularly in developing countries, underlined the risks associated with application programming interface (API) vulnerabilities, and agreed on the importance of security audits, with a focus on assessing the integrity and fairness of AI models.

Participants in the session agreed on the need for transparency, education and collaboration across regions to address the critical issues of AI security standards and interoperability, while recognizing the local cultural and economic context in which these systems will be deployed. Developing clear guidance on how to implement AI models and systems safely is also essential to support economic progress at all levels and to improve the wellbeing of individuals and communities using these emerging technologies.
