Cybersecurity: An Essential Skill in the AI Era
- Whenever companies implement AI, they are exposed to potential risks such as data breaches and attacks by malicious actors.
- Senior leaders must be involved in cybersecurity decisions in order to create strong internal cultures of security, and they must work closely with external partners to enhance supply chain cybersecurity.
- Business schools can use real-world case studies and hands-on exercises to prepare top executives to deal with cybersecurity-related activities.
When today’s business leaders discuss the benefits of generative AI (GenAI), they usually focus on how the emerging technology can streamline operations by automating a range of tasks. But they often overlook a serious concern: information security.
Any organization that uses AI tools needs to guard against potential threats to its business operations and its customer data. It also needs to be aware that it is vulnerable to any weaknesses in the AI systems of its business partners. Companies must be vigilant about cybersecurity threats if they’re going to maintain business continuity and comply with data protection laws and regulations.
Traditionally, cybersecurity has been considered a technical issue to be managed by IT or security experts. But as digitalization becomes more central to business, it’s critical for top executives to gain a deep understanding of the risks posed by digitalization and AI—and the ways their companies can respond.
Threats From Within and Without
AI’s security risks can be divided into two main areas: those that arise when companies use AI internally, and those that occur when threat actors use AI maliciously.
Let’s look at internal risks first. AI relies on mass data collection and use, and most AI tools and services are cloud-based and delivered through the internet. This means there is always the possibility of a leak of sensitive business information, intellectual property, or personal information belonging to employees and customers. When these breaches occur, they’re usually the result of vulnerabilities in either the AI systems or the procedures and policies related to their use.
If companies inadvertently disclose sensitive data, they can suffer severe reputational damage, legal consequences, and financial losses. Data breaches also can cause personal harm to consumers as well as wider societal damage.
AI relies on mass data collection and is delivered through the internet, meaning that there is always the possibility of a leak of sensitive information.
For instance, the 2020 data breach of Finnish company Vastaamo Psychotherapy Centre resulted in the theft of confidential data, including therapy notes for more than 33,000 patients. The incident led to Vastaamo’s bankruptcy, as well as legal and financial consequences for some stakeholders. It also caused widespread public outrage and placed immense stress on the patients who faced the threat of having their private information exposed.
Public records and reports mention several internal technical and procedural vulnerabilities that made Vastaamo susceptible to cyberattacks. The cause of the breach, however, aligns more closely with the second type of cyberthreat—an action by a malicious actor. In the Vastaamo case, the cybercriminal was later arrested in France and extradited to Finland, where he was sentenced.
While cybercrime and hacking are hardly new phenomena, GenAI puts more tools in the hands of criminals. It enables them to create more realistic phishing emails and deepfakes in multiple languages. GenAI also makes it easier for criminals to develop malware and ransomware that exploit vulnerabilities in a company’s AI system, giving them unauthorized access to sensitive data and critical business processes.
An attack on IT provider SolarWinds, discovered in 2020, shows how such a scenario can play out. A hacker group inserted malware into SolarWinds’ Orion monitoring platform, which was used by thousands of corporations and government agencies around the globe. The hackers were able to use the malware to conduct reconnaissance on the operational security environment and access customers’ internal information, which compromised the whole supply chain.
Both a Sword and a Shield
While such incidents should rightly make business leaders cautious, AI is not just a sword to be swung by cybercriminals, but a shield to be used by organizations to protect themselves. AI-enabled intrusion detection systems can pinpoint anomalies and malicious software by monitoring and analyzing transactions within networks and computing environments. AI also can be leveraged to identify, contain, and respond to emerging cyberattacks before they spread across organizational boundaries.
At many companies, security operation centers (SOCs) and cybersecurity incident response teams (CIRTs) already rely on AI’s defense capabilities to manage a range of tasks:
- Analyzing high volumes of data gathered from multiple sources in real time to enhance a company’s situational awareness.
- Automating time-consuming routine tasks such as identifying phishing emails or optimizing access policies in real time, allowing cybersecurity experts to focus on addressing higher-priority issues.
- Generating actionable reports for communicating cyberthreats and incidents to various stakeholders.
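The anomaly-detection idea behind these AI-enabled defenses can be illustrated with a deliberately minimal sketch: flag any network measurement that deviates sharply from a historical baseline. Real SOC tooling uses far richer models and features; the traffic figures, feature choice, and 3-sigma threshold below are illustrative assumptions only.

```python
# Minimal sketch of statistical anomaly detection, as used (in far more
# sophisticated forms) by AI-enabled intrusion detection systems.
from statistics import mean, stdev

def find_anomalies(baseline: list[float],
                   observed: list[float],
                   z_threshold: float = 3.0) -> list[int]:
    """Return indices of observed values lying more than z_threshold
    standard deviations from the baseline mean."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [i for i, x in enumerate(observed)
            if abs(x - mu) > z_threshold * sigma]

# Hypothetical hourly outbound traffic in MB: a stable baseline,
# then one spike that might indicate data exfiltration.
baseline = [102, 98, 105, 99, 101, 97, 103, 100]
observed = [104, 99, 870, 101]  # the 870 MB hour is suspicious
print(find_anomalies(baseline, observed))  # [2]
```

In production systems, such simple thresholds are replaced by learned models that account for seasonality and context, but the principle—compare current behavior against an established baseline—remains the same.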
AI technology is both a threat and a defense. This means that the implementation of any AI-enabled information security and governance management system requires a considered, targeted approach—one that starts with a careful evaluation of how AI could complement existing cybersecurity tools and protocols.
AI is not just a sword to be swung by cybercriminals, but a shield to be used by organizations to protect themselves.
Once IT professionals and company leaders complete that evaluation and put a system in place, their jobs are far from done. They can ensure its continued effectiveness only if they adopt a range of best practices, including:
- Testing the system’s security posture to confirm that controls are strong enough to handle emerging cyberthreats.
- Making continuous risk assessments and regular security audits, both internally and externally.
- Promoting an internal cybersecurity culture that empowers employees to become the strongest links in the firm’s cyber defense.
- Delivering customized training to help employees understand the value of cybersecurity and how it affects their professional duties.
- Encouraging employees to improve their individual security behavior and to report suspicious incidents to cybersecurity teams.
- Building strong, professional, information-sharing networks with authorities and external partners to enhance supply chain cybersecurity.
Because cybercriminals can cause disruptions by targeting less well-defended firms in the supply chain, the last point is especially important. By keeping the lines of communication with their partners open, top executives will stay up to date about threats, security requirements, and opportunities to improve. They also will be able to encourage their partners and suppliers to comply with industry-specific standards and best practices that relate to cybersecurity.
The Role of Education
It’s important for senior leaders to be involved in cybersecurity decisions for three reasons: They’re the ones who can ensure smooth internal collaboration among IT, human resources, public relations, and legal departments. They’re the ones who can build strong external relationships with partner organizations and authorities. And they’re the people who will be held accountable for any decisions that could place the firm in jeopardy if it becomes exposed to cyberattacks.
Business schools can help top executives understand what types of cyberthreats could face their operations and how these risks could affect their firms. Schools also can provide senior leaders with the business, technical, social, and ethical grounding they need to leverage cybersecurity as a competitive advantage—not only to ensure the continuity of their businesses, but also to enhance their organizations’ ability to create value.
At Aalto University School of Business in Espoo, Finland, our holistic approach to cybersecurity education combines technical knowledge with business acumen. We first introduce postgraduate and executive students to cybersecurity concepts and theories through research and case studies. Then we illustrate those concepts with real-world examples.
We focus on showing the connection between cybersecurity, business operations, and legal and ethical responsibilities. For instance, I often use a business model as a lens to explain how a successful cyberattack can compromise a firm’s value proposition or value network, disrupt its operations, or even destroy its ability to create value for customers and stakeholders.
In my own classes, I use several case studies, including one I developed with Kari Koskinen about the Vastaamo data breach. We examine the breach from a business model perspective, discussing how the company’s lax approach to cybersecurity manifested in various sociotechnical vulnerabilities and a lack of proper administrative and procedural security controls. The case provides an overview of how a breach can diminish the value of a data-driven business model, leading to significant consequences for companies that process large amounts of sensitive personal information.
I also use a case study I wrote with Koskinen and Yijuan Wei about the SolarWinds supply chain attack. The SolarWinds example highlights how any digital solution, including AI, can be both a risk and a shield.
In-Class Exercises
In addition to using case studies in my classes, I find great value in hands-on exercises that allow students to practice cybersecurity-related activities such as threat modeling and risk analysis.
One role-playing exercise, based on our past research, presents students with a moral dilemma in the context of complying with an information security policy. Students hear audio recordings of realistic conversations between two master’s thesis students at the university. The protagonist is trying to decide if she should give a fellow team member the password to her university account. This would violate the university’s information security policy but enable the team to submit a group project by an upcoming deadline.
Through simulations, students learn that the scope of cyberattack incidents can be affected by human factors such as how well leaders deal with uncertainty and communicate effectively.
Students are asked to take on the role of the protagonist and decide how to solve the dilemma. After students share their responses, we consider other actions they could take, and we also consider the legal and moral issues underlying potential solutions.
For instance, we discuss how sharing the password allows the fellow student to access the personal information of individuals who are participating in the protagonist’s master’s thesis. This compromises participants’ privacy and violates the conditions in the consent form, as well as the European Union’s General Data Protection Regulation. The goal is to ensure that students understand that security decisions have social and ethical dimensions, not just technical and procedural ones.
In another exercise, I ask students to conduct threat modeling to identify the risks associated with implementing a new information system for a hypothetical manufacturing organization. Students first conduct the threat modeling manually, then repeat the task using GenAI, and compare the advantages and disadvantages of each method.
Students quickly learn that, while they can complete a rather long task by giving just one prompt, different members of the group will receive different results with the same prompt. They also realize that GenAI does not account for contextual factors such as local regulations or geopolitical conditions that security experts must consider. Here, the goal is to show that while GenAI can increase the efficiency of the exercise, human intervention is still necessary to ensure the accuracy and reliability of high-risk, sensitive cybersecurity tasks.
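The manual pass of such a threat-modeling exercise can be sketched as a simple mapping from system components to threat categories, here using the widely taught STRIDE framework. The components and threat assignments below are hypothetical classroom examples, not a complete model of any real system.

```python
# Illustrative sketch of a first manual pass at STRIDE threat modeling
# for a hypothetical manufacturing organization's new information system.
# Component names and threat mappings are simplified assumptions.

# The six STRIDE threat categories.
STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service",
          "Elevation of privilege"]

# Hypothetical components and the categories a student might attach
# to each on a first pass.
components = {
    "operator workstation": ["Spoofing", "Elevation of privilege"],
    "production database": ["Tampering", "Information disclosure"],
    "plant-floor network": ["Denial of service"],
}

def threat_register(components: dict[str, list[str]]) -> list[tuple[str, str]]:
    """Flatten the component-to-threat mapping into a reviewable list
    of (component, threat) pairs."""
    return [(comp, threat)
            for comp, threats in components.items()
            for threat in threats]

for comp, threat in threat_register(components):
    print(f"{comp}: {threat}")
```

Comparing a register like this one against GenAI output makes the class discussion concrete: the model may enumerate threats faster, but it will not weigh the contextual factors—local regulations, geopolitical conditions—that a human analyst must still supply.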
In a final exercise, I use a cyberattack simulation developed by Harvard Business Impact Education, in which students play the role of an executive who must address a cyber incident. The goal is to enable students to experience how incident response teams will be challenged by the dynamic nature of an attack, how emerging events create a sense of urgency, and how various parties will get involved. As a result, they come to understand that the scope of a cyber incident can be affected by human factors such as how well leaders deal with uncertainty and communicate effectively with diverse stakeholders.
Creating and Protecting Value
As AI becomes more prevalent in business settings, it will be necessary for schools to incorporate new technology into the classroom. In business schools, leaders can learn not just how to create value, but how to protect that value from vulnerable software and cybercriminals. They can learn how to handle both the risks and opportunities that accompany innovations such as GenAI.
Cybersecurity is now an essential topic for everyone to learn, from the next generation of leaders taking undergraduate and graduate courses to members of the C-suite pursuing executive education. In the AI age, data protection is no longer the sole domain of IT professionals—it’s the responsibility of everyone across an organization.