05.04.26
Why AI Belongs in Your Crisis Planning Playbook
THERE’S a phrase that seems to be everywhere in the business world right now, yet it is likely missing from most companies’ crisis management plans: Artificial Intelligence (AI). Crack open any decent crisis planning playbook, and you’ll find detailed roadmaps for navigating natural disasters, system failures, and traditional cyberattacks. These risks are well understood, and crisis planners have often seen how other organizations handled such setbacks, or have dealt with them firsthand.

Although AI now touches great swaths of our professional and personal lives, it is still a very young technology. And while most people vaguely understand that AI introduces a new level of risk, these dangers have largely yet to materialize in the sorts of public disasters that make headlines and get business leaders to take notice.

Although no one can predict exactly how AI-related risks will unfold in the years to come, businesses should start incorporating the technology into their crisis management plans now. Bad actors are already using (and misusing) AI, and some of the vulnerabilities in early AI deployments are beginning to reveal themselves. Armed with this knowledge, organizations can prepare for AI-driven incidents before they escalate into full-blown crises.

How AI Is Reshaping Cyber Threats

Unfortunately, AI is already making cyber attackers faster and more effective. Attacks that once required ample time, expertise, and manual effort can now be automated and scaled. The technology is also exposing organizations to new attack types designed to exploit the vulnerabilities of AI systems.

Consider phishing attacks, a form of social engineering in which users are tricked into clicking a malicious link, downloading an infected file, or providing sensitive information such as passwords or banking details.
With the help of AI, attackers can generate countless highly personalized messages, tailoring their tone, language, and details to specific targets. This makes fraudulent communications harder for employees to identify, increasing the likelihood of a successful breach.

At the same time, AI is introducing entirely new categories of risk. Many businesses are deploying the technology for processes such as customer service, which involve troves of sensitive information. Emerging attacks such as prompt injection, data poisoning, and model manipulation can be used to expose this information or to manipulate AI outputs in ways that harm the business.

Finally, AI is blurring the line between fact and fiction. With deepfake video or audio messages, attackers have impersonated executives or colleagues, creating the trust needed to convince employees to take potentially disastrous actions.

Bringing a Crisis Planning Lens to AI

Perhaps understandably, many organizations still treat AI as a mostly technical capability aimed at transforming business outcomes. But leaders must also carefully consider its risks. Looking at AI through a crisis planning lens means treating it with the same seriousness that teams bring to planning for a potential natural disaster, a system outage, or a data breach that exposes customer payment information.

Crisis management teams must think through how they would respond if an operations or management system were compromised by external AI. For instance: What is the role of legal, public relations, and product teams if a company’s chatbot begins giving users harmful or biased responses? What steps will the organization take if an attacker impersonates the CEO with a deepfake video that leads to a large fraudulent transaction or jeopardizes the company’s reputation?
And what happens if a previously unknown vulnerability in an AI tool exposes confidential human resources data to users across the company or, worse, to external bad actors? AI is evolving quickly, so crisis plans must be revisited frequently.

It’s important that these conversations include cross-functional teams, because those are the people who will respond to virtually any crisis involving AI. IT security teams may be the first to detect an issue, but legal departments, communications professionals, and executive leadership will all likely play critical roles in determining how the organization responds. Aligning these groups ahead of time will prevent delays and confusion when the time comes to act.

Although the risks surrounding AI are not yet fully understood, we can say with certainty that the technology will play a role in future high-profile crises. Organizations that wait for an incident to force action will find themselves making critical, on-the-spot decisions under extraordinary pressure. Those that begin integrating AI into their crisis planning now will be able to respond from a position of preparedness rather than panic.
Posted by Michael McKinney at 03:07 PM