A major operational risk associated with AI is misinformation and manipulation: AI-generated false information, deepfakes, and impersonation attacks that bad actors can use to deceive individuals or organizations. AI can fabricate convincing identities or clone voices, and such capabilities have already led to incidents such as a finance worker being tricked into wiring a large sum of money after a deepfake impersonation of company executives.

Other operational risks include model risks, where AI models can be tampered with or manipulated, causing system failures or unsafe behavior; a self-driving car misreading traffic signs because of a compromised model is one example. Data risks compound the problem: vulnerabilities in training datasets can lead to breaches, cyberattacks, and compromised confidentiality.

Addressing these risks involves establishing an AI security strategy, conducting risk assessments and adversarial testing (a minimal illustration appears below), securing data throughout the AI lifecycle, training users to recognize misinformation, and maintaining human oversight to catch AI errors and hallucinations. In summary, misinformation and manipulation enabled by AI constitute a significant operational risk that, if not properly managed, can lead to costly decisions, security breaches, and reputational damage.
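To make the "adversarial testing" mitigation concrete, the following is a minimal sketch of one common robustness check, the Fast Gradient Sign Method (FGSM), applied to an image classifier of the kind a sign-recognition system might use. The tiny model, the random input tensor, the label, and the epsilon value are all placeholders introduced here for illustration; they are not from the original text, and a real assessment would use the production model and evaluation data.

```python
# Minimal adversarial-testing sketch (FGSM). Illustrative only:
# the model, input, label, and epsilon below are stand-in assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinySignClassifier(nn.Module):
    """Placeholder classifier standing in for a real traffic-sign model."""

    def __init__(self, num_classes: int = 43):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 32 * 32, num_classes)

    def forward(self, x):
        x = F.relu(self.conv(x))
        return self.fc(x.flatten(1))


def fgsm_attack(model, x, label, epsilon=0.03):
    """Return an adversarially perturbed copy of x using FGSM."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixel range.
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()


if __name__ == "__main__":
    model = TinySignClassifier().eval()
    x = torch.rand(1, 3, 32, 32)   # stand-in for a sign image
    label = torch.tensor([7])      # stand-in for its true class
    x_adv = fgsm_attack(model, x, label)
    clean_pred = model(x).argmax(dim=1).item()
    adv_pred = model(x_adv).argmax(dim=1).item()
    print(f"clean prediction: {clean_pred}, adversarial prediction: {adv_pred}")
```

A test like this simply asks whether a small, deliberately crafted perturbation changes the model's output; if it does, that fragility feeds into the risk assessment and motivates controls such as adversarial training or tighter human oversight.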