Fabricated crises: how AI invents security incidents
A company wakes up to a news story claiming it has suffered a major data breach. The details are specific, technical and convincing. But the breach didn’t happen. No systems were compromised. No data was taken. A language model generated the entire story, filling in plausible details from scratch. And before the company can figure out what’s going on, a reporter at a reputable outlet picks up the story and requests comment. Within hours, the company is drafting statements and mobilizing its communications team to address a fictional event.
A second incident begins with something real. Years earlier, a company had suffered a genuine breach that received wide media coverage. The incident was investigated, resolved and closed. Then one of the outlets that originally reported on it redesigned its website. Old articles received new URLs and updated timestamps, and search engines re-indexed them as fresh content. AI-powered news aggregators picked up the signal and flagged it as a developing story. The company found itself fielding inquiries about an incident that had been resolved years before.
[Ed. note: The authors are withholding full specifics about the incidents because full disclosure could cause harm, yet CyberScoop confirmed with the authors that the incidents did in fact take place].
A third incident introduces yet another dimension. A cybersecurity publication ran a story about a business email compromise attack that cost a UK company close to a billion pounds. The article quoted a well-known security researcher, yet in reality, he had not spoken to the publication. AI generated the quotes, assigned them to him with full confidence, and the publication ran them as fact.
Together, these three cases expose a threat that most organizations have yet to prepare for. AI has developed the ability to fabricate convincing security incidents from nothing, complete with technical detail, named sources, and enough credibility to trigger full-scale crisis responses. Any organization that treats this as a distant or theoretical problem risks learning the hard way just how fast AI-generated fiction can become a real-world emergency.
The assumption that no longer holds
Cyber crisis response has always been built on a simple premise: something real happens, then you respond. That premise is breaking. AI systems now generate, amplify, and validate claims before security teams have confirmed anything. Once a narrative enters the ecosystem, it can be ingested into threat intelligence feeds, risk scoring platforms, and automated workflows. Fiction becomes signal.
For security teams, this creates a new class of false positive. Not a noisy alert from a misconfigured tool, but a fully formed external narrative that appears credible. A hallucinated breach can trigger internal investigations, executive escalation, and defensive actions. Time and resources get diverted toward disproving something that never happened.
Worse, it can influence real attacker behavior. Thr