We disagree with many of the remarks in the following. Still, we believe it raises some very important questions.
D.K.
By Andrew Facini
November 19, 2025
On the evening of August 4, 2020, a massive explosion rocked the Port of Beirut in Lebanon. The explosion happened as hundreds of cameras were already pointed at a burning warehouse, in which more than 3,000 tons of ammonium nitrate used in fertilizers and explosives were improperly stored. The resulting blast devastated downtown Beirut, killing more than 200, injuring 7,000, causing $30 billion in damage, and displacing as many as 300,000 Beirutis.
It was one of the most-observed disasters of its kind in history, as footage and reporting from all angles—before, during, and following the blast—were shared around the world in real time. The tragic outcome was reminiscent of the 2015 industrial explosions at a container storage station in Tianjin, China, which were similarly captured by citizens thanks to the lead-up spectacle of an uncontrolled fire.
Despite all evidence, President Donald Trump said to reporters:
“I’ve met with some of our great generals and they just seem to feel that it was not a—some kind of manufacturing explosion type of event. This was a—seems to be according to them, they would know better than I would, but they seem to think it was an attack. It was a bomb of some kind.”
Shortly after Trump made his comment, the online commentary engines started to churn. Right-wing conspiracy theories—mostly focused on Israel—gained traction despite a lack of evidence. Some commentators went as far as to share doctored videos to support the falsehoods. Incorrect claims to expertise and contrarian fake news posts captured inordinate attention, making accurate information all the harder to find and amplify. Experts on urban disaster response, hazardous materials, military capabilities, and the Middle East faced stiff, platform-driven headwinds against clearing up falsehoods and sharing urgent and important information.
But a disaster need not have the unique intrigue of an explosion to be badly exacerbated by these same forces of misinformation and conspiracy theories.
Hurricane Helene. In 2024, a series of hurricanes devastated communities in the Southeast and Mid-Atlantic, with Hurricane Helene in particular bringing extreme floods to Georgia and western North Carolina on September 26. As federal resources were sent to disaster zones, online commentators began circulating outlandish conspiracy theories targeting the agencies and officials responsible for coordinating the on-the-ground response.
These false claims were massively advantaged over factual reporting on social media, undermining public trust and causing additional challenges for local officials. The unchecked conspiratorial discourse even led to the arrest of a man who threatened FEMA workers, temporarily halting their work. By the time government “rumor response” efforts arrived a week later, there was no chance for the facts to out-compete the fictions—or to regain the critical time lost from confusion and discoordination.
Recent deep cuts to FEMA by the second Trump administration, as well as continued leadership turmoil, threaten to further undermine the agency's ability to communicate with authority in a crisis. Diminishing both the practical capacity of the government and the reputational credibility of the institutions responsible for rapid response can only erode trust and lead citizens to look to alternative sources for critical information, even without the distortions present on social media platforms.
Unfortunately, this dangerous dynamic is only worsening as platforms feed users an ever-increasing amount of content generated by artificial intelligence services.
Kamchatka earthquake. In July, after a magnitude-8.8 megathrust earthquake struck off Russia's Kamchatka Peninsula and sparked tsunami warnings across the Pacific, many took to digital sources to find clear information on the timing and severity of the event. Official resources were immediately activated, including the National Weather Service's Tsunami Warning System, which provides the public with the latest and most accurate information on the when and where of potential dangers as soon as it becomes available.
But, despite public access to authoritative sources, chatbots such as X’s “Grok” and AI-driven summaries on Google’s search results repeatedly told users that still-active tsunami warnings had been lifted, and even went as far as generating nonexistent sources to support these false claims.
Such is the nature of today’s hallucination-prone, algorithm-driven information environment, in which continued user engagement often comes at the direct expense of accuracy, authority, and ultimately public safety. Culprits behind this trade-off include the heavy promotion of shocking and outrageous content, and pre-biasing AI models to anticipate and pursue user preferences, rather than offering and prioritizing verified information.
National security implications. This undermining of disaster-related communications isn’t just a problem for any given local disaster response—it represents a deep and worsening national security vulnerability.
If a country's overall resilience is measured by its ability to respond to hostile acts, then that resilience depends to a large degree on how quickly it can reach its people with critical, reliable information. Until recently, emergency communications were comparatively centralized, flowing through services like the Emergency Alert System and a limited number of traditional media companies. But as citizens and leaders alike get an increasing share of their news from social media platforms that customize and distort the content reaching users' feeds, it has become a great deal more difficult for straightforward facts from authoritative sources to be widely seen and heard without distortion.
Most Americans will readily lament the smaller, everyday distortions found on social media platforms, but an unfolding natural or human-made disaster is not the time to have to navigate a competitive marketplace for trustworthy information—certainly not one so deeply skewed in favor of lies. Without active intervention, the mass media ecosystem is destined to fail when it is needed most.
One type of crisis in particular may constitute a “perfect storm” for emergency communications: the wartime use of a nuclear weapon.
Nuclear attacks. In the span of just nine months, this year has seen a higher degree of nuclear risk than many could have predicted. In May, direct conflict between nuclear rivals India and Pakistan reached a new high-water mark. In June, a war between nuclear-armed Israel and nuclear-aspirant Iran drew the United States in with direct strikes. And these conflicts—each informed by the looming potential for nuclear use—are set against the backdrop of nuclear superpower Russia's ongoing imperial war against non-nuclear Ukraine, now in its fourth year.
With each deepened crisis, policy makers across the globe are revisiting their calculations on nuclear weapons, sometimes finding generations-old security structures less compelling. Allies once reassured by the US nuclear umbrella are now considering independent approaches to nuclear deterrence, and reckless rhetoric from Moscow has eroded important norms against leveraging nuclear threats.
In the terrible event that any of the nine nuclear-armed countries commences a nuclear attack, it is likely to first involve a limited, "tactical" strike. Such a strike—the theory goes—would attempt to leverage the shock of escalation to advance a strategic goal while risking only a limited nuclear exchange, stopping short of an apocalyptic, all-out nuclear war. In the immediate aftermath of such a limited exchange—one with potentially high casualties or damage—governments would be thrust into a desperate and uncharted phase of crisis management in which narratives, signals, and basic on-the-ground information would be critical. As leaders on all sides look for the clearest and most up-to-date intelligence available, opposing forces would seek to degrade access and obfuscate facts. In doing so, they would likely find a vulnerable public information environment a remarkably useful tool for manipulation.
The nature of nuclear attack itself can also act as its own accelerant for misinformation or disinformation.
Like the Beirut explosion, a single tactical nuclear strike may present a shocking disaster scene which can be documented and shared—and potentially distorted—by reporters and witnesses. Like after a hurricane, responding to a nuclear explosion would require a sweeping government response, involving agencies and procedures largely unfamiliar to the public. And given the unprecedented nature of the situation, AI-driven “summaries” would risk drawing inaccurate, personalized connections, distorting public understanding in a critical moment with real on-the-ground dangers. Worse, these challenges would interact and combine, requiring an extreme degree of authority and clarity if emergency communications at any level are to be effective.
The individual and community risks posed by the algorithms of social media and AI technology companies come full circle, directly exacerbating national security risks. Even if misinformation could somehow be controlled within a government—while still avoiding the trap of authoritarianism and its own distortions of the information space—its passive effects on individual leaders and their constituents make distinguishing basic fact from fiction an increasingly personal task. This is an area all too prone to emotions, confirmation bias, and other shortcomings of human psychology. For a government's on-the-ground response to a nuclear attack, this can take the form of disorganization, ineffectiveness, or even resistance to rescue efforts, as witnessed in North Carolina during the response to last year's Hurricane Helene. Or those same distortions can filter upward, more directly affecting the beliefs and statements of leadership in a situation when every action and statement will be under extreme scrutiny at home and abroad.
Donald Trump’s off-the-cuff remarks hours after the Beirut explosion in 2020 sent US officials scrambling to clarify there was no such indication of a bomb, an attack, or any foreign involvement at all. We may never learn whether it was some specific misinformation or disinformation that led him to make such a claim. But what if the stakes were much higher, and the timeline for interpreting and deploying responses much shorter? And what if a potential adversary felt as if it could not trust the statements of a head of state anyway?
Resisting top-down disinformation. In August, Trump posted an alarming rebuke to former Russian President Dmitry Medvedev, now Deputy Chairman of Russia's Security Council, on Trump's own social media platform: "I have ordered two Nuclear Submarines [sic] to be positioned in the appropriate regions, just in case these foolish and inflammatory statements are more than just that."
While nuclear submarines cannot be commanded via social media posts, the president's decision to issue such a statement in this information environment—and his presumed attention to the content that his social media platform prioritizes—is a far departure from the longstanding expectation that heads of state act and communicate based on verifiable facts and expert consensus, especially in strategic contexts.
Even the best-conceived national strategies can quickly become unreliable—internally and externally—if conducted by issuing senior-level short takes on engagement-driven media platforms, and if leaders are known to consume information within environments that prioritize outrage over accuracy.
In a century already seeing both the dissolution of long-stable security norms and the increasing frequency and severity of natural disasters, a better path forward must be charted. There are three near-term steps that should be taken to reduce the acute risks of failure that vulnerable information environments create.
The first is to build up a general resistance to misinformation. The social media revolution has ironically resulted in the near-total consolidation of news and information by fewer, profit-focused companies, which benefit most from maximizing engagement at the expense of quality and trust. This is a difficult problem to address: Humans are particularly vulnerable to these distortions, and it is ultimately a structural, rather than an individual, problem. But by investing in a concerted effort to teach and train new media literacy, including promoting norms and methods like intellectual humility, societies can begin reducing the worst effects of algorithm-driven media, as well as the incentives for companies to rely on harmful algorithms.
The second is to develop common-sense guidelines for social media platforms during times of crisis. As companies like Meta have already demonstrated, the algorithms that govern which content is prioritized can be easily and quickly adjusted. While most such formulas are continually optimized to maximize an individual's time spent on a platform—that is, by prioritizing outrage over information—it's not hard to conceptualize legislation under which social media and AI technology companies would be required to switch into a transparent "disaster response" mode in the wake of a natural disaster or other severe event in the area where users are located. In such a context, posts from government accounts, international news aggregators, and verified journalists could receive a temporary boost, while outrage-driving commentary would be disincentivized. Of course, this would undercut the "democratic" approach to social media and risk abuse, but addressing the problem from an industry-wide angle would still have the added benefit of requiring coordination between media companies that ordinarily face a prisoners' dilemma between cooperating for mutual benefit and undercutting their competitors in the contest for audience engagement.
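To make the idea concrete, here is a minimal, hypothetical sketch (in Python) of what such a "disaster response" re-ranking might look like. The field names, source categories, boost values, and outrage penalty are illustrative assumptions made for this article, not a description of any platform's actual ranking system.

```python
# Hypothetical sketch of a "disaster response" ranking mode.
# All weights, categories, and field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Post:
    author_type: str         # e.g. "government", "verified_journalist", "aggregator", "unverified"
    engagement_score: float  # the platform's usual engagement-driven score
    outrage_score: float     # estimated 0-1 likelihood the post is outrage-driven commentary

# Temporary boosts applied only while a regional disaster flag is active.
DISASTER_BOOSTS = {
    "government": 3.0,
    "verified_journalist": 2.0,
    "aggregator": 1.5,
    "unverified": 1.0,
}

def rank_score(post: Post, disaster_mode: bool) -> float:
    """Return a ranking score; in disaster mode, authoritative sources are
    boosted and outrage-driven content is penalized."""
    if not disaster_mode:
        return post.engagement_score
    boost = DISASTER_BOOSTS.get(post.author_type, 1.0)
    penalty = 1.0 - 0.8 * post.outrage_score  # disincentivize outrage-driving posts
    return post.engagement_score * boost * penalty

# Example: the same feed re-ranked when a disaster flag is set for the user's region.
feed = [
    Post("unverified", engagement_score=9.0, outrage_score=0.9),
    Post("government", engagement_score=2.0, outrage_score=0.0),
    Post("verified_journalist", engagement_score=4.0, outrage_score=0.1),
]
for post in sorted(feed, key=lambda p: rank_score(p, disaster_mode=True), reverse=True):
    print(post.author_type, round(rank_score(post, disaster_mode=True), 2))
```

In this toy example, the high-engagement but outrage-driven post drops below the government and verified-journalist posts once the disaster flag is set; how such a flag would be triggered, audited, and lifted is exactly the kind of question the proposed guidelines would need to answer.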
The last is to urgently raise our understanding of nuclear weapons and nuclear risks. While natural disasters can be anticipated but cannot be prevented, a nuclear crisis is one that may not be anticipated but can certainly be avoided, given that it is ultimately a human choice.
A recent study on the consequences of tactical nuclear war outlines the various possible near-term outcomes of a tactical strike and finds that some of the biggest risks in that period stem directly from preexisting beliefs and statements about nuclear weapons—the kinds that would be urgently tested in a crisis scenario. By reaffirming a long-held understanding among countries that any nuclear attack—however "limited"—would produce a catastrophic, world-changing event, and by understanding ahead of time the various unpredictable paths such a crisis may take, the public can reduce the risk of having one in the first place. Especially in an ailing information environment, prioritizing broader risk reduction is a critical way to hold onto strategic stability through a period of deep change.
These recommendations alone cannot turn the ship entirely. But they can improve the odds that governments and individuals successfully navigate the next crisis, be it natural or human-made. Restoring a broadly held value in institutional truth and information accuracy should be the long-term vision. But until then, the public and their leaders must learn the skills and tools necessary to shore up our resilience against the forces that fuel chaos and disorder.
As President Trump observed in his August post addressed to Medvedev, “Words are very important, and can often lead to unintended consequences, I hope this will not be one of those instances.”
We remind our readers that publication of articles on our site does not mean that we agree with what is written. Our policy is to publish anything which we consider of interest, so as to assist our readers in forming their opinions. Sometimes we even publish articles with which we totally disagree, since we believe it is important for our readers to be informed on as wide a spectrum of views as possible.