Google Exec Admits Globalists Are Using AI To Design ‘Deadlier Pandemics’

Google executive admits AI will soon create deadlier pandemic

A former Google executive has warned that the globalist elite are currently using artificial intelligence (AI) to develop a new, more lethal pandemic that will be unleashed to depopulate the world.

Mustafa Suleyman, who previously served as a leading executive in Google’s AI division, DeepMind, says he is deeply concerned that unchecked advancements in AI will lead to new, more lethal biological threats.

During an interview on The Diary of a CEO podcast, Suleyman warned, “The darkest scenario is that people will experiment with pathogens, engineered synthetic pathogens that might end up accidentally or intentionally being more transmissible.”

Suleyman also said that these synthetically designed pathogens, resulting from AI-driven research, “can spread faster or [be] more lethal…They cause more harm or potentially kill, like a pandemic.”

Thefederalistpapers.org reports: He emphasized the urgent need for stricter regulation surrounding AI software.

Raising a hypothetical yet plausible concern, Suleyman shared his “biggest fear” that, within the next five years, a “kid in Russia” could genetically engineer a new pathogen that could trigger a pandemic “more lethal” than anything the world has faced thus far.

Recognizing the immediacy of this potential threat, he said, “That’s where we need containment. We have to limit access to the tools and the know-how to conduct such high-risk experimentation.”

As the tech industry converges in Washington on September 13 for an AI summit led by Senate Majority Leader Chuck Schumer, Suleyman’s voice resonates with a sense of urgency.

“We in the industry who are closest to the work can see a place in five years or 10 years where it could get out of control and we have to get on top of it now,” he said.

“We really are experimenting with dangerous materials. Anthrax is not something that can be bought over the internet and that can be freely experimented with,” he continued. “We have to restrict access to those things.”

“We have to restrict access to the software that runs the models, the cloud environments, and on the biology side it means restricting access to some of the substances,” Suleyman added.

His statements echo a broader sentiment within the tech community. Earlier in March, a host of tech magnates put pen to paper, signing an open letter that advocated for a six-month moratorium on AI training.

Elon Musk, CEO of Tesla and SpaceX, has voiced similar concerns, cautioning that unchecked AI could turn hostile, much like the robots in the popular “Terminator” film series.

Suleyman ended on a contemplative note, stating, “Never before in the invention of a technology have we proactively said we need to go slowly. We need to make sure this first does no harm.”

“That is an unprecedented moment,” he added. “No other technology has done that.”

Sean Adl-Tabatabai
