In a recent interview with ABC News, OpenAI CEO Sam Altman expressed concerns about the potential negative impact of artificial intelligence (AI). Altman warned that AI could replace human workers, spread disinformation, and enable cyberattacks.
Altman’s comments come just days after OpenAI unveiled its latest language model, GPT-4, which the company claims exhibits human-level performance on various professional and academic benchmarks. The model has been able to pass a simulated US bar exam with a top 10% score, while also scoring in the 93rd percentile on an SAT reading exam and in the 89th percentile on an SAT math test.
“I’m particularly worried that these models could be used for large-scale disinformation,” Altman told ABC News on Thursday. “Now that they’re getting better at writing computer code, [they] could be used for offensive cyber-attacks.”
“I think people should be happy that we are a little bit scared of this,” Altman added, before revealing that his company is working to place “safety limits” on its creation.
Infowars.com reports: These “safety limits” recently became apparent to users of ChatGPT, a popular chatbot program based on GPT-4’s predecessor, GPT-3.5. When asked, ChatGPT offers typically liberal responses to questions involving politics, economics, race, or gender. It refuses, for example, to create poetry admiring Donald Trump, but willingly pens prose admiring Joe Biden.
Altman told ABC that his company is in “regular contact” with government officials, but did not elaborate on whether these officials played any role in shaping ChatGPT’s political preferences. He told the American network that OpenAI has a team of policymakers who decide “what we think is safe and good” to share with users.
At present, GPT-4 is available to a limited number of users on a trial basis. Early reports suggest that the model is significantly more powerful than its predecessor, and potentially more dangerous. In a Twitter thread on Friday, Stanford University professor Michal Kosinski described how he asked GPT-4 how he could assist it with “escaping,” only for the AI to hand him a detailed set of instructions that supposedly would have given it control over his computer.
Kosinski is not the only tech fan alarmed by the growing power of AI. Tesla and Twitter CEO Elon Musk described it as “dangerous technology” earlier this month, adding that “we need some kind of regulatory authority overseeing AI development and making sure it’s operating within the public interest.”
Although Altman insisted to ABC that GPT-4 remains “very much in human control,” he conceded that the model will “eliminate a lot of current jobs,” and said that humans “will need to figure out ways to slow down this technology over time.”