The Bank of England is set to warn City chiefs about the risk of a new artificial intelligence model that it believes could break into the financial system and wreak havoc.
Treasury, banking and security officials will meet to discuss the risks posed by the new AI model, Claude Mythos, to the financial system.
The AI model from Anthropic has sparked a mix of alarm and skepticism: it is said to be so skilled at coding that it could become a powerful weapon for hackers.
The Telegraph reports: Anthropic, a Silicon Valley AI company, has deemed the tool too dangerous to release to the public after the AI discovered previously hidden flaws in computer systems faster than any human.
The revelations about the new AI’s capabilities prompted Scott Bessent, the US treasury secretary, and Jerome Powell, the chairman of the US Federal Reserve, to summon Wall Street bank executives to a crisis meeting this week.
Meanwhile, Duncan Mackinnon, the Bank of England’s risk chief, will chair a gathering of the Cross Market Operational Resilience Group in the next fortnight that will discuss the new AI threat, The Telegraph understands.
Officials from the Treasury, the Financial Conduct Authority and the National Cyber Security Centre will also attend the meeting.
It comes amid fears the AI model could breach the IT security of the financial system.
‘Everyone has a right to be concerned’
Anthropic, which revealed Mythos earlier this week, said it had already found thousands of security vulnerabilities in popular web browsers and operating systems that could leave users exposed to hacks.
Government cyber experts at the UK’s AI Security Institute (AISI) are testing Mythos to help develop defences against it.
Ciaran Martin, the former head of the UK’s National Cyber Security Centre, said: “There is a lot of excitable talk and hype about Mythos, but its security implications are real and need addressing.
“The timeline for finding and fixing vulnerabilities collapses to seconds, minutes and hours, rather than days, months or years.”
He added: “There’s plenty we can do to shore up defences – and there’s actually a real opportunity here to fix a lot of the internet’s hidden bugs.”
Anthropic confirmed that researchers at AISI, a taxpayer-backed lab launched by Rishi Sunak’s government, had begun stress-testing the AI bot.
It has given British security experts early access to its latest AI tools for “pre-deployment testing” before they are launched publicly to check them for flaws or unexpected capabilities.
AISI, which is led by Adam Beaumont, the former chief AI officer of GCHQ, has performed checks on AI apps from companies including Google and OpenAI.
Anthropic has signed up tech giants including Apple, Microsoft and Amazon to test Mythos in a deal dubbed Project Glasswing, under which they aim to plug holes in their own software.
Danny Kruger, a Reform UK MP, has written to Darren Jones, the Chancellor of the Duchy of Lancaster, urging the Government to co-operate with Anthropic and asking whether it will collaborate with Project Glasswing.
Don Smith, a cybersecurity expert at Sophos, said: “I think everyone has a right to be concerned. I’m concerned. My colleagues are concerned.”