Australia’s banking regulator has told lenders their safeguards are not advancing as quickly as AI, cautioning that newer AI tools could make cyber intrusions bigger and quicker.
In correspondence sent to banks, the Australian Prudential Regulation Authority (APRA) said much of the sector’s information security framework was failing to keep up with the pace of AI change.
According to the regulator, rapid progress in AI is becoming a stronger risk for the country’s financial sector.
APRA member Therese McCarthy Hockey said that one “cannot be blind to the risks of such powerful technology”, despite the “tremendous opportunities” it offers.
“While we are not proposing to introduce additional requirements at this stage, we expect to see a significant improvement in how entities are closing the gaps between the power of the technology they are using and their ability to monitor and control it,” said McCarthy Hockey.
Referencing a review it had conducted, APRA cautioned that “frontier AI models such as Anthropic’s Claude Mythos, which could enhance the discovery of vulnerabilities by bad actors, are expected to further increase the probability, speed and scale of cyber attacks”.
Mythos is said to have the ability to identify and exploit vulnerabilities across major operating systems and web browsers. Through an initiative called “Project Glasswing,” initial access is restricted to a select group of large technology and financial firms.
Anthropic said this measure is intended to help secure critical systems against such capabilities before similar AI tools are released more widely.
“APRA has heard clear recognition from regulated entities of the need for a step change in cyber practices and a continuing uplift in capabilities to protect IT assets in an evolving threat environment,” the letter stated.
Last week, a spokesperson for Home Affairs Minister Tony Burke said Australia was working with software companies, including Anthropic, on possible cybersecurity weaknesses, reported Reuters.
APRA said feedback from its industry consultation showed banks were placing excessive weight on vendor presentations and AI model summaries without fully weighing possible risks.
“APRA observed many boards are still developing the technical literacy required to provide effective challenge on AI-related risks and oversight,” the letter said.
