
SINGAPORE - Organisations in Singapore are being urged to strengthen their cybersecurity measures, days after artificial intelligence company Anthropic began testing a frontier model that is reportedly able to find and exploit flaws in existing software.
Immediate mitigation measures include applying software patches to all critical and high-severity vulnerabilities, implementing multi-factor authentication on all interfaces and gateways, and reviewing user permissions to remove unnecessary access rights, said the Cyber Security Agency of Singapore in an advisory on April 15.
"Frontier AI models can reportedly reduce the time taken to identify vulnerabilities and engineer exploits - cutting short the duration from months to hours," said CSA.
The agency added that such models are capable of analysing billions of lines of code to identify weaknesses, conducting security analysis at speeds that far outpace manual review.
"However, the same capability could also be misused by cyber threat actors to accelerate vulnerability exploitation and the development of malicious capabilities," it added.
While there are no indications that such capabilities are currently being misused, CSA said the advisory is meant to help organisations plan ahead to guard against such risks.
Still, companies should immediately patch critical vulnerabilities in internet-facing systems.
"These assets face the greatest exposure to automated attacks and present the highest risk of widespread impact if compromised," said CSA.
Access to all internet-facing development and test environments should also be strictly controlled, or the environments disconnected from the internet, said the agency. User permissions should be reviewed so that access rights are granted only to those who need them for their job function, and dormant and unused work accounts should be deleted.
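The account-review step above can be sketched in a few lines, for example by flagging accounts that have not logged in for 90 days as candidates for removal. The account records, field names, and 90-day threshold here are illustrative assumptions, not part of CSA's advisory; in practice the data would come from a directory service or identity provider.

```python
from datetime import date, timedelta

# Illustrative account records (assumed format, for demonstration only).
ACCOUNTS = [
    {"user": "alice", "last_login": date(2025, 4, 10)},
    {"user": "bob", "last_login": date(2024, 11, 2)},
    {"user": "svc-backup", "last_login": date(2024, 6, 1)},
]

def dormant_accounts(accounts, today, max_idle_days=90):
    """Return usernames idle for longer than max_idle_days,
    as candidates for review and deletion."""
    cutoff = today - timedelta(days=max_idle_days)
    return [a["user"] for a in accounts if a["last_login"] < cutoff]

print(dormant_accounts(ACCOUNTS, today=date(2025, 4, 15)))
```

A real review would still require a human to confirm each flagged account before deletion, since service accounts may legitimately log in rarely.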
CSA's advisory comes days after news broke earlier in April that Anthropic had begun testing its latest AI model with a group of around 50 firms, instead of launching it for public use.
The model, called Claude Mythos, is reportedly able to autonomously surface vulnerabilities in software systems and generate code to exploit the flaws. Anthropic said the model has found vulnerabilities in every major browser and operating system.
"Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely," said Anthropic in a statement on its website.
"The fallout - for economies, public safety, and national security - could be severe."
In the longer run, CSA has also urged organisations to continuously monitor critical attack pathways such as network traffic and user behaviour, and to focus surveillance on high-risk activities such as privileged account use and access to sensitive systems.
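One simple form of such monitoring is flagging logins by privileged accounts outside normal working hours. The log format, privileged-account list, and working-hours window below are illustrative assumptions; production monitoring would draw on real authentication logs and much richer behavioural rules.

```python
# Illustrative login events: (username, hour of day in 24-hour time).
EVENTS = [
    ("admin-root", 3),   # privileged login at 3am - suspicious
    ("alice", 10),
    ("admin-root", 14),
]

# Accounts treated as privileged (assumed, for demonstration only).
PRIVILEGED = {"admin-root"}

def flag_off_hours(events, start=8, end=19):
    """Flag privileged-account logins outside the start-end window."""
    return [(user, hour) for user, hour in events
            if user in PRIVILEGED and not (start <= hour < end)]

print(flag_off_hours(EVENTS))
```

Findings like these would typically feed an alerting pipeline for an analyst to triage, rather than trigger automatic action.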
To shorten the time needed to deploy security updates, companies are also advised to streamline approval processes and to pre-test security patches in isolated environments.
"AI-powered attacks can weaponise newly disclosed vulnerabilities within hours of publication, making rapid patch deployment critical to preventing mass exploitation," said CSA.
To pick up on vulnerabilities quickly, CSA also called on companies to use AI tools to continuously scan for misconfigurations and weak credentials across their IT infrastructure.
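A minimal sketch of such a scan might check service configurations for default passwords and disabled multi-factor authentication. The configuration format and rules here are illustrative assumptions; real scanning tools pull live configuration and apply far larger rule sets.

```python
# Illustrative configuration snapshot for a few services (assumed format).
CONFIGS = {
    "web-admin": {"password": "admin", "mfa_enabled": False},
    "db-primary": {"password": "x9!Gq2vLmT", "mfa_enabled": True},
}

# A tiny sample of commonly seen weak or default passwords.
WEAK_PASSWORDS = {"admin", "password", "123456", "changeme"}

def scan(configs):
    """Return a list of (service, issue) findings."""
    findings = []
    for name, cfg in configs.items():
        if cfg.get("password") in WEAK_PASSWORDS:
            findings.append((name, "weak or default password"))
        if not cfg.get("mfa_enabled"):
            findings.append((name, "multi-factor authentication disabled"))
    return findings

for service, issue in scan(CONFIGS):
    print(f"{service}: {issue}")
```

Running such checks continuously, rather than once, is what lets misconfigurations be caught before automated attacks find them.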
"Frontier AI models represent a major advancement in enhancing cybersecurity capabilities but there are also risks involved," said CSA.
"Organisations should take proactive steps to raise cyber hygiene standards and strengthen overall cyber defence posture to protect themselves against risk of attacks from frontier AI models."