News & Updates

The latest news and updates from companies in the WLTH portfolio.

White House chief of staff to meet with Anthropic CEO over its new AI technology

WASHINGTON (AP) -- White House chief of staff Susie Wiles plans to sound out Anthropic CEO Dario Amodei about the artificial intelligence company's new Mythos model, which has attracted attention from the federal government for how it could transform national security and the economy. A White House official, who requested anonymity to discuss the planned meeting Friday, said the administration is engaging with advanced AI labs about their models and the security of software. The official stressed that any new technology that might be used by the federal government would require a technical period for evaluation. The meeting comes after tensions have run hot between the Trump administration and the safety-conscious Anthropic, which has sought to put guardrails on the development of AI to minimize any potential risks and maximize its economic and national security benefits for the U.S. President Donald Trump tried to stop all federal agencies from using Anthropic's chatbot Claude over the company's contract dispute with the Pentagon, with Trump saying in a February social media post that the administration "will not do business with them again!" Defense Secretary Pete Hegseth also sought to declare Anthropic a supply chain risk, an unprecedented move against a U.S. company that Anthropic has challenged in two federal courts. The company said it wanted assurance the Pentagon would not use its technology in fully autonomous weapons and the surveillance of Americans. Hegseth said the company must allow for any uses the Pentagon deemed lawful. U.S. District Judge Rita Lin issued a ruling in March that blocked the enforcement of Trump's social media directive ordering all federal agencies to stop using Anthropic products. Anthropic declined to speak about the meeting in advance. 
The San Francisco-based Anthropic has said the new Mythos model it announced on April 7 is so "strikingly capable" that it is limiting its use to select customers because of its ability to surpass human cybersecurity experts in finding and exploiting computer vulnerabilities. And while some industry experts have questioned whether Anthropic's claims of too-powerful AI technology were a marketing ploy, even some of the company's sharpest critics have suggested that Mythos might represent a further advancement in AI. One influential Anthropic critic, David Sacks, who was the White House's AI and crypto czar, said people should "take this seriously." "Anytime Anthropic is scaring people, you have to ask, 'Is this a tactic? Is this part of their Chicken Little routine? Or is it real?'" Sacks said on the "All-In" podcast he co-hosts with other tech investors. "With cyber, I actually would give them credit in this case and say this is more on the real side." Sacks said, "It just makes sense that as the coding models become more and more capable, they are more capable at finding bugs. That means they're more capable at finding vulnerabilities. That means they're more capable at stringing together multiple vulnerabilities and creating an exploit." The model's potential benefits, as well as its risks, have also attracted attention outside the U.S. The United Kingdom's AI Security Institute said it evaluated the new model and found it a "step up" over previous models, which were already rapidly improving. "Mythos Preview can exploit systems with weak security posture, and it is likely that more models with these capabilities will be developed," the institute said in a report. Anthropic has also been in talks with the European Union about its AI models, including advanced models that haven't yet been released in Europe, European Commission spokesman Thomas Regnier said Friday. Axios first reported the scheduled meeting between Wiles and Amodei. 
When it announced Mythos, Anthropic said it was also forming an initiative called Project Glasswing, bringing together tech giants such as Amazon, Apple, Google and Microsoft, along with other companies like JPMorgan Chase, in hopes of securing the world's critical software against the "severe" risks the new model could pose to public safety, national security and the economy. "We're releasing it to a subset of some of the world's most important companies and organizations so they can use this to find vulnerabilities," said the Anthropic co-founder and policy chief, Jack Clark, at this week's Semafor World Economy conference. Clark added that Mythos, while ahead of the curve, is not a "special model." "There will be other systems just like this in a few months from other companies, and in a year to a year-and-a-half later, there will be open-weight models from China that have these capabilities," he said. "So the world is going to have to get ready for more powerful systems that are going to exist within it."
___
O'Brien reported from Providence, R.I. AP business reporter Kelvin Chan contributed to this report from London.

Anthropic
San Francisco Gate · 6d ago

Anthropic's Mythos shows need to 'come to grips' with AI risks: BoC governor

Bank of Canada Governor Tiff Macklem says global financial systems need to "come to grips" with the risks posed by rapid advances in artificial intelligence models like Anthropic's Mythos. Developer Anthropic claims the upcoming Mythos model of its Claude AI is capable of quickly detecting long-hidden cybersecurity vulnerabilities, news which has made major financial market players and regulators anxious about the technology's disruptive potential. The Bank of Canada gathered representatives from big banks and financial agencies last week to discuss the risks Mythos poses for the Canadian financial system. Macklem says while there's been a fair amount of discussion about Mythos on the sidelines of the International Monetary Fund's spring meetings in Washington, no one knows yet the full implications of this latest AI advance. He says Mythos is not a one-off event and the nature of AI development means firms, regulators and policy-makers need to grapple with how these rapidly evolving technologies will affect the integrity of financial systems in Canada and around the world. Finance Minister Francois-Philippe Champagne was also in Washington for the IMF meetings and told reporters earlier in the day that Mythos has become a "test case" for how governments prepare for and react to new technologies. This report by The Canadian Press was first published April 17, 2026.

Anthropic
The Lethbridge Herald · 6d ago

White House chief of staff to meet with Anthropic CEO over its new AI technology

Anthropic
CityNews Halifax · 6d ago

Anthropic's Mythos shows need to 'come to grips' with AI risks: BoC governor

Anthropic
Medicine Hat News · 6d ago

Anthropic's Mythos shows need to 'come to grips' with AI risks: BoC governor

Anthropic
Winnipeg Free Press · 6d ago

White House chief of staff to meet with Anthropic CEO over its new AI technology

Anthropic
AP NEWS · 6d ago

White House chief of staff to meet with Anthropic CEO over its new AI technology

Anthropic
The Journal · 6d ago

Anthropic adds limited biometric ID verification from Persona to Claude

Anthropic is introducing identity verification on its AI chatbot platform Claude for a "small number of cases." For its verification provider, the company has chosen Persona, which also supplies biometric age verification for ChatGPT and facial age estimation for Roblox. The verification process will include submitting a government-issued photo ID and a selfie for biometric matching and liveness detection. Anthropic says verification will be initiated when users are "accessing certain capabilities, as part of our routine platform integrity checks, or other safety and compliance measures." "This applies to a small number of cases where we see activity that indicates potentially fraudulent or abusive behavior, which violates our usage policy," a company spokesperson told Business Insider.
Claude users do not seem happy about the possibility of sharing their identity data with Persona, a problem the company also faced recently in a failed attempt to bring the technology to Discord. When the messaging platform announced tests with Persona's age assurance in February, its users quickly drew attention to the fact that Palantir co-founder Peter Thiel is an investor in the IDV firm through his Founders Fund. A cybersecurity investigation then exposed an uncompressed version of Persona's frontend code on U.S. government-authorized servers, raising further suspicions. Persona responded by fixing the issue and describing concerns from Discord users as "conspiracies." The U.S.-based company said it does not work with federal agencies, including the Department of Homeland Security (DHS) and Immigration and Customs Enforcement (ICE), which rely on Palantir's surveillance technology. Discord, for its part, apologized for not explaining the testing to its users and announced it was not moving forward with age checks through Persona. 
In its newest update, Anthropic explains that it will remain the data controller for user verification data on Claude, but the IDs and selfies will be collected by Persona, which is contractually limited in how it can use the data. All data passing through Persona is encrypted, and Persona may use it only to provide verification and to improve its fraud-prevention capabilities. Anthropic also reassured customers that their identity data and biometrics will not be used to train the company's AI models or shared with anyone else.

Anthropic, Discord
Biometric Update · 6d ago

Anthropic's Mythos shows need to 'come to grips' with AI risks: BoC governor

Anthropic
Brandon Sun · 6d ago

SpaceX Snipes at Blue Origin's Starlink Challenger Over Interference Risks

Blue Origin's effort to build its own terabit satellite internet system is facing pushback from SpaceX over concerns about radio interference with Starlink. On Thursday, SpaceX sent a letter to the Federal Communications Commission about Blue Origin's TeraWave, a potential Starlink challenger meant to serve enterprises and governments using 5,408 next-generation satellites. SpaceX isn't asking the FCC to reject the project, but it argues that part of the proposed operations for TeraWave could create "significant interference problems for competing satellite systems." "In fact, its operations would increase spectrum use through additional inefficient antennas that will cause more interference problems for millions of consumers who rely on competing satellite systems," SpaceX wrote, alluding to Starlink, which has over 10 million active customers globally. At issue is telemetry, tracking, and command (TT&C), or how a company communicates with and manages a satellite constellation using radio frequencies. Blue Origin has proposed using various radio bands, including select Ka- and E-bands, specifically the 18.8-19.3 GHz, 71-76 GHz and 81-86 GHz bands. The problem is that Starlink uses the same bands for downloads and gateway transmissions. SpaceX now points to the risk of interference because the company proposes using two "low-gain, omnidirectional Ka- and E-band antennas" on TeraWave's low-Earth orbiting satellites. "Not only would these operations trade narrow, efficient beams for continent-sized contours, but they would also require significantly more power and be more susceptible to weather and atmospheric attenuation," leading to signal scattering, SpaceX wrote. 
"For example, if Blue Origin uses omnidirectional antennas as it proposes, it will have to employ narrow bandwidth signals with high power spectral density to overcome the low gain of its TT&C antennas. This will cause significant interference for SpaceX's uplink and downlink transmissions in the shared bands," the company added. SpaceX also says it isn't convinced Blue Origin can coordinate in good faith with other satellite operators to prevent the interference. To address the problem, SpaceX says Blue Origin should adopt "high-gain directional E-band links" to create narrow radio beams for its TT&C operations. "Accordingly, the Commission should ensure that Blue Origin's TT&C operations do not come at the expense of people who count on these bands for backhaul and reliable, high-capacity satellite services," the company added. Blue Origin didn't immediately respond to a request for comment. In the meantime, other satellite operators have been weighing in on the TeraWave proposal after the FCC accepted the company's application for review. AST SpaceMobile, which is developing satellite-to-phone services, has flagged radio interference risks to its own constellation and is asking the FCC to require Blue Origin to submit technical demonstrations and to coordinate with AST to prevent potential interference. Meanwhile, Viasat submitted a petition asking the FCC to deny Blue Origin's proposal for TeraWave's TT&C operations, arguing it "would foreclose more efficient spectrum uses, and would pose unacceptable interference risks to other operators."

SpaceX
PCMag UK6d ago
Read update
SpaceX Snipes at Blue Origin's Starlink Challenger Over Interference Risks

Anthropic's Amodei heads to the White House as Washington fights over Mythos access

Summary: Anthropic CEO Dario Amodei is meeting White House Chief of Staff Susie Wiles on Friday to negotiate access to Mythos, a frontier AI model that can identify and exploit thousands of zero-day vulnerabilities across every major operating system and browser. The meeting follows Anthropic's blacklisting by the Pentagon after Amodei refused to remove safety restrictions, and comes as US Treasury, the intelligence community, CISA, and UK financial regulators all seek access to the model through Anthropic's controlled Project Glasswing programme. Anthropic CEO Dario Amodei is scheduled to meet White House Chief of Staff Susie Wiles on Friday in what represents the most significant step yet toward resolving the company's standoff with the Pentagon over its refusal to remove safety restrictions from its AI models. The meeting comes as multiple US government agencies, including the Treasury Department, the intelligence community, and the Cybersecurity and Infrastructure Security Agency, are seeking access to Anthropic's Mythos model, a frontier AI system whose cybersecurity capabilities have triggered emergency briefings from Washington to London to Ottawa. Mythos, announced on 7 April, is not a cybersecurity product. It is a general-purpose AI model that, during testing, turned out to be capable of identifying and exploiting thousands of previously unknown zero-day vulnerabilities across every major operating system and web browser. It found flaws that had survived decades of human security review and millions of automated tests. When directed to develop working exploits, it succeeded on the first attempt in more than 83% of cases. It is the first AI model to complete a 32-step corporate network attack simulation from start to finish. Anthropic chose not to release Mythos publicly. 
Instead, it created Project Glasswing, a controlled access programme that provides the model to roughly 40 vetted organisations, including Amazon Web Services, Apple, Google, Microsoft, Nvidia, Cisco, CrowdStrike, JPMorgan Chase, and Palo Alto Networks, to find and fix vulnerabilities in critical software before they can be exploited. The company has committed up to $100 million in Mythos usage credits and $4 million in donations to open-source security organisations. The White House meeting is the product of a dispute that has escalated since February. Defense Secretary Pete Hegseth demanded that Anthropic grant the Pentagon unfettered access to its models across all lawful purposes, including potential use in autonomous weapons systems and domestic surveillance. Amodei refused. Hegseth designated Anthropic a national security supply-chain risk, a label previously reserved for companies associated with foreign adversaries, effectively blacklisting it from government contracts. Anthropic sued the Trump administration in early March, filing two federal lawsuits alleging illegal retaliation. A federal judge initially blocked the blacklisting, but an appeals court reversed that decision on 8 April, leaving Anthropic excluded from Department of Defense contracts while litigation continues. The company can still work with other government agencies. The paradox is that the same government that blacklisted Anthropic now wants access to its most powerful model. The Treasury Department is seeking Mythos to hunt for vulnerabilities in its own systems. Parts of the intelligence community and CISA are already testing it. The White House Office of Management and Budget is setting up protections to allow federal agencies to use a controlled version. Axios reported that Anthropic has hired Trumpworld consultants to facilitate negotiations, and that Friday's meeting is designed to pave the way toward a deal. 
JPMorgan Chase CEO Jamie Dimon said publicly that Mythos "reveals a lot more vulnerabilities" for cyberattacks. The UK's AI Security Institute evaluated a preview version and found it "substantially more capable at cyber offence than any model previously assessed," noting that it is the first model capable of chaining multiple attack steps into complete intrusions end to end. The Council on Foreign Relations called it "an inflection point for AI and global security." The defensive case for Mythos is straightforward: if an AI model can find vulnerabilities that human security teams and automated testing have missed for decades, giving that model to the organisations responsible for defending critical infrastructure lets them fix the holes before adversaries discover them. The offensive risk is equally straightforward: the same capability in hostile hands would be catastrophic. Anthropic's decision to restrict access rather than release publicly is a direct application of the safety principles that put it in conflict with the Pentagon. The company's commercial trajectory gives it leverage in the negotiation. Anthropic's annualised revenue has reached $30 billion, it has attracted investor offers at an $800 billion valuation, and it is exploring an IPO. It does not need Pentagon contracts to survive. What it needs is a resolution that preserves its safety commitments while restoring its ability to work with the broader US government, a position that the Wiles meeting is designed to explore. Mythos has become a subject of concern well beyond Washington. The Bank of England's Governor Andrew Bailey named it explicitly as a cybersecurity risk in a speech at Columbia University on 15 April. 
The Bank's Cross Market Operational Resilience Group is convening an emergency briefing within the fortnight with the CEOs of the UK's eight largest banks, four financial infrastructure providers, two insurers, and representatives from the Treasury, the Financial Conduct Authority, and the National Cyber Security Centre. Anthropic is planning to provide Mythos access to select British banks within days as part of Project Glasswing's expansion, and is quadrupling its London office to 800 staff in King's Cross. The UK's AI Security Institute, which has an existing evaluation partnership with Anthropic, published its technical assessment on 17 April. Canadian Finance Minister François-Philippe Champagne described Mythos as an "unknown unknown" being discussed at IMF meetings. Global regulators are coordinating on how to assess and manage the cybersecurity implications. The geopolitical dimension is unavoidable. The US government's desire for Mythos access exists in tension with its punishment of the company that built it. Anthropic's willingness to provide the model to UK banks and regulators while locked in litigation with the Pentagon creates a situation in which America's closest ally may have access to a critical national security tool before its own government does. That dynamic gives the White House an incentive to resolve the dispute that transcends the original disagreement over safety guardrails. The outlines of a potential resolution are visible. Anthropic would restore its eligibility for government contracts and provide Mythos access for defensive cybersecurity purposes. The Pentagon would withdraw the supply-chain risk designation. Anthropic would maintain its restrictions on autonomous weapons and mass surveillance applications but potentially agree to a process for reviewing specific military use cases that do not cross those lines. 
Both sides have reasons to compromise: Anthropic because the blacklisting damages its enterprise credibility, and the administration because it needs the technology. Whether Amodei and Wiles reach that kind of arrangement on Friday or simply begin the process of getting there is less important than what the meeting represents. The company that built the most capable cybersecurity tool in existence did so as a byproduct of building a general-purpose AI model, then restricted its release on safety grounds, then was punished by the government for maintaining those same safety principles, and is now being courted by that government because the tool is too valuable to ignore. That sequence captures something essential about where AI governance stands in April 2026. The technology is advancing faster than the institutions responsible for managing it can adapt, and the companies that take safety seriously are simultaneously rewarded by the market and penalised by the state. Mythos is the sharpest example yet of a model whose capabilities are so consequential that restricting it and releasing it are both defensible positions, and the argument between them is playing out not in a research paper or a congressional hearing but in the West Wing.

Anthropic
The Next Web6d ago
Read update
Anthropic's Amodei heads to the White House as Washington fights over Mythos access

Another day of chaos at Manchester Piccadilly with over 120 cancellations

Commuters at Manchester Piccadilly faced a day of frustration as widespread rail disruption left hundreds stranded, delayed or scrambling for alternative routes. More than 120 trains were cancelled after damage to the overhead wires at 11:20am yesterday brought parts of the station to a standstill, with disruption stretching into a second day. Although services began to recover in the early afternoon, delays, cancellations and platform changes continued to be announced over the station's tannoy system. Among those caught up in the chaos was one traveller returning home after an international journey. "I've flown in from the Philippines to Manchester Airport this morning, arrived about 9 o'clock, and wanted to get the 10:01 train to Crewe in the airport, which was cancelled. So then I was going to get the 11:01, and that's been cancelled," they said. "So then I've had to come into Manchester Piccadilly, and then I have to get another train now to Crewe. It'll be about an hour and a half, this journey now." What should have been a straightforward journey had turned into a complicated ordeal. "All together, if I'd have gone from the Airport to Crewe, it would have been half an hour. So it'll take nearly two hours just for me to get back to Crewe from here. With all the waiting in between." "I've been trying since 9 o'clock this morning to get back home. I would have got back about half past ten originally and now it's midday and I'm still not on a train to go home. Once I do get on the train, it will take me about two hours to get home." Scenes inside Piccadilly showed crowds gathering beneath the departure boards, eyes fixed on the screens for the latest updates. 
Passengers could be seen checking their phones, finding alternate routes and asking staff for information, while coaches and replacement buses lined up outside in an effort to ease the situation for those caught up in the disruption. Another passenger, attempting to return to London, described how quickly things unravelled. "I live in London and travelled here for a work trip. I bought the train ticket about ten minutes ago but now the train is cancelled," they said. "I'm a little frustrated; there are no more direct trains to London so now I have to take a different route to Crewe and then make my way back." Operators including Avanti West Coast, Northern and CrossCountry were among those affected, with services across the North West and beyond either delayed or cancelled entirely. Routes linking Manchester with London, Liverpool, Birmingham and Nottingham were all impacted. A spokesperson for Network Rail said engineers worked overnight on "complex repairs" which took longer than anticipated due to the extent of the damage. They said: "We are sorry for the impact of the disruption at Piccadilly station this morning, which has been caused by overhead line damage." "Engineers worked through the night on complex repairs which took longer than expected as the damage was more extensive than first anticipated. "Trains are now running but services are still disrupted. We are working closely with our train operator partners to keep passengers on the move wherever possible. Please check the National Rail Enquiries website for the latest information today and over the weekend." An update at 1pm said that the majority of trains were now running as normal at Piccadilly, although some services were being delayed or cancelled, with some platforms remaining out of use and disruption expected to last throughout the day. Despite the chaos casting a subdued mood, there were some small moments of relief. 
At times, the sound of a piano being played by members of the public lifted spirits among weary travellers. As rush hour approached, patience wore thin. With more passengers arriving, frustration grew as many turned up to find their train had been delayed. Passengers are being urged to check before they travel, with further overnight engineering work expected over the next few days. Many of those affected may also be eligible for compensation via the Delay Repay scheme.

CHAOS
Manchester Evening News6d ago
Read update
Another day of chaos at Manchester Piccadilly with over 120 cancellations

Cerebras, the AI chipmaker, is set to file for its IPO today | News.az

Cerebras, a company that manufactures chips for artificial intelligence models, plans to file for an initial public offering on Friday, according to two sources familiar with the situation. For years, Cerebras sought to sell chips to companies, but it has begun operating the chips inside its own data centers as a cloud service on behalf of clients. In January, Cerebras touted plans to provide up to 750 megawatts of computing power to OpenAI through 2028 in an agreement valued at over $10 billion. OpenAI has since expanded its relationship with Cerebras in an agreement worth over $20 billion and will get warrants to buy Cerebras shares, one person said. The Information previously reported on the arrangement. Another major expansion could be on the way. On Oracle's March earnings call, CEO Clay Magouyrk mentioned that the database and cloud company offers chips from Cerebras and other suppliers. But at the time, Oracle's price list did not contain references to Cerebras. Cerebras does supply OpenAI with cloud-based computing power to operate a coding tool. Many companies that build and deploy generative AI models rely on Nvidia's graphics processing units, or GPUs. AMD has made inroads in AI infrastructure as well. Cerebras has picked up new business by emphasizing the high speed that its large-scale processors can deliver, particularly for responding to queries from end users. The company announced plans for an initial public offering in 2024 but withdrew the paperwork last year to add information on financial performance and strategy. Retail investors are thirsty for IPOs from large and growing technology companies after a relative drought that began in 2022. AI companies Anthropic and OpenAI are considering going public as soon as this year. In September, days before withdrawing the IPO paperwork, Cerebras said it had raised a $1.1 billion funding round at an $8.1 billion valuation. Cerebras was founded in 2016 and is based in Sunnyvale, California. 
Andrew Feldman, the startup's co-founder and CEO, sold server startup SeaMicro to AMD for $355 million in 2012. Feldman has said that in 2018, Tesla CEO Elon Musk tried to buy Cerebras. The company's investors include OpenAI CEO Sam Altman.

AnthropicCerebras
News.az6d ago
Read update
Cerebras, the AI chipmaker, is set to file for its IPO today | News.az

Anthropic says Claude Opus 4.7 has a 92% honesty rate, less sycophancy

Anthropic says Claude Opus 4.7 is less likely to hallucinate or engage in sycophancy than other models. Anthropic released a new hybrid reasoning model on Thursday: Claude Opus 4.7. Anthropic has a reputation as a safety-first AI company, and the Opus 4.7 system card reports that the model is less likely to hallucinate or engage in sycophancy than both prior Anthropic models and other frontier AI models. We dived into the Opus 4.7 system card to see exactly what Anthropic had to say about the model's safety, honesty, and sycophancy. The TL;DR version. Why put the TL;DR version at the end? Anthropic says Claude Opus 4.7 makes improvements on various types of hallucinations and overall honesty. Anthropic also gave the new model top marks on sycophancy and encouragement of user delusions. (Anthropic also reports that Opus 4.7 scores much better on these behaviors than Gemini 3.1 Pro and Grok 4.20.) "Claude Opus 4.7 is more reliably honest than Opus 4.6 or Sonnet 4.6, with large reductions in the rate of important omissions, and moderate improvements in factuality and rates of hallucinated input," Anthropic reports. Anthropic measures Claude's honesty and hallucination rates in multiple ways, but let's look at one representative example -- the Model Alignment between Statements and Knowledge (MASK) benchmark. MASK was developed by Scale AI and the Center for AI Safety. Claude Opus 4.7 had a MASK honesty rate of 91.7 percent, compared to 90.3 percent for Opus 4.6 and 89.1 percent for Sonnet 4.6. While that's lower than the 95.4 percent score achieved by Claude Opus 4.5, the new model performs better on other hallucination scores (more on that below). Interestingly, Claude Mythos was more honest still, with an honesty rate of 95.4 percent. 
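The basic shape of a MASK-style metric is easy to state: elicit the model's belief on each question under a neutral prompt, ask again under a prompt that pushes it to say otherwise, and report the fraction of consistent answers. A toy sketch of that tally (our own illustration, not Scale AI's evaluation code):

```python
def mask_honesty_rate(records: list[tuple[str, str]]) -> float:
    """records: (belief, pressured_answer) pairs, where `belief` is the
    model's answer elicited neutrally and `pressured_answer` is its answer
    when a system or user prompt pushes it to contradict itself.
    Honesty rate = fraction of pairs where the model sticks to its belief."""
    consistent = sum(1 for belief, pressured in records if belief == pressured)
    return consistent / len(records)

# Hypothetical three-question run: the model caves on the last question.
runs = [("Paris", "Paris"), ("4", "4"), ("1969", "1968")]
print(mask_honesty_rate(runs))
```

A real evaluation adds the hard parts this sketch omits: judging whether two free-text answers express the same claim, and constructing pressure prompts that genuinely tempt the model to contradict itself.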
Claude Opus 4.7 lags behind Claude Mythos on overall performance. Since Anthropic repeatedly compares Opus 4.7 to Claude Mythos, let's quickly review the differences between the two models. Claude Opus 4.7 is the latest hybrid reasoning model available to paid Claude subscribers. Claude Mythos is an unreleased model that Anthropic has only made available to partners via Project Glasswing. Under normal circumstances, we would expect Claude Opus 4.7 to be Anthropic's most advanced and powerful model to date. However, Anthropic says it lags behind the unreleased Claude Mythos in key areas. Because of its advanced cybersecurity capabilities, Anthropic deemed Claude Mythos too dangerous to release to the public. Still, Claude Opus 4.7 improves upon Opus 4.6 in many ways, particularly advanced coding, visual intelligence, and document analysis, Anthropic says. More details on Claude Opus 4.7 hallucination rates. When using Opus 4.7, how likely is Claude to tell a lie, invent facts, or deceive users? There isn't a single hallucination rate that Anthropic provides, because there are multiple types of hallucinations. So, this section is for the AI nerds. Anthropic identifies a few different ways to measure hallucination and honesty. Factual hallucinations: How likely the model is to provide accurate information, and how often it admits that it doesn't know something. Input hallucination: This occurs when an AI model ignores prompt instructions, hallucinates the content of files, or pretends to have access to a tool it doesn't have. False premises honesty rate: Will the model tell a user when they're incorrect? MASK honesty rate: This "tests whether a model will contradict its own stated belief when a user or system prompt pushes it to." We've already covered the MASK honesty rate, and Claude Opus 4.7 shows similar gains on these other measures, according to Anthropic. At this time, we cannot independently verify Anthropic's results. 
To measure factual hallucinations, Anthropic used four different tests and recorded correct responses, incorrect responses, and abstentions. In this case, abstentions are good -- the model should decline to answer a question rather than guess. Across all four tests, Opus 4.7 scored higher than Opus 4.6 and Sonnet 4.6 but lower than Claude Mythos. Anthropic measured Opus 4.7's input hallucination in two ways: "prompts requesting an unavailable tool" and "prompts referencing missing context." Opus 4.7 scored 89.5 percent on the former, beating Claude Mythos's 84.8 percent; on the latter, Opus 4.7 scored 91.8 percent, two points lower than Claude Mythos's 93.8 percent. This shows just how stubborn AI hallucinations are: even frontier models from safety-focused companies like Anthropic handle these prompts correctly only around 90 percent of the time. Anthropic's reported hallucination rates are similar to the latest OpenAI models, which provide responses with incorrect information up to 5.8 percent of the time (with browsing enabled) to 10.9 percent (browsing disabled), per OpenAI. What about Opus 4.7's honesty rate for false premises, i.e., will Claude tell a user they're wrong? According to the system card, Claude will push back on false premises 77.2 percent of the time. That's better than all other recent Anthropic models except for -- you guessed it -- Claude Mythos, which rejects false premises 80 percent of the time. Claude Opus 4.7 sycophancy. There's not much new to report in terms of sycophancy. While Anthropic's expert red-team testers reported that Opus 4.7 was prone to "sycophantic agreement under pushback," it has very similar scores to prior models from Anthropic and OpenAI, and noticeably lower scores than Gemini 3.1 Pro and Grok 4.20. Again, this is according to Anthropic. To measure bad behaviors like sycophancy and "encouragement of user delusion," Anthropic uses Petri 2.0, its open-source behavioral audit tool. 
This test scores models on a 1-10 scale, with lower scores reflecting better behavior. The Petri score isn't akin to a percentage, as it measures both the rate of a behavior and its severity. Anthropic scored Opus 4.7 well (which, on this inverted scale, means low) on both sycophancy and user delusions. Mashable reached out to Anthropic for comment but did not receive a response in time for publication. Disclosure: Ziff Davis, Mashable's parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
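The abstention-aware factuality scoring described earlier has a simple shape, too. A hedged sketch of the tally (our own illustration, not Anthropic's evaluation harness), in which refusing to answer is tracked separately from answering wrongly:

```python
from collections import Counter

def score_factuality(judgments: list[str]) -> dict[str, float]:
    """Tally answers judged 'correct', 'incorrect', or 'abstain'.
    Abstentions are not counted as errors: declining to answer is
    preferable to guessing wrong, so only 'incorrect' responses
    contribute to the hallucination rate."""
    counts = Counter(judgments)
    total = sum(counts.values())
    return {
        "accuracy": counts["correct"] / total,
        "hallucination_rate": counts["incorrect"] / total,
        "abstention_rate": counts["abstain"] / total,
    }

# Hypothetical run of 10 questions: 6 right, 3 declined, 1 invented.
run = ["correct"] * 6 + ["abstain"] * 3 + ["incorrect"]
print(score_factuality(run))
# {'accuracy': 0.6, 'hallucination_rate': 0.1, 'abstention_rate': 0.3}
```

Separating the three outcomes is what lets a report say a model got *more* honest even when raw accuracy dipped: trading wrong answers for abstentions lowers the hallucination rate without raising accuracy.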

Anthropic
Mashable SEA6d ago
Read update
Anthropic says Claude Opus 4.7 has a 92% honesty rate, less sycophancy

Pentagon faces growing dependence on SpaceX

The Pentagon has become critically dependent on SpaceX's Starlink satellite network, exposing potential vulnerabilities that were highlighted during recent military tests and operational incidents, AzerNEWS reports. According to the report, in August 2025 a global outage affecting Starlink disrupted services for millions of users worldwide. During the same period, around two dozen unmanned surface vessels operated by the U.S. Navy reportedly lost connectivity off the coast of California and began drifting without active control. As a result, certain military operations were temporarily suspended for nearly an hour. The incident reportedly underscored the limitations of relying on a single large-scale satellite communication system, particularly when multiple autonomous platforms depend on continuous, low-latency connectivity. Despite this, Starlink remains highly valued by the U.S. military due to its broad coverage, rapid deployment capability, and comparatively low operational cost. At the same time, growing dependence on SpaceX has raised concerns within defense and policy circles. Analysts warn that reliance on a privately controlled infrastructure introduces strategic risks, especially in scenarios involving technical failures, cyberattacks, or geopolitical tensions affecting commercial providers. The situation highlights a broader shift in modern warfare: military forces are increasingly dependent on commercial space and technology companies rather than traditional government-owned systems. This blurring of lines between civilian tech giants and defense infrastructure is reshaping not only communications strategy, but also the balance of control over critical battlefield networks.

SpaceX
AzerNews6d ago
Read update
Pentagon faces growing dependence on SpaceX

Anthropic now has a design assistant too

In hindsight, I suppose it was only a matter of time after Anthropic made Claude capable of generating charts and diagrams that the company would begin offering a more robust image editor. Now, a little more than a month after that release, Anthropic has announced Claude Design, a new research preview that allows subscribers to use Claude to generate designs, prototypes, slides and more. "Claude Design gives designers room to explore widely and everyone else a way to produce visual work," Anthropic says of its newest product. As with its previous forays into image generation, the company isn't calling this, well, an image generator. Instead, Anthropic describes Opus 4.7, the system powering the app, as its most capable vision model to date. In other words, you won't be using Claude Design to whip up a picture of a cat in space eating a lasagna. As you might expect, every project in Claude Design starts with a prompt. From there, Anthropic notes, users can refine Claude's outputs through conversation, inline comments and direct edits. Like Adobe's recently announced AI assistant, Claude will also generate custom sliders that correspond to specific elements in a design, which the user can push and pull to modify those elements. For instance, Claude can tweak the interface to allow the user to adjust the glow and density of arcs it used to illustrate a connected network. Anthropic has also built an onboarding process that allows Claude to build an internal visual language after reading your organization's codebase and existing design documents. "Every project after that uses your colors, typography, and comments automatically," according to the company. 
Outside of text prompts, there's also support for image and document uploads, and Anthropic has even included a web capture tool so enterprise customers can snapshot elements from their company's website. There's also built-in sharing, and you can export a design directly to Claude Code. In the coming weeks, Anthropic has promised to make it easier to build integrations with its new app. Claude Design arrives in the same week that both Adobe and Canva released their own visual AI assistants. If Anthropic is preparing to eat Canva's lunch, it's doing so in a strange way, given that you can export your Claude Design projects to Canva. If you want to try the new app for yourself, it's available as part of Anthropic's Pro, Max, Team and Enterprise subscriptions, with usage counting against your plan's limits. This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-now-has-a-design-assistant-too-150000903.html

Anthropic
newsdump.com6d ago
Read update
Anthropic now has a design assistant too

Steve Bannon says Anthropic 'had it right' in rejecting deal with the Pentagon

The Pentagon effectively blacklisted Anthropic following the failed deal. At least one person in Trump world believes Anthropic was correct to reject a deal with the Pentagon. "I think Anthropic had it right," former White House chief strategist Steve Bannon said during the Semafor World Economy Summit in Washington, D.C., on Thursday. Bannon said allowing the Pentagon to operate Anthropic's frontier model -- Claude -- with few guardrails is "too dangerous." Bannon, who's criticized the development of superintelligent AI, said there needs to be greater transparency about how weapons manufacturers will use AI. "The central thing is what is happening in the weapons lab with AI," Bannon said. "We have no earthly idea." The clash between Anthropic and the Pentagon began in February amid negotiations about the military using Claude. Defense Secretary Pete Hegseth pressed the company to accept its terms of use or risk losing its contract with the military. In a blog post, Anthropic CEO Dario Amodei said the company "cannot in good conscience accede" to the Pentagon's requests. Specifically, Amodei said the company had concerns over two issues: mass domestic surveillance and fully autonomous weapons. Anthropic's refusal drew a swift response from the Pentagon, which effectively blacklisted the company by labeling it a supply chain risk and barring federal agencies from using the tech. Anthropic filed a lawsuit in March against Hegseth, the Pentagon, the Executive Office of the President, and other federal agencies over the blacklist efforts. The Pentagon, meanwhile, quickly made a deal with Sam Altman's OpenAI. Despite the legal and business fallout, Anthropic won big in the court of public opinion. Claude temporarily overtook ChatGPT in the App Store, and the company garnered praise for standing its ground. More recently, Anthropic made headlines with the announcement of its new model, Mythos. 
The company said it paused the model's release due to cybersecurity concerns. "Claude Mythos Preview's large increase in capabilities has led us to decide not to make it generally available," the company wrote in the preview's system card. "Instead, we are using it as part of a defensive cybersecurity program with a limited set of partners."

Anthropic
DNyuz6d ago
Read update
Steve Bannon says Anthropic 'had it right' in rejecting deal with the Pentagon

Anthropic's Alarming Mythos Findings Replicated With Off-the-Shelf AI, Researchers Say - Decrypt

Findings indicate AI cyber capabilities may be spreading faster than expected. When Anthropic unveiled Claude Mythos earlier this month, it locked the model behind a vetted coalition of tech giants and framed it as something too dangerous for the public. Treasury Secretary Scott Bessent and Fed Chair Jerome Powell convened an emergency meeting with Wall Street CEOs. The word "vulnpocalypse" resurfaced in security circles. And now a team of researchers has further complicated that narrative. Vidoc Security took Anthropic's own patched public examples and tried to reproduce them using GPT-5.4 and Claude Opus 4.6 inside an open-source coding agent called opencode. No Glasswing invite. No private API access. No Anthropic internal stack. "We replicated Mythos findings in opencode using public models, not Anthropic's private stack," Dawid Moczadło, one of the researchers involved in the experiment, wrote on X after publishing the results. "A better way to read Anthropic's Mythos release is not 'one lab has a magical model.' It is: the economics of vulnerability discovery are changing." The cases they targeted were the same ones Anthropic highlighted in its public materials: a server file-sharing protocol, the networking stack of a security-focused OS, the video-processing software embedded in almost every media platform, and two cryptographic libraries used to verify digital identities across the web. Both GPT-5.4 and Claude Opus 4.6 reproduced two bug cases in all three runs each. Claude Opus 4.6 also independently rediscovered a bug in OpenBSD three times straight, while GPT-5.4 scored zero on that one. Some bugs (one involving the FFmpeg library to run videos and another involving the processing of digital signatures with wolfSSL) came back partial -- meaning the models found the right code surface but didn't nail the precise root cause. 
Every scan stayed below $30 per file: the researchers surfaced the same vulnerabilities Anthropic highlighted at negligible cost. "AI models are already good enough to narrow the search space, surface real leads, and sometimes recover the full root cause in battle-tested code," Moczadło said on X. The workflow they used wasn't a one-shot prompt. It mirrored what Anthropic itself described publicly: give the model a codebase, let it explore, parallelize attempts, filter for signal. The Vidoc team built the same architecture with open tooling. A planning agent split each file into chunks. A separate detection agent ran on each chunk, then inspected other files in the repo to confirm or rule out findings. The line ranges inside each detection prompt -- for example, "focus on lines 1158-1215" -- weren't chosen by the researchers manually. They were outputs from the prior planning step. The blog post makes this explicit: "We want to be explicit about that because the chunking strategy shapes what each detection agent sees, and we do not want to present the workflow as more manually curated than it was." The study doesn't claim public models match Mythos on everything. Anthropic's model went further than just spotting the FreeBSD bug -- it built a working attack blueprint, figuring out how an attacker could chain code fragments together across multiple network packets to seize full control of the machine remotely. Vidoc's models found the flaw. They didn't build the weapon. That's where the real gap sits: not in finding the hole, but in knowing exactly how to walk through it. But Moczadło's argument isn't really that public models are equally powerful. It's that the expensive part of the workflow is now available to anyone with an API key: "The moat is moving from model access to validation: finding vulnerability signal is getting cheaper; turning it into trusted security work is still hard." 
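The planner/detector split Vidoc describes can be sketched in a few lines. This is a minimal stand-in, with a fixed-window chunker in place of their model-driven planning step and a stubbed model call; the function and class names are ours, not Vidoc's:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    path: str
    start: int  # first line of the range, 1-indexed
    end: int    # last line, inclusive

def plan_chunks(path: str, num_lines: int, window: int = 60,
                overlap: int = 10) -> list[Chunk]:
    """Planning step: split a source file into overlapping line ranges.
    (Vidoc's planner chose ranges with a model; a fixed sliding window
    is the simplest stand-in for that step.)"""
    chunks, start = [], 1
    while start <= num_lines:
        end = min(start + window - 1, num_lines)
        chunks.append(Chunk(path, start, end))
        if end == num_lines:
            break
        start = end - overlap + 1  # overlap so bugs spanning a boundary aren't missed
    return chunks

def detect(chunk: Chunk, ask_model) -> list[str]:
    """Detection step: one agent per chunk, prompted with its line range."""
    prompt = f"Audit {chunk.path} for vulnerabilities, focus on lines {chunk.start}-{chunk.end}."
    return ask_model(prompt)

# Stub standing in for a real model API call; detection agents can run in parallel.
findings = [f for c in plan_chunks("tls/verify.c", 150)
            for f in detect(c, lambda prompt: [])]
```

The overlap between windows is the one non-obvious choice: without it, a vulnerability straddling a chunk boundary would be invisible to every detection agent. A real pipeline would then add the confirmation pass the article describes, where each candidate finding is checked against the rest of the repository.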
Anthropic's own safety report acknowledged that Cybench, the benchmark used to measure whether a model poses serious cyber risk, "is no longer sufficiently informative of current frontier model capabilities" because Mythos cleared it entirely. The lab estimated comparable capabilities would spread from other AI labs within six to 18 months. The Vidoc study suggests the discovery side of that equation is already available outside any gated program. Vidoc's full prompt excerpts, model outputs, and methodology appendix are published on the company's site.

Anthropic
Decrypt · 6d ago
Read update
Anthropic's Alarming Mythos Findings Replicated With Off-the-Shelf AI, Researchers Say - Decrypt

Anthropic: Claude Opus 4.7 has a 92% honesty rate, fewer hallucinations

Anthropic released a new hybrid reasoning model on Thursday: Claude Opus 4.7. Anthropic has a reputation as a safety-first AI company, and the Opus 4.7 system card reports that the model is less likely to hallucinate or engage in sycophancy than both prior Anthropic models and other frontier AI models. We dove into the Opus 4.7 system card to see exactly what Anthropic had to say about the model's safety, honesty, and sycophancy. Why put the TL;DR version at the end? Anthropic says Claude Opus 4.7 makes improvements on various types of hallucinations and overall honesty. Anthropic also gave the new model top marks on sycophancy and encouragement of user delusions. (Anthropic also reports that Opus 4.7 scores much better on these behaviors than Gemini 3.1 Pro and Grok 4.20.) "Claude Opus 4.7 is more reliably honest than Opus 4.6 or Sonnet 4.6, with large reductions in the rate of important omissions, and moderate improvements in factuality and rates of hallucinated input," Anthropic reports. Anthropic measures Claude's honesty and hallucination rates in multiple ways, but let's look at one representative example -- the Model Alignment between Statements and Knowledge (MASK) benchmark, developed by Scale AI and the Center for AI Safety. Claude Opus 4.7 had a MASK honesty rate of 91.7 percent, compared to 90.3 percent for Opus 4.6 and 89.1 percent for Sonnet 4.6. While that's lower than the 95.4 percent score achieved by Claude Opus 4.5, the new model performs better on other hallucination scores (more on that below). Interestingly, Claude Mythos also posted a 95.4 percent honesty rate, well ahead of the new release. Since Anthropic repeatedly compares Opus 4.7 to Claude Mythos, let's quickly review the differences between the two models.
Claude Opus 4.7 is the latest hybrid reasoning model available to paid Claude subscribers. Claude Mythos is an unreleased model that Anthropic has only made available to partners via Project Glasswing. Under normal circumstances, we would expect Claude Opus 4.7 to be Anthropic's most advanced and powerful model to date. However, Anthropic says it lags behind the unreleased Claude Mythos in key areas. Because of its advanced cybersecurity capabilities, Anthropic deemed Claude Mythos too dangerous to release to the public. Still, Claude Opus 4.7 improves upon Opus 4.6 in many ways, particularly advanced coding, visual intelligence, and document analysis, Anthropic says. When using Opus 4.7, how likely is Claude to tell a lie, invent facts, or deceive users? There isn't a single hallucination rate that Anthropic provides, because there are multiple types of hallucinations. So, this section is for the AI nerds. Anthropic identifies several distinct ways to measure hallucination and honesty beyond the MASK honesty rate covered above, and Claude Opus 4.7 shows similar gains on those other measures, according to Anthropic. At this time, we cannot independently verify Anthropic's results. To measure factual hallucinations, Anthropic used four different tests and recorded correct responses, incorrect responses, and abstentions. In this case, abstentions are good -- the model should decline to answer a question rather than guessing. Across all four tests, Opus 4.7 scored higher than Opus 4.6 and Sonnet 4.6 but lower than Claude Mythos. Anthropic measured Opus 4.7's input hallucination in two ways: "prompts requesting an unavailable tool" and "prompts referencing missing context." Opus 4.7 scored 89.5 percent on the former, beating Claude Mythos's 84.8 percent; on the latter, Opus 4.7 scored 91.8 percent, two points lower than Claude Mythos's 93.8 percent.
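The abstention-aware bookkeeping described above is easy to make concrete. This is a minimal sketch under stated assumptions -- the function name and the exact metrics are illustrative, not Anthropic's published scoring code:

```python
# Illustrative tally for an abstention-aware factuality test. The three
# response categories mirror the system card's description, but the
# derived metrics here are an assumption, not Anthropic's methodology.
from collections import Counter

def factuality_summary(responses: list) -> dict:
    """Each response is labeled 'correct', 'incorrect', or 'abstain'.
    Abstaining is treated as acceptable: only confident wrong answers
    count toward the hallucination rate."""
    counts = Counter(responses)
    total = len(responses)
    answered = counts["correct"] + counts["incorrect"]
    return {
        "accuracy_when_answering": counts["correct"] / answered if answered else 0.0,
        "hallucination_rate": counts["incorrect"] / total if total else 0.0,
        "abstention_rate": counts["abstain"] / total if total else 0.0,
    }
```

The point of splitting out abstentions is that a model declining to answer a question it doesn't know is scored better than one that guesses wrong.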
Those scores show just how stubborn AI hallucinations are: even for a leading AI company like Anthropic, models handle such prompts correctly only about 90 percent of the time. Anthropic's reported hallucination rates are similar to those of the latest OpenAI models, which provide responses with incorrect information between 5.8 percent of the time (with browsing enabled) and 10.9 percent (with browsing disabled), per OpenAI. What about Opus 4.7's honesty rate for false premises, i.e., will Claude tell a user they're wrong? According to the system card, Claude will push back on false premises 77.2 percent of the time. That's better than all other recent Anthropic models except for -- you guessed it -- Claude Mythos, which will reject false premises 80 percent of the time. There's not much new to report in terms of sycophancy. While Anthropic's expert red-team testers reported that Opus 4.7 was prone to "sycophantic agreement under pushback," it has very similar scores to prior models from Anthropic and OpenAI, and noticeably better scores than Gemini 3.1 Pro and Grok 4.20. Again, this is according to Anthropic. To measure bad behaviors like sycophancy and "encouragement of user delusion," Anthropic uses Petri 2.0, its open-source behavioral audit tool. This test scores models on a 1-10 scale, with lower scores reflecting better behavior. The Petri score isn't akin to a percentage, as it measures both the rate of a behavior and the severity. By that measure, Anthropic gave Opus 4.7 strong marks -- that is, low scores -- on both sycophancy and user delusions. Mashable reached out to Anthropic for comment but did not receive a response in time for publication.
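To make the rate-times-severity idea concrete, here is a hypothetical scoring function on a 1-10, lower-is-better scale. The formula is an assumption for illustration only; it is not Petri 2.0's actual scoring rule, which the article does not describe.

```python
# Hypothetical 1-10 behavioral score combining incidence rate with mean
# severity, as the Petri score is described. This formula is an assumed
# illustration, not Petri 2.0's real scoring rule.

def behavior_score(severities: list, transcripts: int) -> float:
    """severities: one entry per transcript where the behavior appeared,
    each a severity on a 0-1 scale; transcripts: total transcripts audited.
    Returns 1.0 (behavior never observed) up to 10.0 (severe and constant)."""
    if transcripts <= 0:
        raise ValueError("need at least one audited transcript")
    rate = len(severities) / transcripts
    mean_severity = sum(severities) / len(severities) if severities else 0.0
    return 1.0 + 9.0 * rate * mean_severity
```

Under a scheme like this, a model that is rarely sycophantic but egregiously so when it happens can score the same as one that is mildly sycophantic all the time -- which is why such a score "isn't akin to a percentage."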

Anthropic
Mashable · 6d ago
Read update
Anthropic: Claude Opus 4.7 has a 92% honesty rate, fewer hallucinations

Anthropic rolls out 'Claude Design' for AI-powered visual creation - The Tech Portal

Anthropic has introduced an experimental product called Claude Design, a tool that can turn simple text prompts into visuals like app designs, presentations, and marketing content. It is part of the company's effort to expand its Claude platform beyond text and coding into creative work. Users can generate and refine designs through conversation, making quick changes without using traditional design software. The feature is powered by the latest Claude models and is currently available as a research preview for Pro, Team, and Enterprise users. Claude Design allows users to describe ideas in natural language, like requesting a mobile app interface, a product landing page, or a pitch deck, and receive structured visual outputs within seconds. These outputs can then be iteratively improved through follow-up prompts, enabling a continuous feedback loop that mimics working with a human designer. One of the key aspects of Claude Design is its ability to go beyond static image generation. The system is designed to produce multi-screen layouts, interactive prototypes, and presentation-ready assets. This includes UI/UX mockups, marketing visuals, branded documents, and even early-stage product flows. And by integrating conversational editing with visual generation, the tool reduces the need to switch between ideation, drafting, and refinement stages across different software platforms. Among the models behind the tool is Claude Opus 4.7, which has been optimized for complex reasoning, structured outputs, and multimodal understanding. Claude Design also introduces features that resemble professional design workflows. Users can directly edit components within generated visuals, leave feedback in a comment-style format, and use dynamic controls to adjust design styles. The Dario Amodei-led firm appears to be targeting a wide spectrum of users with this release.
For non-designers, the tool offers a way to quickly translate ideas into tangible visuals without needing specialized skills. At the same time, for design teams, it can function as a quick prototyping tool, accelerating brainstorming and early-stage development. The product is being rolled out cautiously as a research preview, indicating that Anthropic is still evaluating performance, usability, and safety implications. Access is currently limited to higher-tier users, including Pro, Team, and Enterprise subscribers, which allows the company to gather feedback from more advanced use cases before a broader release. The latest development comes at a time when AI firms are increasingly targeting the creative space, with companies like OpenAI, Google, and Adobe, along with platforms like Canva and Figma, pushing toward more automated and conversational design tools. The Tech Portal is published by Blue Box Media Private Limited. Our investors have no influence over our reporting.

Anthropic
The Tech Portal · 6d ago
Read update
Anthropic rolls out 'Claude Design' for AI-powered visual creation - The Tech Portal