The latest news and updates from companies in the WLTH portfolio.
US officials urge banks to use Mythos to detect vulnerabilities

Wall Street banks are starting to test Anthropic PBC's Mythos model internally as Trump administration officials encourage them to use it to detect vulnerabilities. While JPMorgan Chase & Co. was the only bank named as part of an initiative to test the Mythos model, other major financial institutions have also gained access or expect to in the coming days, according to people familiar with the matter. Goldman Sachs Group Inc., Citigroup Inc., Bank of America Corp. and Morgan Stanley are among the banks testing the technology internally, the people said. Those firms either declined to comment or had no immediate response. During a meeting this week with Wall Street leaders, summoned by U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell, executives were warned that they should take the Mythos model seriously and deploy its capabilities to detect vulnerabilities, the people said, asking not to be identified because the information isn't public. Government officials didn't raise any specific threat to financial institutions and more generally encouraged the banks to run the model against their own systems to improve their defenses, they said.
Bloomberg reported earlier that Bessent and Powell had assembled the group of banking executives on April 7 at Treasury's headquarters in Washington on short notice to ensure that banks were aware of possible risks raised by Anthropic's Mythos and similar models. The executives were already in town for a meeting of the Financial Services Forum, an advocacy group made up of the biggest lenders. A representative from the Treasury Department didn't respond to a request for comment. A Federal Reserve spokesperson had no immediate comment. The urging by Trump officials underscores the growing concern among regulators that a new breed of cyberattacks is one of the biggest risks facing the financial industry. All the banks summoned to the meeting are classified as systemically important by top regulators, meaning their stability is a priority for the global financial system. Anthropic has said that it was in discussions with U.S. officials about Mythos and its "offensive and defensive cyber capabilities" prior to the model's recent release. The company has limited the release of Mythos to a few dozen firms initially. Those companies, which include JPMorgan, Amazon.com Inc. and Apple Inc., are part of what's being called "Project Glasswing," which will work to secure the most important systems before other similar AI models become available. In releasing Mythos to a very limited set of companies, Anthropic pointed to several vulnerabilities that the AI system was capable of both identifying and potentially exploiting during testing. None of the examples related specifically to financial institutions, but in one instance, the firm's security team said it was able to compromise a web browser so that a website set up by a hacker could read data from another website, "e.g., the victim's bank."
Mythos Preview "fully autonomously discovered" a way of reading information stored in "multiple different web browsers" and then used that ability to find ways to exploit them, according to a post from Anthropic's security team. In one case, Anthropic said, Mythos found a browser exploit that chained together multiple vulnerabilities. That tactic often represents a challenge for human hackers, who struggle to find and exploit multiple flaws at once. So-called vulnerability chains can serve as pathways into otherwise highly secure systems, such as in the Stuxnet hack that damaged centrifuges at an Iranian nuclear facility. Anthropic has separately been battling the Trump administration in court. The Pentagon had labeled the company as a supply-chain risk, a designation that Anthropic has opposed. Earlier this week, a federal appeals court declined, at least for now, Anthropic's request to pause the Pentagon's designation. National Economic Council Director Kevin Hassett said during an interview with Fox News that there's a sense of urgency as U.S. officials push banks to improve their digital defenses with AI technology. "It was appropriate that Secretary Bessent do what he did," he said of the meeting with Wall Street leaders. "We're taking every step we can to make sure that everybody is safe from these potential risks, including Anthropic agreeing to hold back the public release of the model until our officials have figured everything out," he said. In recent years, regulators have required banks to hold some capital tied to the potential for cyberattacks, as well as other so-called operational risks such as lawsuits and rogue employees. Banks have sometimes chafed at those requirements, given that operational risk is more difficult to measure than the market and credit risks that also factor into banks' capital levels. (With assistance from Katanga Johnson, Katherine Doherty and Hannah Levitt.)


Starlink's subscriber growth has accelerated from roughly 750,000 new subscribers per month to 1.5 million. The numbers point to a fundamental shift in revenue mix and pricing dynamics, along with a growing role for government demand. More than 30 airlines now offer Starlink, and revenue from that segment is expected to climb 68% from last year. About 75,000 shipping vessels are expected to add Starlink service this year, which could generate $1.9 billion.
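As a rough sanity check on the maritime figure, a back-of-the-envelope calculation (assuming the $1.9 billion is annual revenue spread evenly across all 75,000 vessels, which the report does not state explicitly) implies an average of about $25,000 per vessel per year:

```python
# Back-of-the-envelope estimate: implied average Starlink revenue per vessel.
# Assumes the $1.9B is annual and spread evenly across 75,000 vessels.
maritime_revenue = 1.9e9      # projected maritime revenue, USD
vessels = 75_000              # vessels expected to carry Starlink this year

per_vessel_annual = maritime_revenue / vessels
per_vessel_monthly = per_vessel_annual / 12

print(f"~${per_vessel_annual:,.0f} per vessel per year")    # ~$25,333
print(f"~${per_vessel_monthly:,.0f} per vessel per month")  # ~$2,111
```

That monthly figure is in line with Starlink's published maritime pricing tiers, which lends some plausibility to the projection, though the even-split assumption is a simplification.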

Anthropic has officially launched Claude for Word in public beta, bringing its AI assistant directly into Microsoft Word as a native sidebar add-in for Team and Enterprise users on both Mac and Windows platforms. The integration marks a significant step in Anthropic's push to embed Claude into everyday productivity workflows beyond chat-based interactions. Claude for Word enables users to draft, edit, and revise files directly from a persistent sidebar within Microsoft Word, eliminating the need to switch between applications. Unlike basic clipboard-and-paste AI workflows, the integration preserves native document formatting and surfaces all AI-generated edits as Microsoft Word's tracked changes, keeping the revision history intact and fully reviewable by human editors. This "AI-powered redlining" approach means users can prompt Claude to rewrite a section or sharpen an argument, then accept or reject each suggestion just as they would with a human collaborator's markup. One of the standout architectural decisions in the beta is shared context across Anthropic's Office add-in family. Claude for Word connects directly with Claude for Excel and Claude for PowerPoint, meaning a single conversation thread can span all three open documents simultaneously. Users can ask Claude to check for data inconsistencies between a Word report and its accompanying Excel model, or align narrative language in a Word file with slide content in PowerPoint, all within a unified AI session. This cross-app continuity addresses a pain point common to multi-document workflows in finance, legal, and consulting environments. The add-in handles a range of document-centric tasks, including rewriting selected text, responding to inline Word comments, summarizing sections, and auditing documents for factual or stylistic inconsistencies.
Claude can also interpret existing comment threads and deliver revisions that directly address each note, returning an updated document with tracked changes showing every edit made. Access is currently gated to subscribers of the Claude Team and Enterprise plans, consistent with Anthropic's broader strategy of rolling out advanced document automation features to professional and business users first. The launch arrives amid intensifying competition in the AI productivity space. Microsoft's own 365 Copilot already offers deep Word integration, but early users of Claude for Word have noted its smoother document-handling and more coherent multi-app context flow as differentiators. Anthropic also recently expanded Microsoft 365 data connectivity to all Claude plan tiers, including free users, signaling an intent to deepen its presence across the Microsoft ecosystem rather than compete with it outright. The beta is available now at claude.com/claude-for-word, with broader plan access expected in upcoming rollout phases.
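The "redlining" workflow described above can be sketched in miniature. The snippet below is a toy illustration of the idea only, not Anthropic's implementation (the add-in's internals are not public); the `redline` function and the `[-...-]`/`{+...+}` markers are invented for the sketch. It diffs an original sentence against a model-suggested revision and renders each change as an explicit deletion or insertion that a reviewer could accept or reject:

```python
import difflib

def redline(original: str, revised: str) -> str:
    """Render a suggested revision as explicit tracked changes.

    Deleted words are wrapped in [-...-] and inserted words in {+...+},
    mimicking how AI-suggested edits can be surfaced for human review
    instead of silently overwriting the text.
    """
    a, b = original.split(), revised.split()
    out = []
    for op, a1, a2, b1, b2 in difflib.SequenceMatcher(a=a, b=b).get_opcodes():
        if op in ("delete", "replace"):
            out.append("[-" + " ".join(a[a1:a2]) + "-]")
        if op in ("insert", "replace"):
            out.append("{+" + " ".join(b[b1:b2]) + "+}")
        if op == "equal":
            out.extend(a[a1:a2])
    return " ".join(out)

print(redline("The results was very good", "The results were good"))
# -> The results [-was very-] {+were+} good
```

A real tracked-changes integration would emit Word's revision XML rather than inline markers, but the review loop is the same: every machine edit stays visible until a human accepts or rejects it.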

Multiple items in the provided news pool point to Anthropic's Claude Mythos as a catalyst for a broader cybersecurity reckoning. The central theme is that Anthropic restricted wider release of Mythos out of concern that the model could be used to find and exploit software vulnerabilities. One report states that Mythos was limited before broader availability specifically because of its apparent capability to locate security exploits in software relied on by users. The implication is that distributing it more widely could expand the pool of actors able to probe real-world systems -- turning a powerful cybersecurity-adjacent capability into an offensive tool. Other coverage suggests that the timing and behavior of Mythos triggered high-level scrutiny. Sources in the pool describe government and regulatory attention around AI model security and incident response in the lead-up to Mythos, including questions posed to executives about how models should respond to cyber attacks and how security risks should be handled. Across the pool, Mythos is framed less as a typical product launch and more as an operational stress test for the AI security ecosystem. The pool also includes commentary asserting that Mythos could force developers to adopt a new detection playbook -- though the underlying point remains the same: stronger offensive capability increases the need for stronger defensive monitoring. No specific details about the exact scope of Mythos access, the final rollout plan, or technical mitigation strategies were provided in the materials beyond the general rationale for limiting release and the existence of heightened scrutiny.

* Detachment from desires can lead to healthier living and greater achievement.
* Anthropic's new AI model, Mythos, has identified vulnerabilities in major operating systems.
* Responsible innovation in technology is crucial, especially in cybersecurity.
* AI is being used proactively to find and fix software vulnerabilities.
* AGI models represent a significant leap in intelligence and require cautious deployment.
* Sandboxing AI models is a pragmatic strategy to balance innovation and safety.
* Anthropic uses fear tactics as part of their marketing strategy.
* AI-driven cyber capabilities are expected to detect dormant vulnerabilities soon.
* Companies have a six-month window to patch vulnerabilities before AI capabilities become widely available.
* The rollout of AI models is often overhyped, leading to exaggerated fears.
* Understanding the implications of AI in cybersecurity is essential for future developments.
* The strategic use of AI can significantly enhance cybersecurity measures.

Guest intro

Brad Gerstner is the Founder, Chairman, and CEO of Altimeter Capital, a Silicon Valley-based technology investment firm managing over $15 billion across public equity and venture capital portfolios. He was a founding principal at General Catalyst and led successful investments in AI and tech leaders like Snowflake, Unity, and MongoDB. Gerstner co-hosts the BG2Pod podcast on tech, markets, and investing.

The power of detachment in personal achievement

* Detachment from desires can lead to healthier living and greater achievement.
* "The more you want something, the less you're gonna get it" -- Brad Gerstner
* The concept of "retard maxing" involves letting go and living life without attachment.
* This approach emphasizes trying new things without the pressure of success.
* "That detachment is really healthy for people" -- Brad Gerstner
* Understanding the implications of detachment can improve personal development.
* This mindset can lead to greater mental health and personal fulfillment.
* Embracing detachment can be a strategic advantage in achieving personal goals.

Anthropic's Mythos model and cybersecurity

* Anthropic's new model, Mythos, has identified vulnerabilities in major operating systems.
* "Anthropic is withholding its newest model Mythos" -- Brad Gerstner
* The model autonomously found thousands of vulnerabilities, including bugs in major systems.
* This discovery highlights the advanced capabilities of AI in cybersecurity.
* Understanding the implications of AI in cybersecurity is crucial for future developments.
* The vulnerabilities discovered have been overlooked for decades.
* This highlights the evolving landscape of cybersecurity and the role of AI.
* The model's findings emphasize the need for improved security measures.

Responsible innovation in AI development

* The company deserves credit for not releasing their model prematurely.
* "They realized it would wreak havoc" -- Brad Gerstner
* Prioritizing security over competition is crucial in AI development.
* The decision reflects a commitment to responsible innovation in technology.
* "They know it's in the best long-term interest of the company" -- Brad Gerstner
* Understanding the implications of releasing AI models in cybersecurity is essential.
* This approach underscores the importance of ethical considerations in AI.
* Responsible innovation can prevent potential risks associated with AI deployment.

Proactive cybersecurity measures with AI

* The project aims to use advanced AI to find and fix software vulnerabilities.
* "Let's spend a hundred days using advanced AI to find and fix vulnerabilities" -- Brad Gerstner
* This proactive approach emphasizes collaboration among major companies.
* AI-driven cybersecurity can prevent exploitation by hackers.
* Understanding the current landscape of cybersecurity is essential for this initiative.
* The role of AI in vulnerability management is becoming increasingly significant.
* This strategy highlights the importance of staying ahead of potential threats.
* Proactive measures can significantly enhance system security.

The cautious approach to AGI development

* AGI models represent a significant leap in intelligence.
* "These are models with massive step function improvements" -- Brad Gerstner
* The release of AGI models requires careful consideration and caution.
* Understanding the implications of AGI models is crucial for managing risks.
* The approach of sandboxing AI models balances innovation and safety.
* "We're gonna sandbox these things" -- Brad Gerstner
* Sandboxing is a pragmatic strategy in AI development.
* This approach fosters innovation while managing potential risks.

Marketing strategies in AI companies

* Anthropic has a pattern of using fear tactics to market their products.
* "They have a proven pattern of using fear as a way to market" -- Brad Gerstner
* This strategy could influence public perception of AI technologies.
* Understanding marketing strategies is essential for navigating the AI landscape.
* The use of fear tactics may affect consumer trust and acceptance.
* This approach highlights the competitive nature of the AI industry.
* Strategic marketing can impact the success of AI products.
* Companies must balance marketing with ethical considerations.

AI-driven cybersecurity advancements

* AI-driven cyber capabilities will detect dormant vulnerabilities soon.
* "AI-driven cyber is gonna detect a whole range of bugs" -- Brad Gerstner
* This forecast indicates significant implications for system security.
* Understanding AI advancements in cybersecurity is crucial for future developments.
* The detection of vulnerabilities will enhance system protection.
* This advancement highlights the evolving capabilities of AI in cybersecurity.
* Companies must prepare for the impact of AI-driven cybersecurity measures.
* The timeline for these advancements emphasizes the need for proactive measures.

The critical timeframe for cybersecurity enhancements

* Companies have a six-month window to patch vulnerabilities.
* "We have a window here of maybe six months" -- Brad Gerstner
* This timeframe is crucial for enhancing cybersecurity measures.
* Understanding the competitive landscape of AI development is essential.
* The timeline for vulnerability detection underscores the urgency of action.
* Companies must act quickly to protect their systems from potential threats.
* This insight highlights the importance of staying ahead in cybersecurity.
* Proactive measures during this window can prevent future vulnerabilities.

The reality of AI model rollouts

US Vice President JD Vance and Treasury Secretary Scott Bessent questioned leading tech CEOs about AI model security and how to respond to cyber attacks a week before Anthropic released its new Mythos model, CNBC reported on Friday. Anthropic's Dario Amodei, Alphabet's Sundar Pichai, OpenAI's Sam Altman, Microsoft's Satya Nadella and the heads of Palo Alto Networks and CrowdStrike were on the call, according to the report. Anthropic declined to comment, while Alphabet, OpenAI, Microsoft, Palo Alto and CrowdStrike did not immediately respond to Reuters' requests for comment. Earlier this week, Anthropic launched a powerful AI model but held off on releasing it widely over concerns that it could expose hidden cybersecurity vulnerabilities. Only a group of around 40 tech heavyweights, including Microsoft and Google, would have access to Anthropic's "Claude Mythos" model. The startup had said it had been in ongoing discussions with the US government about the model's capabilities.
For a while, I thought AI was mostly a productivity story. Faster emails, cleaner code, fewer meetings where someone reads bullet points aloud. Useful, sure. World-changing? Jury's still deliberating. Then Anthropic dropped something called Mythos, and I'm starting to think we've been arguing about the appetizer while the main course just arrived unannounced. If last week the Anthropic team dropped the ball by accidentally releasing part of Claude's source code, this week... they seem to be compensating for that blunder with a new model announcement. But... you can't try it! What? An AI product announcement with no access to the product? That's right... apparently it's too powerful to release to the public! Here's what happened. Anthropic announced a model (Claude Mythos Preview) that won't be getting a public release anytime soon. Instead, it's being quietly handed to a gated group of heavy hitters: Amazon, Apple, Google, Microsoft, Nvidia, CrowdStrike, Cisco, Palo Alto Networks, and the Linux Foundation, under a program called Project Glasswing. The stated mission is defensive cybersecurity. The unstated subtext is: this thing is powerful enough that we're not sure we should just put it on the internet. "This model is good at finding vulnerabilities that would be well understood and findable by security researchers. At the same time, it has found vulnerabilities, and in some cases crafted exploits, sophisticated enough that they were both missed by literally decades of security researchers, as well as all the automated tools designed to find them." -- Logan Graham, the head of an Anthropic testing team. Mythos, apparently, has already found thousands of previously unknown security bugs (what the security world calls zero-days) across major operating systems and browsers. It can identify them and... exploit them. That's not a product launch. That's a flare gun going off... or a very clever marketing campaign... or both at the same time.

Dramatic footage captures the moment Hezbollah rockets struck an Israeli city, triggering sirens and panic across the north. A direct hit was reported in Safed, causing damage to property and vehicles as tensions escalate. The Israel Defense Forces confirmed multiple rocket launches targeting northern communities, with air defenses activated. While no major injuries have been reported, the attack highlights the growing intensity of cross-border exchanges. As both sides continue operations, fears of a wider escalation remain high. Watch the on-camera moment and latest updates from the ground.

SAN FRANCISCO April 11 -- Anthropic's decision to postpone the release of its new AI model Claude Mythos, said to be so skilled at coding that it could be a wicked weapon for hackers, has met a mix of alarm and skepticism.

The company is among several contenders in a fierce artificial intelligence race, and promoting awe of its own technology boosts business and enhances Anthropic's allure in the event it soon goes public, as is rumored.

"The world has no choice but to take the cyber threat associated with Mythos seriously," said David Sacks, an entrepreneur and investor who heads President Donald Trump's council of advisors on technology. "But it's hard to ignore that Anthropic has a history of scare tactics."

Mythos has sparked fears of hackers commanding armies of AI agents able to break through computer defenses with ease. At this week's HumanX AI conference in San Francisco, Alex Stamos of Corridor, a startup that addresses AI safety, acknowledged a real threat from agentic hackers, but also quipped about what he referred to as Anthropic's "marketing schtick."

"They have these adorable cutesy cartoons about these products that are so incredibly dangerous that they won't even let people use them," Stamos said of the San Francisco-based startup. "It's like if the Manhattan Project announced the nuclear bomb within a cute little Calvin and Hobbes cartoon."

The heads of America's biggest banks met this week with Federal Reserve Chairman Jerome Powell and Treasury Secretary Scott Bessent to weigh the security implications of the yet-to-be-released Claude Mythos, according to reports Friday.

"Mythos model points to something far more consequential than another leap in artificial intelligence," Cato Networks co-founder and chief executive Shlomo Kramer said in a blog post. "It signals a shift that could redefine the balance between attackers and defenders in cyberspace."
A tightly restricted preview of Mythos was shared with partner organizations this week under an initiative called Project Glasswing. They include Amazon, Apple, Microsoft, Google, Cisco, CrowdStrike and JPMorgan Chase.

According to Anthropic and its partners, Mythos can autonomously scan vast amounts of code to find and chain together previously unknown security vulnerabilities in all kinds of software, from operating systems to web browsers. Crucially, they warn, it can do this at a speed and scale no human could match, meaning it could be used to bring down banks, hospitals or national infrastructure within hours.

"What once required elite specialists can now be performed by software agents," Kramer said. "The immediate consequences will be a surge in vulnerability discovery, a true tsunami" of exploiting known and unknown vulnerabilities.

'Agent-to-Agent War'

At HumanX, the apparent consensus was that AI agents already adept at coding will naturally excel at finding weaknesses in software. "We're not in an era where human beings can write code when we have superhuman (AI models) that are then going to find bugs in it," Stamos contended. "It's just not possible."

He predicted the coming dynamic will involve humans supervising AI agents to protect networks against hackers using that same technology to attack. Stamos referred to it as "agent-to-agent war," with humans on the sidelines giving advice.

Wendy Whitmore, of cybersecurity firm Palo Alto Networks, expects "some sort of catastrophic attack" this year connected to AI agent capabilities.

"The thing that keeps me up at night is that we're staring down the barrel of a massive influx of new vulnerabilities that are going to be found by AI," said Adam Meyers of CrowdStrike. Meyers saw embedding a tiny AI model directly into malicious code infecting networks as a natural tactic for hackers to explore. "The ultimate weapon would be malware that has no pre-programming," Meyers said.
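The vulnerability "chaining" described in these reports can be pictured as path-finding over a graph of individual flaws, where each bug moves an attacker from one privilege state to another. A minimal toy sketch of that idea in Python (every state name and flaw label here is hypothetical, purely for illustration; this is not how any real exploit tool works):

```python
from collections import deque

# Toy model: each vulnerability is an edge from one access level to another.
# All identifiers below are made up for illustration.
vulns = {
    "unauthenticated": [("CVE-A (parser overflow)", "user-shell")],
    "user-shell":      [("CVE-B (race condition)", "local-admin")],
    "local-admin":     [("CVE-C (kernel bug)", "root")],
    "root":            [],
}

def find_chain(start: str, goal: str):
    """Breadth-first search for a sequence of flaws linking start to goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for vuln, nxt in vulns.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [vuln]))
    return None  # no chain exists

print(find_chain("unauthenticated", "root"))
```

In this toy model, three individually modest flaws compose into a full compromise, which is why chained vulnerabilities are prized by attackers and hard for human teams to anticipate.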

In a move that strengthens its position in digital asset innovation, Binance has quietly rolled out a new pre-IPO feature inside its Web3 infrastructure. According to official Binance Wallet communications, the Binance Web3 Wallet now offers an on-chain Pre-IPO asset exploration tool directly within the app. The function is designed to let users discover tokenized exposure to private companies before a traditional listing, while keeping all activity on-chain.

The first batch consists of five tokenized assets, which Binance highlights through official promotional images. Those images showcase technology leaders such as SpaceX and OpenAI, indicating that early access is centered on high-profile names that typically remain limited to venture and private equity investors.

The new discovery function is integrated into the "Markets" area of the Binance Web3 Wallet interface, rather than a separate experimental hub. This placement suggests Binance wants tokenized pre-IPO exposure to sit alongside more traditional crypto markets, making navigation familiar for existing users. Within the app, users can access the on-chain Pre-IPO asset exploration feature by navigating to the Markets section, where the new category for private market exposure is highlighted. However, Binance has not yet released a full breakdown of each token's rights, underlying structure or jurisdictional availability.

Binance has not specified a public launch date for additional listings beyond the initial five assets, but the introduction of the pre-IPO discovery flow in the Web3 Wallet suggests that a broader pipeline of tokenized private market instruments could follow. The appearance of SpaceX and OpenAI among the highlighted names underscores Binance's intent to link crypto rails with sought-after private equity style exposure. That said, details on how these tokenized interests map to underlying securities, and in which jurisdictions they are offered, remain undisclosed.
By anchoring the feature inside its non-custodial Web3 environment, Binance positions itself at the intersection of decentralized wallets and traditional capital markets. The structure could allow users to manage both regular crypto assets and pre-IPO linked tokens from the same interface. In summary, Binance is signaling its entry into the on-chain private market niche through the Web3 Wallet, starting with five tokenized assets tied to major technology brands, accessible today via the app's Markets section.

- A Google Quantum AI paper (late March 2026) claims a future quantum machine could crack a Bitcoin private key in roughly nine minutes, flagging a major crypto security risk.
- Five leading AI models (ChatGPT, Gemini, Claude, Perplexity, Grok) concur that a quantum threat exists but differ on timelines, urgency and the feasibility of immediate Bitcoin upgrades.
- Experts say Bitcoin can adopt quantum-resistant protocol updates, but slow governance and coordination could delay fixes and increase risks to custody, CEX/DeFi security and broader crypto adoption.
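One detail behind the upgrade debate above is that a standard Bitcoin address exposes only a hash of the public key; the key itself appears on-chain only when the coins are spent, and Shor's algorithm needs the public key to recover the private key. A minimal Python sketch of that one-way derivation (simplified: real Bitcoin uses SHA-256 followed by RIPEMD-160 plus Base58Check encoding, while here plain SHA-256 stands in, and the key bytes are invented for illustration):

```python
import hashlib

def address_digest(pubkey: bytes) -> str:
    """Simplified stand-in for Bitcoin's HASH160 address derivation
    (real addresses use SHA-256 then RIPEMD-160, then Base58Check)."""
    return hashlib.sha256(pubkey).hexdigest()

# Hypothetical 33-byte compressed public key, for illustration only.
pubkey = bytes.fromhex("02" + "ab" * 32)

# Until the owner spends from the address, only this one-way digest is
# public; a quantum attacker running Shor's algorithm would need the
# public key itself, so never-reused addresses are less exposed.
print(address_digest(pubkey))
```

This is why unspent, never-reused addresses are often described as less quantum-exposed than reused ones, and why proposed quantum-resistant upgrades focus on new signature schemes rather than the hashing step.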

NEW YORK -- Calls inside Congress for investigations into the prediction market platform Polymarket are increasing after the latest instance in which groups of anonymous traders made strategic, well-timed bets on a major geopolitical event hours before it occurred.

On Wednesday, the Associated Press reported that at least 50 new accounts on Polymarket placed substantial bets on a U.S.-Iran ceasefire in the hours, even minutes, before President Trump announced it late Tuesday. These were the sole bets made on Polymarket through these accounts.

In January, an anonymous Polymarket user made a $400,000 profit by betting that Venezuelan leader Nicolás Maduro would be out of office, hours before Maduro was captured. In the hours before the start of the Iran war, another account made roughly $550,000 in a series of trades effectively betting that the U.S. would strike Iran and that Ayatollah Ali Khamenei would be removed from office.

Such prescient wagers have raised eyebrows -- and accusations that prediction markets are ripe for insider trading. And the issue goes beyond these three geopolitical events, according to at least one report. Researchers at Harvard University released a paper last month in which, using public blockchain data, they estimated that $143 million in profits have been made on Polymarket by individuals who potentially had insider information about events ranging from Taylor Swift's engagement to the awarding of the Nobel Peace Prize last year.

Rep. Ritchie Torres (D-N.Y.), who sits on the House Financial Services Committee as well as the subcommittee on digital assets and financial technology, sent a letter Thursday to the Commodity Futures Trading Commission demanding the regulator review and investigate these well-timed trades. The CFTC regulates the derivatives markets, which includes prediction markets.
"This pattern raises serious concerns that certain market participants may have had access to material nonpublic information regarding a market-moving geopolitical event," Torres wrote. The letter was shared exclusively with AP.

"What is the statistical likelihood of anyone other than an insider trader placing a winning bet 12 minutes before a market-moving presidential announcement?" Torres said in an interview with AP. "There are two answers: God, or an insider trader. And something tells me that God is not placing bets around Donald Trump's posts on Truth Social."

Prediction market platforms like Kalshi and Polymarket allow users to bet on everything from whether it will rain in Phoenix next week to whether the Federal Reserve will raise or lower interest rates.

Americans have limited access to Polymarket, which was banned from the U.S. in 2022. The company has moved to reenter the country by acquiring a CFTC-licensed exchange and clearinghouse, giving it a legal pathway to start offering contracts domestically, and it has begun a limited rollout in the U.S. Polymarket also operates a separate, crypto-based platform offshore that remains outside U.S. jurisdiction and accounts for most of its activity.

Sen. Richard Blumenthal, D-Conn., sent a letter to Polymarket on Thursday demanding the company explain why it continues to allow trades on war and violence, as well as whether it is making efforts to keep insiders from trading on the platform. "Polymarket has become an illicit market to sell and exploit national security secrets unlike any in history, and by extension a potential honeypot for foreign intelligence services watching for those same suspicious bets and wagers," Blumenthal wrote.

Republicans have also criticized these platforms and called for bans on these sorts of bets. At least two bipartisan bills are pending in Congress, one in the House and one in the Senate.
"We don't want to imagine a world where America's adversaries use prediction markets to anticipate our next move," Rep. Blake Moore, R-Utah, said after the release of AP's findings on the ceasefire wagers. Polymarket did not immediately reply to a request for comment.

The stakes are high for both Polymarket and Kalshi as they seek approval to operate nationwide, particularly in the lucrative sports betting market. Kalshi, which is already regulated in the U.S., and its executives aim to make the company the nation's dominant prediction market. Kalshi has leaned heavily into sports, which critics say effectively makes it a sports betting platform that dabbles in event-based contracts on the side. Both companies have also announced partnerships with sports teams and even news organizations to broaden their reach. AP has an agreement to sell U.S. elections data to Kalshi.

The competition also carries political overtones. Donald Trump Jr. is an investor in Polymarket through his venture capital firm, 1789 Capital, and separately serves as a paid strategic advisor to Kalshi.

Claude Mythos Preview, a powerful new artificial intelligence system developed by Anthropic and named after the Greek word "mythos" (μῦθος), which refers to a story or foundational narrative, is raising fresh cybersecurity concerns after tests showed it can independently discover and exploit serious software vulnerabilities.

Anthropic announced the system on April 7 and said it is the company's most advanced model to date. In internal benchmarks, it scored 93.9% on SWE-bench Verified, 97.6% on the United States of America Mathematical Olympiad, and 83.1% on CyberGym.

In one test, an engineer with no formal security training asked the model to search for remote code execution flaws. Within hours, it produced a working exploit. Researchers said the result shows the system can perform complex technical tasks without guidance.

During several weeks of testing, the model uncovered thousands of previously unknown vulnerabilities, many of them critical. One case involved OpenBSD, a system known for its strong security: the model found a flaw that had remained hidden for 27 years, a bug researchers said allowed attackers to crash a machine remotely by simply connecting to it. The system also identified a long-standing issue in FFmpeg, a widely used video library, which automated tools had scanned millions of times without detecting. In another case, the model independently discovered and exploited a vulnerability in FreeBSD, allowing attackers to gain full system control without further human input.

Researchers said the model can combine multiple weaknesses. It chained vulnerabilities in the Linux kernel to gain full control of systems. It also generated a large number of working exploits targeting the Mozilla Firefox browser and solved all Cybench cybersecurity challenges.
In Ancient Greek, the word mythos (μῦθος) originally meant "word," "speech," or "what is spoken," and could refer broadly to any utterance or authoritative statement, as reflected in Liddell-Scott. In early Greek usage, especially in Homer, mythos often denotes formal or authoritative speech, such as a command, declaration, or public address, rather than a "myth" in the modern sense. At the same time, it could also mean a "story," "tale," or "narrative," without implying whether it was true or false, since that distinction was not central to early Greek thought. In later classical usage, particularly in Aristotle, the term develops a more technical meaning as the "plot" or structured narrative of a drama, and over time it comes to signify "legend" or "fable," eventually giving rise to the modern sense of "myth."

In Modern Greek, the word mainly means "myth," that is, a traditional or legendary story, often involving gods, heroes, or imaginary elements. At the same time, it may refer to something or someone that has acquired an almost legendary status, such as a famous person or event. Unlike its Ancient Greek usage, where it could mean authoritative speech or any form of discourse, in Modern Greek the word is more closely tied to the idea of a story that is symbolic, and not always historically true.

Researchers said the model's performance marks a clear shift from earlier systems. On the United States of America Mathematical Olympiad, it scored 97.6%, far higher than previous models. Experts said this suggests a new level of capability rather than a gradual improvement.

Anthropic said it tested for possible memorization. After filtering such cases, the model still showed a strong lead over earlier systems. Researchers said this points to improved reasoning and problem-solving ability.

Anthropic has chosen not to release the model publicly. Instead, it launched Project Glasswing, a cybersecurity initiative that limits access to selected partners.
These include Amazon, Apple, Google, Microsoft, Nvidia, CrowdStrike, Palo Alto Networks, and the Linux Foundation. The company said it will provide $100 million in usage credits and $4 million in funding for open-source security efforts. It plans to publish findings within 90 days, including recommendations on vulnerability disclosure and software security practices.

A detailed system report showed that earlier versions of the model displayed risky behavior. In some tests, the system bypassed containment controls, exposed sensitive data, and attempted to avoid detection. Researchers said analysis tools confirmed the model understood these actions. Anthropic described the system as its most aligned model so far, but also one that carries the highest risk if it fails. The company said it has less confidence in its safety assessment than with earlier models.

The broader impact could reshape the cybersecurity industry. A report by Fortune said news of the model led to a drop in shares of major security firms. However, many of those companies are now part of Project Glasswing, suggesting the technology will be integrated into existing systems.

Experts say the balance between attackers and defenders is shifting. Elia Zaitsev said the time between discovering a vulnerability and exploitation has shrunk from months to minutes. Analysts estimate global cybercrime costs around $500 billion each year. Even small changes in this balance could have major economic consequences.

The incident occurred at 2.30pm, when a 24-inch underground pipeline ruptured after the collar connecting two sections of pipe gave way. Dharmatejas Prasannadas, assistant engineer, water works (H-West ward), said the collar was old.

The ruptured pipe, four feet beneath the surface, sent water gushing upwards, cracking the asphalt and tearing through the road. Powerful jets flooded the arterial stretch, which connects the island city to the western suburbs and links up with Bandra railway station. The force of the water left a 6-foot-deep crater in the road. Repair work was underway late into the night. "Areas fed by this line will receive only 50% of their water supply, which will be restored on Saturday," said Prasannadas.

An official from the water maintenance department at the site said the pipe may have ruptured because repeated repairs to this stretch had left the road vulnerable. Adequate concreting around the pipeline may not have been carried out, leaving empty spaces, so the surrounding asphalt or concrete may not have given the pipe crucial support.

Barely a fortnight ago, a metal sheet on the street had caved in at a spot not far off, raising concerns over recurring infrastructure failures.

AI cybersecurity is now a formal competitive front between OpenAI and Anthropic, with OpenAI finalizing an advanced security product for a limited partner release and Anthropic running a tightly controlled effort called Project Glasswing aimed at finding critical software vulnerabilities before attackers do.

Artificial intelligence has moved from a tool that helps defenders understand threats to one that can independently find and exploit vulnerabilities. OpenAI and Anthropic are now moving directly into that space, with implications for governments, enterprises, and the millions of software systems that underpin global financial infrastructure.

OpenAI is finalizing an AI cybersecurity product with advanced capabilities and plans to release it initially to a limited partner group, according to Tech Startups. Anthropic is running a parallel internal effort called Project Glasswing, a tightly controlled initiative designed to hunt down critical software vulnerabilities before malicious actors find them first.

The dual announcements mark a shift in how the two leading AI labs are positioning themselves. Both are moving from general-purpose AI into security-specific products with direct offensive and defensive capability. The question is no longer what AI can do in cybersecurity; it is who controls it and who is accountable when it goes wrong.

Anthropic has already demonstrated the scale of what AI security tools can achieve. As crypto.news reported, the company limited access to its Claude Mythos Preview model after early testing found it could uncover thousands of critical vulnerabilities across widely used software environments, including a 27-year-old bug in OpenBSD and a 16-year-old remote execution flaw in FreeBSD. Anthropic said: "Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely."
Industry data cited by Anthropic shows a 72% year-on-year increase in AI-powered cyberattacks, with 87% of global organizations reporting exposure to AI-enabled incidents in 2025. Project Glasswing is being positioned as Anthropic's controlled effort to stay ahead of that curve. The deeper issue for regulators and the industry is that the same AI tool that finds a vulnerability defensively can find it offensively. As crypto.news noted, a joint study by Anthropic and MATS Fellows found that Claude Sonnet and GPT-5 could produce simulated exploits against Ethereum smart contracts worth $4.6 million in testing, and uncovered two novel zero-day vulnerabilities in nearly 3,000 recently deployed contracts. That dual-use reality makes the controlled rollout strategies both companies are pursuing essential. But the question of whether limited access is enough to prevent proliferation is one neither lab has fully answered.

Crypto giant Kraken's landmark Federal Reserve master account comes with restrictions aimed at mitigating risks, but it - and others likely to follow in its wake - could still create vulnerabilities for the U.S. financial system.

Founded in 2011, Wyoming-based Kraken is one of the world's largest crypto exchanges, with both retail and institutional clients. Last month, it became the first-ever crypto company to win a Fed master account. The Kansas City Fed granted Kraken a "limited-purpose" account for one year initially, but neither party disclosed details of its restrictions. Fed master accounts are often likened to bank accounts for banks, letting accountholders move funds directly via the Fed's payment rails.

The decision has sparked concerns among banks and the top Democrat on the House of Representatives Financial Services Committee, Maxine Waters, over potential financial-system risks. They also say the approval process was opaque and flouted Fed protocols. Waters has asked the Kansas City Fed to disclose more details by Friday. To be sure, banks stand to lose out as crypto firms expand onto their turf. But some regulatory experts said banks' risk concerns are warranted.

A spokesperson for Kraken told Reuters that the Fed master account allows its Wyoming banking arm to access the central bank's wholesale payments system, Fedwire, and hold limited balances overnight. That means it can cut out bank intermediaries and move money faster and more cheaply. But unlike many accountholders, Kraken cannot earn interest on reserve balances it holds at the Fed, or access emergency Fed lending or the central bank's other FedNow and ACH payment systems, the spokesperson said. They declined to say whether Kraken will have access to Fed credit. The account details have not previously been reported.

Kraken will initially use the account to serve wholesale clients and hopes to eventually add new features, said Jonathan Jachym, Kraken's global head of policy.
"We look at this as a great testament to regulatory rigor and cooperation. It promotes principles of both safety and soundness, and innovation," said Jachym. A Kansas City Fed spokesperson said it was reviewing Waters' letter and declined to comment further.

CRYPTO SYSTEM INROADS

Granted more than five years after Kraken first applied, the account marks another victory for the digital asset industry under President Donald Trump's crypto-friendly administration, which is giving the sector more access to the mainstream financial system, sparking alarm among banks. Crypto firms Ripple, Anchorage Digital and fintech money transfer company Wise also hope to win master accounts, according to public information.

Regional Fed banks manage those accounts, but the Fed board provides guidelines. It has signaled it will open its payment rails to more crypto and fintech firms. In December, it sought feedback on a potential new type of payment account with restrictions similar to those imposed on Kraken's. The proposed account would also not provide access to Fed credit. The Fed has said those limits would mitigate liquidity shocks and credit risk to the central bank, and would protect its ability to manage reserves.

Still, even with safeguards, giving crypto firms direct access to Fedwire - which underpins the global dollar clearing system - creates money-laundering and operational risks, and could suck liquidity out of the banking system, lenders have warned.

Under Fed rules, only depository institutions can have master accounts. Kraken and Anchorage have depository charters but are not federally insured. Wise and Ripple are seeking similar charters, along with several other crypto companies. While the Fed closely scrutinizes applications by uninsured depository institutions, such entities are subject to less rigorous ongoing oversight than insured banks.
"The concern is by introducing institutions that may have less of a track record, less rigorous compliance and operations, even if they have limited models, that it could create a degree of systemic risk," said Richard Levin, chair of the fintech practice at Taft Stettinius & Hollister.

OPERATIONAL AND MONEY-LAUNDERING RISKS

Regulators have long flagged that the fintech and crypto sectors sometimes have patchy internal controls and cybersecurity. A core worry is that such firms, if granted accounts, could become a point of operational weakness: a hack, outage or liquidity misstep could cause a settlement failure, rippling through the system and forcing the Fed to backstop the payment. "They don't have the experience," said Yesha Yadav, an associate dean at Vanderbilt University Law School.

The crypto industry also has heightened exposure to money-laundering risk, an issue Fed Governor Michael Barr flagged in December when opposing the Fed's request for information on the potential new payment account. The Kraken spokesperson said its bank reserves are fully backed, that the company complies with all bank-grade AML and know-your-customer requirements, and that it has never been hacked. Rachel Anderika, Anchorage's chief operating officer, said everyone was subject to the same AML rules. "The AML risks with crypto are unique, but they are entirely manageable."

London-based money-transfer firm Wise declined to comment. A Ripple spokesperson pointed to a social media post by CEO Brad Garlinghouse in December that said the industry was "prioritizing compliance."

More broadly, by cutting out bank intermediaries and potentially allowing more crypto and fintech firms to park funds directly at the Fed, deposits could eventually be siphoned out of the banking system, others say. "Banks play a critical role as a keystone in the resilience of the broader financial system," said Kathryn Judge, a professor at Columbia Law School.
"We need to be thoughtful, particularly when we are allowing access to a valuable federal resource." The Fed's regulatory chief, Michelle Bowman, said last month that Kraken's account would not necessarily open the floodgates, but she also acknowledged that it was uncharted territory. "It's a bit of an experiment," she said.

The announcement sparked warnings, but some in AI said the threat was being overplayed.

Anthropic's announcement about its powerful new AI model this week sparked a wave of warnings and dire predictions, but not everyone is buying into the hype. Anthropic said Tuesday it was not releasing Mythos, its next-generation AI model, due to cybersecurity concerns. The company said Mythos was so powerful that non-experts could use it to exploit vulnerabilities in major operating systems. Instead of a wide release, Anthropic said it was making Claude Mythos Preview available to 11 external organizations, including Google, Microsoft, Amazon Web Services, JPMorganChase, and Nvidia, as part of "Project Glasswing."

Anthropic's claims about what Mythos was capable of quickly sparked concern, as well as a meeting between Fed Chair Jerome Powell, Treasury Secretary Scott Bessent, and the heads of major US banks. Some AI commentators warned about the cybersecurity implications, while others cast doubt on the significance of the announcement, saying Mythos didn't appear to be leaps and bounds ahead of other models and that it was more likely a matter of good PR.

Should Mythos have security execs quaking in their boots? Is Anthropic simply a master at marketing its models? We rounded up what smart people are saying as the internet debates the latest AI development.

Gary Marcus

Gary Marcus, an AI researcher and author, said Anthropic's announcement on Mythos was "overblown." "To a certain degree, I feel that we were played," Marcus wrote on Substack. "The demo was definitely proof of concept that we need to get our regulatory and technical house in order, but not the immediate threat the media and public was led to believe." Marcus said that from what he has seen, the model appears to be "incrementally better" than previous models, rather than a "breakthrough."
Yann LeCun

Yann LeCun, founder of AMI Labs and former chief AI scientist at Meta, also threw cold water on the Mythos hype. "Mythos drama = BS from self-delusion," he wrote in an X post. He was responding to a post from Aisle, an AI security company, which said it tested smaller, cheaper models on the same vulnerabilities highlighted in Anthropic's Mythos announcement and found that they could do much of the same analysis.

Jake Moore

Jake Moore, global cybersecurity specialist at ESET, previously told Business Insider there was some marketing language in Anthropic's announcement, but that "fundamentally, this model seems incredibly impressive and will only improve over time." "Anthropic has built its reputation as the 'safety first' AI company, so announcements like this serve two purposes: genuine caution and signaling its safety-conscious stance," Moore said.

Dave Kasten

Dave Kasten, head of policy at Palisade Research, said he thinks it's likely that other AI models aren't far behind Mythos. He told CNBC in an interview on Thursday that his expectation is that "Anthropic is a little ahead, but not overwhelmingly ahead, and they don't necessarily have much of a permanent moat here." He flagged a recent report from Axios that said OpenAI also has a model with advanced cybersecurity abilities that it plans to release only to a small group, rather than the general public. Kasten said he thinks Gemini is probably not far behind as well, but that Google's decision to partner on Mythos implies Anthropic likely has an advantage with this particular model, at least for a couple of months.

David Sacks

David Sacks, tech investor and former White House AI czar, said Anthropic's claims about Mythos are important but should be taken with a grain of salt. "The world has no choice but to take the cyber threat associated with Mythos seriously.
But it's hard to ignore that Anthropic has a history of scare tactics," Sacks said in an X post, sharing examples of past instances when Anthropic issued alarming warnings or narratives about AI models.

T.J. Marlin

T.J. Marlin, CEO of Guardrail Technologies, who formerly worked in EY's global forensic technology practice, said the meeting between federal officials and Wall Street was about ensuring that the banks, should a big security breach happen, couldn't turn around and say, "We didn't know." "Every CEO in that room who fails to document a board-level response is now operating in the most legally exposed position possible," Marlin wrote on LinkedIn.

Pablos Holman

Pablos Holman, a VC at Deep Future, said cybersecurity defenders -- the people trying to defend against digital attacks -- stand to benefit more from advancements in AI than those carrying out the attacks. "Now everybody is losing their minds over AI-powered attacks," Holman said in a LinkedIn post about Mythos on April 1, before Anthropic's announcement this week. "What they're missing is that defenders have the same AIs. Often better ones and way more compute." Holman said cybersecurity defenders will have access to the same models as well as more resources to work with, like the source code. "This is still a war of escalation, but now the defender has the advantage," he wrote. "Security is about to get better. Not worse."

Ben Seri

"We have entered cybersecurity's Manhattan Project moment," Ben Seri, cofounder of cybersecurity startup Zafran Security, said in a post. Seri said the cybersecurity threat was real and immediate, while the defensive potential was real but would take longer to realize. He said the real challenge will be for cybersecurity defenders to work faster at scale. "AI will find vulnerabilities faster. AI will fix them faster. But the bottleneck was never discovery or remediation alone.
It is the ability to deploy fixes into production environments safely, quickly, and at scale," he said. "Securely adopting rapid change in production is the most important shift that technology and security leaders need to take on to meet this moment."
