The latest news and updates from companies in the WLTH portfolio.
The Google AI coding strike team is the company's latest high‑stakes bet in the race to build powerful AI tools that can write and maintain complex software. Google DeepMind has formed a dedicated "strike team" of researchers and engineers to strengthen its AI coding models, following concern that rival Anthropic has pulled ahead with its Claude Code assistant. The initiative was first reported by The Information and is aimed at improving performance on long, multi-step coding tasks where current systems often struggle.

The group is led by Sebastian Borgeaud, who previously served as pre‑training lead for Google's Gemini models, underscoring how central the project is to the company's AI roadmap. Senior figures, including Google co‑founder Sergey Brin and DeepMind CTO Koray Kavukcuoglu, are said to be closely involved.

Anthropic has focused heavily on AI‑assisted programming with Claude Code, an agentic coding tool that can understand large codebases, edit files and run commands through natural language. The company has said that a majority of its own code is now written with the help of AI, highlighting how deeply such tools are woven into its engineering workflow. This has added to the urgency inside Google, where internal sentiment holds that Anthropic's coding experience is currently ahead of Gemini's. External commentary has also framed Google's move as a structural response rather than a routine product update, intended to close what some describe as an "agentic coding gap."

AI‑assisted development is already significant inside Google. Chief financial officer Anat Ashkenazi recently said around half of the company's code is now written by AI coding agents before being reviewed by human engineers. That figure has climbed from "well over 30%" of new code in early 2025, reflecting a rapid shift in day‑to‑day software development. Until now, many of Google's AI products were designed primarily for external customers, but the strike team is expected to push deeper internal adoption, supported by an internal leaderboard tracking how often staff use AI coding tools.

The Google AI coding strike team shows how far big tech is willing to go to automate programming and stay competitive. If the effort succeeds, Gemini‑powered tools could narrow the gap with Anthropic's Claude Code and make AI‑generated code an even larger share of Google's vast software stack. But the outcome of this race will hinge on whether these systems can reliably handle long‑term, mission‑critical work without eroding trust among human developers.

The conventional wisdom in large organizations is that speed and governance exist in permanent tension. Move fast and you compromise controls. Enforce controls and you slow everything down. Most enterprise teams accept this trade-off as a structural fact of organizational life rather than a problem that can be solved. But the teams inside large organizations that consistently move faster than their peers have discovered something important: the bottlenecks are almost never caused by the governance requirements themselves. They are caused by governance systems that were not designed to scale. When the tools that enforce oversight are also the tools that enable execution, speed and control stop being opposites. That is the operating principle behind the most effective project management tools in enterprise environments today.

Enterprise knowledge bases fail in predictable ways. They start well-organized, grow without structure, become unnavigable, and are eventually abandoned in favor of email chains and personal drives. Lark Wiki is built to resist that pattern by giving large organizations the tools to maintain knowledge quality as the content volume and the team size both grow. The result: the Wiki becomes a living operational reference rather than an archive. Knowledge that was previously locked in individual inboxes or inaccessible legacy systems becomes searchable and current, and the access model ensures that the right people can find it without compromising the security requirements that enterprise governance demands.

Enterprise meetings carry a complexity that standard video conferencing tools struggle to handle. A global all-hands, a cross-functional strategy session, or a company-wide training event requires an infrastructure that can manage hundreds of participants, facilitate structured small-group discussion, and keep everyone engaged without losing the conversational quality of a smaller call. Lark Meetings is built for that range. The result: enterprise-wide events run with the scale of a broadcast and the quality of a conversation. Large teams can gather, divide into working groups, and reconvene without the coordination overhead that typically makes organization-wide meetings feel like logistical exercises rather than productive sessions.

Enterprise operations teams spend a disproportionate amount of time producing reports rather than acting on them. The data exists across multiple systems; someone has to compile it, format it, and present it on a cycle that is always slightly behind the decisions it is meant to inform. Lark Base replaces that cycle with a live operational view that updates itself. The result: reporting stops being a dedicated activity and becomes a continuous background process. The operational team's time shifts from compiling information to acting on it, which is where enterprise agility is actually won or lost.

In large organizations, approval processes are necessary but often badly designed. They create accountability without creating speed, because the routing logic is built for compliance rather than efficiency. Lark Approval is designed to satisfy both requirements simultaneously. The result: governance requirements are met automatically by the routing logic rather than manually by an administrator. Approvals move faster because the system does the compliance work, and the audit trail that regulators and internal risk teams require is maintained as a byproduct of normal operations. A rough sketch of this idea appears below.
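To make the idea concrete, here is a minimal, purely hypothetical sketch of compliance-aware approval routing. It is not Lark's API or product behavior; the rule names, thresholds, roles, and fields are invented for illustration. The point is that the same rule table that speeds up routing also produces the audit record as a side effect of normal operation.

```python
# Hypothetical illustration only: generic compliance-aware approval routing.
# Rule names, thresholds, roles, and fields are invented; this is not Lark's API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ApprovalRequest:
    requester: str
    department: str
    amount: float
    audit_log: list = field(default_factory=list)

# Routing rules: (condition, required approver role), evaluated in order.
# The table encodes the compliance policy and drives the routing at the same time.
ROUTING_RULES = [
    (lambda r: r.amount >= 100_000, "cfo"),
    (lambda r: r.amount >= 10_000, "finance_director"),
    (lambda r: r.department == "legal", "general_counsel"),
    (lambda r: True, "line_manager"),  # default route
]

def route(request: ApprovalRequest) -> str:
    """Pick the first matching approver and record the decision for audit."""
    for condition, approver_role in ROUTING_RULES:
        if condition(request):
            request.audit_log.append({
                "routed_to": approver_role,
                "at": datetime.now(timezone.utc).isoformat(),
                "reason": "matched routing rule",
            })
            return approver_role
    raise RuntimeError("no routing rule matched")  # unreachable given the default rule

# Example: a 25,000 purchase request is routed to the finance director,
# and the audit trail is generated as a byproduct of routing.
req = ApprovalRequest(requester="a.chen", department="ops", amount=25_000)
print(route(req), req.audit_log)
```

In a sketch like this, "the system does the compliance work" simply means the policy lives in data the workflow engine evaluates, rather than in an administrator's head.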
Enterprise documents fail at the same point: they are produced, reviewed, and filed, but the actions they were supposed to generate never get formally captured or tracked. Lark Docs changes the relationship between documentation and execution by making documents an active part of the workflow rather than a record produced after the work is done. The result: documents become the place where accountability is established, not just where work is described. The enterprise team gains a documentation layer that enforces follow-through by design rather than depending on individuals to manually transfer action items from documents to task trackers.

When large organizations audit their operational speed, the bottlenecks almost always trace back to the same root cause: information that should be visible is not, and approvals that should be automatic are manual. The leadership team evaluates Google Workspace and similar platforms as the baseline infrastructure, then adds governance tools on top. The result is a system where the work platform and the oversight platform are separate, and coordination between them requires dedicated operational staff. Lark collapses that structure. The governance layer lives inside the same environment as the execution layer, so the compliance overhead does not sit on top of the work but runs alongside it. Approvals happen in the same platform as the documents that triggered them. Access controls are built into the knowledge base rather than managed separately. Audit trails are generated by the tools the team uses every day rather than by a parallel compliance system.

Enterprise agility is not about removing governance. It is about building governance into the infrastructure so that it accelerates decisions rather than delaying them. Large organizations that operate on a unified set of productivity tools where oversight and execution share the same environment move faster than their peers not because they have relaxed their controls, but because their controls no longer require a separate system to enforce them.

Anthropic is investigating a claim that a small group of people gained unauthorized access to its Claude Mythos model, a cybersecurity tool designed to identify and exploit vulnerabilities in major operating systems and web browsers. The company confirmed it is looking into reports that unauthorized users accessed the Mythos preview through a third-party vendor environment, according to a statement provided to Bloomberg and reported by multiple outlets including The Guardian and The Verge. Anthropic stated that it has found no evidence that the unauthorized access has impacted its systems or extended beyond the third-party vendor's environment.

The Mythos model, part of Anthropic's Project Glasswing initiative, has been made available to a select group of companies including Apple, Nvidia, Google, Amazon Web Services, and Microsoft for testing purposes. The company has warned that Mythos could pose significant cybersecurity risks if misused, describing it as capable of enabling cyber-attacks when directed by a user to exploit vulnerabilities.

According to Bloomberg, the group that accessed the model consists of a "handful" of individuals who gained access on the same day Mythos was announced as being released to initial vendor partners. The group reportedly used a combination of a contractor's access and commonly used internet sleuthing tools to locate and access the model, with one member identified as a worker at a third-party contractor for Anthropic. Members of the group are part of a Discord channel focused on uncovering information about unreleased AI models and have been using Mythos regularly since gaining access, providing screenshots and a live demonstration to Bloomberg as evidence. The group told Bloomberg they are interested in "playing around" with the technology rather than causing harm, and have not run cybersecurity prompts designed to exploit vulnerabilities.

Anthropic continues to investigate the incident and has not released further details about the third-party vendor involved or the specific methods used to gain access.

Exchanges are clamoring for SpaceX's highly anticipated IPO, but could Elon Musk and Company defy norms and list on the Texas Stock Exchange rather than the Nasdaq or NYSE? Trader Talk host Kenny Polcari is joined by Amy Wu Silverman, Head of Derivatives Strategy at RBC Capital Markets; Kevin Kelly, founder, CEO and CCO of Kelly Intelligence; and former Ellevest Head of People Ops Amanda Polcari to discuss.
In early April, Anthropic announced its latest Mythos model, saying it would remain exclusive to select tech companies for cybersecurity purposes. Anthropic has now confirmed it's actively investigating an incident in which a group claims to have gained unauthorized access to Mythos.

A Bloomberg report, citing anonymous sources, documentation, and examples of Mythos up and running, alleges that a group of users accessed the Mythos model without Anthropic's authorization. Mythos is said to be capable of exploiting vulnerabilities in "every major operating system and every major web browser" if the user intends to do so, according to Anthropic. At launch, Anthropic claimed to have found "thousands of high-severity vulnerabilities" in everyday software. Yesterday, Mozilla claimed to have found 271 vulnerabilities within Firefox through its use of Mythos.

Anthropic previously said it would restrict access to the model to 11 tech companies through its Project Glasswing program. Restricting users means software makers can fix any identified issues before bad actors gain access to similar AI models. However, that exclusivity may not have been as strong as first thought: the group of users, who communicate in a private Discord server, claim to have had access since day one. If true, they've had access to the software for over two weeks.

The group told Bloomberg that it accessed the tool through a member's third-party contractor status with Anthropic. It also used tools typically employed by cybersecurity researchers, along with knowledge of where Anthropic hosts other models, to better predict where Mythos would sit within its systems. A spokesperson for Anthropic told Bloomberg, "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." The company says there's currently no evidence that access went beyond the vendor's own tools.

Speaking with Bloomberg, the group says it's not intending to cause any damage with its access to Mythos. That may not be true for other groups who may be trying to gain access to Mythos themselves.

SAN SALVADOR, El Salvador, April 22, 2026 (GLOBE NEWSWIRE) -- Bitget Wallet, the everyday finance app with over 90 million users globally, has integrated Polymarket, the world's largest prediction market, to bring AI-powered prediction market trading to its self-custodial wallet platform, allowing users to access real-world event markets through a seamless mobile-first trading experience. The integration brings prediction markets into Bitget Wallet's decentralized interface, expanding access to information-driven trading tied to real-world events and market-moving developments. The launch comes as prediction markets continue to emerge as one of the fastest-growing segments in digital finance, with industry research estimating trading volume could grow more than 400% between 2024 and 2026, underscoring rising global demand for the category. Prediction markets are moving from standalone platforms into core financial infrastructure, and Bitget Wallet provides a distribution layer where users already holding funds can now seamlessly express views on real-world outcomes, from elections to macro trends, within a familiar, everyday interface.

WASHINGTON -- AI coding platform Cursor is teaming up with SpaceX in an effort to ramp up the development of its artificial intelligence coding tools, the company announced. The startup said the partnership will allow it to expand its model training capabilities by tapping into advanced computing infrastructure, helping it build more powerful versions of its AI systems. In a social media post, SpaceX founder Elon Musk said pairing the two companies could allow them to "build the world's most useful" AI models.

Cursor, which develops "agentic" coding models designed to assist with software development, said its progress has been closely tied to the amount of computing power available. Its initial model, Composer, launched less than a year ago, followed by newer versions that improved performance through expanded training and reinforcement learning. But the company said its growth has been limited by access to large-scale computing resources, a common challenge across the AI industry.

Through the new partnership, Cursor will use infrastructure tied to SpaceX's affiliated AI company, xAI, including its "Colossus" system, to significantly scale up training. The company said increased compute capacity has already proven critical to improving its models, with each upgrade leading to more advanced capabilities at lower cost.

IFA animal health chair David Hall said the associated information campaign was "inadequate" and "deeply confusing", leaving farmers and mart operators unclear on how to apply the rules in practice. "Minister Heydon's TB Action Plan has now been implemented, but the way in which it has been communicated to stakeholders has been shambolic," he said.

An information leaflet issued to herd owners failed to clearly outline the new testing requirements, instead offering vague references to changes across different categories of animals, he added. Subsequent guidance has compounded the issue, according to the IFA, particularly around the introduction of three herd categories. "Farmers have been left in the dark as to whether they are in Category 1, 2 or 3. These labels are meaningless without clear explanations, and there has been no effort made to directly inform individual farmers of their status," Mr Hall said.

He warned the lack of clarity had created widespread uncertainty at mart level, with operators now required to enforce rules that had not been properly communicated. Further confusion has arisen from inconsistencies in department guidance. A QR code in the leaflet directs farmers to the department's TB Hub, where frequently asked questions are said to contradict the published TB Action Plan. "If the department cannot get its own information straight, how can it expect farmers or marts to interpret and implement these rules correctly?" Mr Hall said, adding the situation could have been avoided with proper stakeholder engagement. He called for a substantial lead-in period before enforcement, warning it would be unacceptable for farmers to face penalties under rules that remain unclear. The reality is that the department was not ready to implement this plan. The minister and his officials must now act quickly to fix the problems of their own making and restore confidence.

ICMSA deputy president Eamon Carroll said the "utter confusion" went "far beyond normal 'teething problems'," and highlighted particular concern around the online compliance certificate used during cattle sales. Farmers have raised issues with a warning linked to "H" animals, which states the animal was part of a 'high-risk' cohort. According to the ICMSA, this has led buyers to wrongly assume herds have TB issues and withdraw from sales. "The department has confirmed that this wording is just a warning, and that genuinely high-risk animals are identified differently. In that case, why include a carelessly misleading designation at all?" Mr Carroll said. He warned the system risks undermining cattle movements and called for the warning to be removed immediately.

The group is said to be a part of a private Discord community that hunts for information about unreleased AI models.

Earlier this month, Anthropic released a preview of what it described as its "most powerful model yet," called Mythos, which it said has advanced cybersecurity capabilities. Experts and even Anthropic itself have warned that the model could be extremely dangerous in the wrong hands, potentially enabling severe cyberattacks faster than companies can respond. That concern is partly why the company opted for a limited rollout of the model to major technology and financial institutions under an initiative called Project Glasswing. Since the public reveal of the model, it has created a frenzy among security experts and U.S. government officials. Reports say the technology has even prompted emergency discussions between officials and major Wall Street banks in recent days.

But despite the tight restrictions Anthropic placed on access to the model, a small group of outsiders reportedly gained entry anyway. According to a report from Bloomberg, a handful of users in a private online forum managed to access Mythos. The access allegedly occurred on the same day the model was announced for limited testing, though details are only now coming to light. The information came from an individual familiar with the situation, who reportedly provided screenshots and a live demonstration of the model to verify the claim.

Unauthorized access to such a system raises concerns because of what the model is capable of doing. In Anthropic's own words, Mythos can identify and exploit vulnerabilities "in every major operating system and every major web browser when directed by a user to do so." In simple terms, the model can scan software for security flaws. In theory, that capability could help organizations defend themselves or allow attackers to locate weaknesses in their systems. That dual-use potential is a key reason Anthropic restricted the release. The company reportedly shared access to Mythos with a small number of organizations, including companies such as Apple, Amazon, and Cisco Systems, allowing them to test their own infrastructure for vulnerabilities before a wider rollout.

According to Bloomberg, the group responsible for the alleged unauthorized access is part of a private Discord community that searches for information about unreleased AI models. Members reportedly use bots and other tools to scan sites such as GitHub for technical clues. One individual in the group is said to have had contractor-level access to a third-party vendor environment used by Anthropic. That access reportedly helped the group get closer to the Mythos system. The method used to locate the model appears to have been surprisingly simple: the group allegedly made "an educated guess" about the model's online location based on knowledge of the naming patterns Anthropic uses for its systems. Some of those technical details were reportedly exposed in a recent data breach involving Mercor, a company that works with several AI developers.

Responding to the report, Anthropic said it is investigating the situation. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," the company said in a statement. Anthropic added that it currently has no evidence the reported access went beyond the vendor environment or affected its internal systems. According to Bloomberg's source, the group did not use the model to attempt cyberattacks. Instead, they reportedly ran simple tests, such as asking the model to build basic websites.

A group of unauthorized users reportedly has gained access to Anthropic's controversial Claude Mythos Preview AI frontier model despite the AI vendor's efforts to keep it out of public hands by limiting the organizations that can use it. Bloomberg reported that the unnamed group had tried multiple ways to gain access to the AI model since it was first announced earlier this month, and finally was able to get through via a third-party vendor. The users, who accessed Mythos on the day it was announced, are part of a Discord online forum group known to search for information about unreleased AI models. According to the report, the group, using knowledge it had about a format Anthropic had used for other models, "made an educated guess about [Mythos'] online location." A person inside the group that Bloomberg communicated with told the news outlet that they were "interested in playing around with new models, not wreaking havoc with them."

In a statement to TechCrunch, an Anthropic spokesperson said the company was investigating the claim of unauthorized access to Mythos through a third-party vendor, and that the company has not found indications that the group's activities have affected its systems.

Anthropic's announcement of Mythos on April 7 sent shockwaves through the cybersecurity industry. The vendor described a frontier model that is significantly better than any other developed at detecting and identifying software vulnerabilities, noting that in tests, Mythos was able to find a security flaw that had been present yet undetected for 27 years. However, the model also is very good at creating exploits for the vulnerabilities, which convinced Anthropic executives to limit the release of Mythos to a select group of organizations that will use it to create stronger defenses as part of the AI vendor's new Project Glasswing. OpenAI a week later followed a similar path with the unveiling of GPT-5.4-Cyber, a frontier model focused on cybersecurity that the vendor also designated for particular users, though granting access to more organizations and individuals than Anthropic.

The introduction of Mythos ignited debates about everything from the state of cybersecurity as such autonomous AI models come into play, to what organizations need to do to secure their IT environments, to whether Mythos' capabilities are unique. However, enterprises and their security teams need to pay attention, according to Brian Fox, co-founder and CTO of Sonatype, which provides a software supply chain management platform. "If the early reporting is right, Mythos could be a watershed moment," Fox said. "What is not new is the reality it is forcing people to confront. Beneath the AI framing sits the same software supply chain reality we have been discussing for years: dependencies, build pipelines, third-party software, and infrastructure remain the attack surface." Fox added that "what changed is speed. AI can now find and operationalize weaknesses across that stack faster than most organizations can inventory, prioritize, and patch them. What we are seeing in response to the Mythos news is many organizations coming to terms with a reality that has existed for a long time: they are not actually in control of their software supply chains."

Tech vendors are beginning to roll out offerings aimed at helping organizations deal with the cyber risks posed by such frontier models. IBM Consulting last week introduced IBM Autonomous Security, a collection of specialized agents created to make enterprises' often sprawling security stacks work in a more unified and coordinated fashion, creating what the vendor called "a systemic defense" that is needed to address the autonomous and fast-moving threats from such models. At the same time, IBM is offering a new service for assessing a company's security weaknesses and responding to them. Likewise, Palo Alto Networks launched Unit 42 Frontier AI Defense, an offering that uses AI models to help organizations "identify and validate the exposures most likely to be chained into real attacks before attackers weaponize them," with Sam Rubin, senior vice president of consulting and threat intelligence at Unit 42, writing that "frontier AI is changing what is possible for attackers. In the hands of defenders, it can become a decisive advantage."

Mythos and GPT-5.4-Cyber have garnered much of the attention about the cybersecurity risks such frontier models represent. However, some security vendors wrote that they tested publicly available AI models and found that many of them came close to or matched Mythos' ability to find and identify zero-day vulnerabilities. Executives with startup Aisle, which offers an AI-native app security platform, wrote that over the past year, they had built an AI system for discovering, validating, and patching zero-days in open source software. In tests, they "took the specific vulnerabilities Anthropic showcases in their announcement, isolated the relevant code, and ran them through small, cheap, open-weights models. Those models recovered much of the same analysis." The models included GPT-OSS-120b, DeepSeek R1, Qwen3, and Gemma 4. The results varied depending on the model and the task, they wrote.

Researchers with Vidoc Security Lab, another AI-based cybersecurity startup, wrote that they came up with similar results using OpenAI's GPT-5.4 and Anthropic's Claude Opus 4.6 models running OpenCode, an open source AI coding agent, to scan for security flaws in open source software like OpenBSD and FFmpeg. "If public models can already do useful work inside that kind of workflow, then the story is not 'Anthropic has a magical cyber artifact,'" they wrote. "The story is that serious AI-assisted vulnerability research is no longer confined to a single frontier lab. That does not make the workflow easy. It means the moat is moving up the stack, from model access to validation, prioritization, and remediation."

Alphabet Inc.'s Google unveiled a slew of tools to build AI agents aimed at helping companies automate tasks, in the tech giant's latest attempt to take on OpenAI and Anthropic PBC in the burgeoning market. At an annual conference in Las Vegas, Google's cloud computing unit on Wednesday showcased a set of tools that can create AI agents and track their work within companies, including a dedicated inbox for the virtual bots to post information and progress reports. Google also introduced updates across its Workspace productivity suite and offered up a vision in which AI agents dramatically overhaul the day-to-day routines of the average worker.

The company's researchers invented much of the technology that touched off the current AI boom, but now Google is in a tight race with leading AI agent makers to win business from corporate customers clamoring for the technology to boost productivity. With the company pouring as much as $185 billion into capital expenditure this year alone, investors are hoping that it can drum up enough new business to justify the steep investment in AI.

The search giant is hoping that its combination of chips, AI models and developer tools will give it an edge. It's poised to announce a new generation of custom-designed chips, including one dedicated to inference, or running AI models after they've been trained. With this push, Google will further challenge market leader Nvidia Corp. in a fast-growing category for semiconductors that's fueled by surging adoption of AI software. "This isn't about offering individual services that can be cobbled together; it is about providing a comprehensive backbone for innovation," Google Cloud CEO Thomas Kurian said in a blog post.

A particular focus for Google is AI coding, a market where company leaders are growing increasingly worried that they have fallen behind. Many engineers in Silicon Valley toggle between Anthropic's Claude Code and OpenAI's Codex to see which program will give them the best results, but Google often isn't in the conversation, startup founders told Bloomberg News. In a bid to court developers, Google said its Gemini Enterprise Agent Platform would include new features such as Memory Bank and Memory Profile to help agents remember past interactions with users, a weakness of some early AI tools. Another new feature, Agent Simulation, will help developers more thoroughly test how the tools work before launch.

Anthropic has begun to turn its attention to workers in other sectors with its Cowork product, and Google is chasing that business too. Google said workers could use its Gemini Enterprise app, which it framed as the "front door for AI for every employee," to create agents without writing a line of code. The company also announced Projects, a collaboration platform designed for workers to work alongside their colleagues as well as agents. Google said the tool brings together information from sources such as Workspace, Microsoft Corp.'s OneDrive and company chats to help agents operate with the proper context. Other offerings from the company are intended to help clients make sure that agents can operate in fields with compliance issues. Google also unveiled new cybersecurity agents that it said clients could use to protect their systems. AI models are identifying a torrent of bugs, but questions are mounting about how they could be exploited without proper safeguards.
