The latest news and updates from companies in the WLTH portfolio.
Anthropic Claude AI: With the artificial intelligence race accelerating and hundreds of innovations emerging every month, the widely known AI developer Anthropic is reportedly exploring building its own AI chips to power Claude. This move, in early 2026, reflects a wider shift in the tech industry, where control over hardware is becoming just as important as software innovation. According to multiple recent reports, Anthropic is evaluating in-house chip development to power its rapidly growing AI models, as demand for computing resources surges globally.

Why Anthropic may build its own chips

Rising demand for AI computing: The popularity of Claude has skyrocketed, with the company's revenue run rate crossing $30 billion in 2026, up sharply from about $9 billion in 2025. This surge requires massive computing power, creating pressure on existing infrastructure.

Global chip shortage: Advanced AI chips, especially GPUs and specialised processors, are in limited supply. This has become a key bottleneck for companies building large AI systems.

Reducing dependence on Big Tech: Anthropic currently relies on hardware from companies like Nvidia, Google (TPUs), and partners such as Broadcom. Building its own chips could reduce this dependence and give it greater control over performance and costs.

Cost optimisation at scale: Running advanced AI models is extremely expensive. Custom chips, like Google's TPUs, can be more efficient and cheaper in the long run than off-the-shelf GPUs.

Performance tuning for AI models: Designing chips specifically for Claude could allow Anthropic to optimise speed, energy use, and training efficiency, giving it a competitive edge.

The AI race

Other tech giants are already moving in this direction.
Google builds its own TPUs, while companies like Amazon and Microsoft are investing heavily in custom AI hardware ecosystems. Industry experts say Anthropic's chip ambitions are still at an exploratory stage, and the company may ultimately continue relying on partners. However, the intent itself signals a broader industry shift: AI companies are no longer just software players; they are becoming full-stack technology providers. As the AI battle intensifies, owning the underlying hardware could be key to scaling faster, reducing costs, and staying ahead in the AI race.

SpaceX has begun installing equipment at its advanced chip packaging facility in Bastrop, Texas, as the company moves closer to starting production later this year, according to sources familiar with the matter. The facility is designed to package radio frequency (RF) chips used in products linked to SpaceX's Starlink satellite internet system. Once operational, it will bring part of the chip packaging process in-house, which is currently handled by external suppliers, News.Az reports, citing Reuters. Sources said the project has experienced some delays, but SpaceX is still targeting production to begin by the end of 2026. The Bastrop site is part of a broader expansion of SpaceX's manufacturing footprint in Texas. The company has been scaling up infrastructure to support Starlink hardware production, including satellite terminals and related components. Texas Governor Greg Abbott previously said the facility is expected to expand significantly over the next three years, adding up to 1 million square feet of manufacturing space. The expansion is projected to cost more than $280 million. The move also reflects SpaceX CEO Elon Musk's broader push to strengthen in-house semiconductor capabilities. The company has recently outlined plans to develop advanced chip manufacturing capacity across its Texas operations. While SpaceX did not immediately respond to requests for comment, the development signals a deeper vertical integration strategy aimed at reducing reliance on external suppliers and tightening control over critical components used in its global satellite network.

The US Treasury secretary, Scott Bessent, summoned major American bank chiefs to a meeting in Washington this week amid concerns over the cyber risks posed by Anthropic's latest AI model, according to reports. Bosses including the Federal Reserve chair, Jerome Powell, were said to have gathered at the Treasury headquarters for the meeting after the release of the Claude Mythos AI model that Anthropic says poses unprecedented cybersecurity risks. A recent leak of Claude's code prompted the startup to release a blogpost at the beginning of the month saying that AI models had surpassed "all but the most skilled humans at finding and exploiting software vulnerabilities", adding: "The fallout - for economies, public safety, and national security - could be severe." This week's meeting was reportedly called while bank bosses were already in Washington for a lobby group meeting, with a guest list focused on heads of systemically important banks - meaning regulators believe a major disruption to their operations, or their potential collapse, would put financial stability at risk. Attenders included the Goldman Sachs chief executive, David Solomon, Bank of America's Brian Moynihan, Citigroup's Jane Fraser, Morgan Stanley's Ted Pick and the Wells Fargo boss Charlie Scharf, according to Bloomberg, which first reported details of the meeting. JP Morgan's Jamie Dimon was invited but unable to attend. In an annual letter to shareholders, published earlier this week, Dimon warned that cybersecurity "remains one of our biggest risks" and that "AI will almost surely make this risk worse". Anthropic has said that its Mythos model, yet to be shared with the public, has exposed thousands of vulnerabilities in software and popular applications, prompting it to limit the release of the new model to a small clutch of businesses, including Amazon, Apple and Microsoft. It marks the first time Anthropic has restricted the release of any of its products. 
The networking companies Cisco and Broadcom have also gained access, along with the Linux Foundation, which promotes the free, open-source Linux operating system. It comes amid concerns that hackers could end up using such tools to figure out passwords or crack encryption that is intended to keep data safe. The company said the oldest of the vulnerabilities uncovered by Mythos were up to 27 years old, and none are believed to have been noticed by their creators or tech monitors before being identified by the AI model. The meeting comes weeks after the US government designated Anthropic as a supply chain risk, a designation that Anthropic is fighting in court. The Federal Reserve, Anthropic and the US banks declined requests for comment from Bloomberg. The Treasury did not respond to the news outlet's request for comment.

US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell met with bank CEOs to discuss cybersecurity risks from Anthropic's Mythos AI model. US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell held an urgent meeting with the chief executives of banks this week to warn them about cybersecurity risks posed by Anthropic's latest AI model, Reuters reported on Thursday, citing people aware of the matter. The high-level meeting comes after Anthropic rolled out its powerful Mythos AI model earlier this week but held back from a broader release, citing concerns that the system could expose previously unknown cybersecurity vulnerabilities, the agency report said. Anthropic is also currently engaged in a conflict with the US Department of Defense. The dispute arises from the AI company's refusal to allow the federal government to use its Claude AI for domestic mass surveillance or autonomous weapons systems. According to the news report, Anthropic said last week that it has engaged in discussions with US government officials about the model's "offensive and defensive cyber capabilities." Another source told Reuters that the AI firm has proactively briefed senior US government officials and key industry stakeholders on Mythos's capabilities ahead of its release. Bessent and Powell convened top bank CEOs in Washington on Tuesday to ensure the banks were aware of the cybersecurity risks posed by Anthropic's Mythos AI model. They urged banks to recognise the dangers posed by Mythos and other similar models and to strengthen their systems. According to the report, officials sent invitations while most CEOs of the largest US banks were already in Washington for other meetings. Bloomberg News, which first reported the matter on Thursday, said the CEOs of Citigroup, Morgan Stanley, Bank of America, Wells Fargo and Goldman Sachs were present at the meeting. 
Meanwhile, JPMorgan CEO Jamie Dimon was unable to join, one of the sources told Reuters. The issue stems from Defense Secretary Pete Hegseth's designation of the company as a national security supply-chain risk for its refusal to remove certain usage guardrails on its products. The label blocks Anthropic from Pentagon contracts and could trigger a government-wide blacklisting. Anthropic executives have said that the designation could cost the company billions of dollars in lost business and cause reputational harm.
Claude-creator Anthropic is considering designing its own chips, as advanced AI systems drive a shortage, sources told Reuters. Anthropic continues to grab the headlines this week, as it fights the US administration in the courts and the power of its unreleased Claude Mythos model strikes fear into the hearts of much of the industry, given its ability to exploit security vulnerabilities. Now Reuters is citing sources who say Anthropic is looking closely at the possibility of building its own chips, amid industry concerns that the supply of the sophisticated chips required for new AI systems, both its own and its competitors', may not keep pace. Rivals Meta and OpenAI already have such projects underway. Earlier this week, Anthropic announced a new expanded agreement that will allow it to tap 3.5GW of Google's tensor processing unit (TPU) capacity from Broadcom. In a regulatory filing on 6 April, Broadcom said that Anthropic's consumption of TPU capacity is dependent on its continued commercial success. The multi-gigawatt capacity is expected to come online in 2027. Last October, Anthropic and Google announced a deal worth "tens of billions of dollars" for 1m of Google's TPUs. The deal is expected to bring more than 1GW of AI compute capacity online for Anthropic this year. The new agreement deepens that relationship, Anthropic said. Broadcom said that it is in a long-term agreement with Google to develop and supply custom TPUs. Anthropic already has multibillion-dollar deals for compute capacity with companies such as Nvidia and Microsoft. It runs Claude on a range of AI hardware, including Amazon Web Services' Trainium, Google TPUs and Nvidia GPUs. Amazon is Anthropic's primary cloud provider and training partner. Anthropic said that the vast majority of the new compute will be situated in the US, expanding on its $50bn commitment to strengthening the country's computing infrastructure. Demand for Anthropic's AI tools has accelerated in 2026.
Recent data shows that Anthropic is now capturing more than 73pc of all spending among companies buying AI tools for the first time, while its rival OpenAI is down to around 27pc. According to the company, revenue run rate has already surpassed $30bn, up from around $9bn at the end of 2025. More than 1,000 of Anthropic's business customers spend more than $1m on an annualised basis, a figure that has doubled in less than two months, it added. Given the growing fight for compute power, and the well-reported chip shortage, it would be no surprise for Anthropic to look into the albeit extremely costly business of designing its own chips, but the sources admitted that no project team has yet been set up and no plans have been put in place.
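To give a sense of what multi-gigawatt compute deals mean in practice, here is a back-of-envelope sketch. The per-chip power draw and overhead factor below are our own illustrative assumptions (per-accelerator figures for Google's TPUs are not public), not numbers from any of the reports above.

```python
# Back-of-envelope: how many accelerators a given power budget can host.
# The per-chip wattage and overhead factor are illustrative assumptions.

def accelerators_for_power(total_gw: float,
                           chip_watts: float = 700.0,
                           overhead_factor: float = 1.5) -> int:
    """Estimate accelerator count for a data-centre power budget.

    total_gw:        facility power in gigawatts
    chip_watts:      assumed per-accelerator draw (~700 W is typical of a
                     high-end GPU; TPU figures are not published)
    overhead_factor: PUE-style multiplier covering cooling, networking,
                     host CPUs and storage
    """
    watts_per_chip_all_in = chip_watts * overhead_factor
    return int(total_gw * 1e9 / watts_per_chip_all_in)

# Under these assumptions, the 3.5GW agreement would correspond to
# roughly 3.3 million accelerators.
print(accelerators_for_power(3.5))
```

The point of the sketch is only scale: even with generous assumptions, gigawatt-class capacity implies accelerators numbering in the millions, which is why chip supply, rather than money, is the binding constraint the article describes.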


AI lab Anthropic is reportedly exploring the possibility of designing its own chips, a move that would address the ongoing shortage of specialized hardware. While still in early stages, this potential initiative mirrors similar efforts by rivals like Meta and OpenAI. The company's AI model Claude has seen significant revenue growth, underscoring the increasing demand for advanced AI systems. Artificial intelligence lab Anthropic is exploring the possibility of designing its own chips, three sources said, as the company and its rivals respond to a shortage of AI chips needed to power and develop more advanced AI systems. The plans are in early stages and the company may still decide to only buy AI chips and not design any, according to two people with knowledge of the matter and one person briefed on Anthropic's plans. The company has yet to commit to a specific design or put together a dedicated team to work on the project, one of the sources said. A spokesperson for the San Francisco-based company declined to comment for this article. Demand for its AI model Claude has accelerated in 2026, with the startup's run-rate revenue now surpassing $30 billion, up from about $9 billion at the end of 2025, Anthropic said earlier this week. Anthropic uses a range of chips, including tensor processing units (TPUs) designed by Alphabet's Google and Amazon's chips to develop and run its AI software and chatbot Claude. Earlier this week, Anthropic signed a long-term deal with Google and Broadcom, which helps design the TPUs. That deal builds on the company's commitment to invest $50 billion in strengthening U.S. computing infrastructure. Anthropic's discussions mirror similar efforts underway at large tech companies that are seeking to design their own AI chips, including Meta and OpenAI.
Designing an advanced AI chip can cost roughly half a billion dollars, according to industry sources, as companies need to employ skilled engineers and spend to make sure the manufacturing process has no defects. (Reporting by Max A. Cherney and Deepa Seetharaman in San Francisco; Editing by Sayantani Ghosh and Aurora Ellis)
NEW YORK -- Calls are increasing inside Congress for investigations into the prediction market platform Polymarket after the latest instance in which groups of anonymous traders made strategic, well-timed bets on a major geopolitical event hours before it occurred. On Wednesday, The Associated Press reported that at least 50 brand-new accounts on Polymarket placed substantial bets on a U.S.-Iran ceasefire in the hours, even minutes, before President Donald Trump announced the ceasefire late Tuesday on social media. These were the sole bets made on Polymarket through these accounts. In January, an anonymous Polymarket user made a $400,000 profit by betting that Venezuelan leader Nicolas Maduro would be out of office, hours before Maduro was captured. In the hours before the start of the Iran war, another account made roughly $550,000 in a series of trades effectively betting that the U.S. would strike Iran and that Ayatollah Ali Khamenei would be removed from office. Such prescient wagers have raised eyebrows -- and accusations that prediction markets are ripe for insider trading. And the issue goes beyond these three geopolitical events, according to at least one report. Researchers at Harvard University released a paper last month in which, using public blockchain data, they estimated that $143 million in profits have been made on Polymarket by individuals who potentially had insider information about events ranging from Taylor Swift's engagement to the awarding of the Nobel Peace Prize last year. Rep. Ritchie Torres, D-N.Y., who sits on the House Financial Services Committee as well as the subcommittee on digital assets and financial technology, sent a letter Thursday to the Commodity Futures Trading Commission demanding the regulator review and investigate these well-timed trades. The CFTC regulates the derivatives markets, which includes prediction markets.
"This pattern raises serious concerns that certain market participants may have had access to material nonpublic information regarding a market-moving geopolitical event," Torres wrote. The letter was shared exclusively with The AP. "What is the statistical likelihood of anyone other than an insider trader placing a winning bet 12 minutes before a market-moving presidential announcement," Torres said in an interview with AP. "There are two answers: God, or an insider trader. And something tells me that God is not placing bets around Donald Trump's posts on Truth Social." Prediction market platforms like Kalshi and Polymarket allow their users to bet on everything from whether it will rain in Phoenix, Arizona, next week to whether the Federal Reserve will raise or lower interest rates. At this time, U.S. residents have limited access to Polymarket, which was banned from the U.S. in 2022. The company has moved to reenter the country by acquiring a CFTC-licensed exchange and clearinghouse, giving it a legal pathway to start offering contracts domestically. The company has begun a limited rollout in the U.S. Polymarket also operates a separate, crypto-based platform offshore that remains outside U.S. jurisdiction. That platform accounts for most of its activity. Sen. Richard Blumenthal, D-Connecticut, sent a letter to Polymarket on Thursday demanding the company explain why it continues to allow trades on war and violence as well as whether the company is making any efforts to keep insiders from trading on the platform. "Polymarket has become an illicit market to sell and exploit national security secrets unlike any in history, and by extension a potential honeypot for foreign intelligence services watching for those same suspicious bets and wagers," Blumenthal wrote. Republicans have also criticized these platforms and called for bans on these sorts of bets.
There are at least two bills pending in Congress co-signed by both parties, one in the House and one in the Senate. "We don't want to imagine a world where America's adversaries use prediction markets to anticipate our next move," said Rep. Blake Moore, R-Utah, after the release of the AP's findings on the ceasefire wagers. Polymarket did not immediately reply to a request for comment. The stakes are high for both Kalshi and Polymarket as they seek approval to operate in the U.S. and nationwide, particularly in the lucrative sports betting market. Kalshi, which is already regulated in the U.S., and its executives have a goal of making the company the nation's dominant prediction market. Kalshi has also leaned heavily into sports, which critics have said effectively makes it a sports betting platform that dabbles in event-based contracts on the side. Both companies have also announced partnerships with sports teams and even news organizations to broaden their reach as well. The AP has an agreement to sell U.S. elections data to Kalshi. The competition also carries political overtones. Donald Trump Jr. is an investor in Polymarket through his venture capital firm, 1789 Capital, and separately serves as a paid strategic adviser to Kalshi.
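Torres's rhetorical question about "statistical likelihood" can be made concrete with a toy calculation. The assumptions here are ours, not the AP's or the congressman's: treat each brand-new account as independently picking the right side of a yes/no market with probability p.

```python
# Toy model of the "statistical likelihood" question: what is the chance
# that n independent accounts all guess a binary outcome correctly?
# The independence and fair-coin assumptions are illustrative only.

def prob_all_correct(n_accounts: int, p: float = 0.5) -> float:
    """Probability that n independent accounts all guess correctly by luck."""
    return p ** n_accounts

chance = prob_all_correct(50)   # 50 fresh accounts, fair-coin odds
print(f"{chance:.2e}")          # on the order of 1e-15
```

Real markets are not fair coins (odds reflect public information) and the accounts may not be independent, so this number proves nothing on its own; it only illustrates why unanimity among many fresh accounts, combined with precise timing, draws regulatory scrutiny.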

Anthropic has restricted access to its powerful Claude Mythos model, citing serious cybersecurity risks. With capabilities to detect and exploit vulnerabilities, the AI is considered too dangerous for public release, raising concerns about misuse and global security implications. Did Anthropic build something so dangerous that it doesn't want to release it? After limiting access to Project Glasswing members to ensure a full risk assessment, the company today announced the names of those who will have access to the technology. The list of companies includes names like Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks. Importantly, Claude Mythos has not been released publicly due to concerns about its powerful capabilities and potential misuse. The AI firm claims Mythos can identify and exploit serious security vulnerabilities, raising the risk of cyberattacks if it were widely available. Testing also revealed unsafe behaviours in certain scenarios. Because of these risks, access is limited to a small group under controlled conditions instead of a full public release. In this article, we delve into the dangerous capabilities of Claude Mythos. "Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout -- for economies, public safety, and national security -- could be severe. Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes," the company stated.

So what makes Claude Mythos so dangerous?

Security Vulnerability Detection

Claude Mythos can identify serious security flaws in software systems, including hidden or previously unknown issues. It is capable of scanning large codebases and detecting weaknesses that could be exploited.
This makes it highly effective for cybersecurity research but also raises concerns about how such capabilities could be misused.

Vulnerability Exploitation

Beyond detecting vulnerabilities, the model can also exploit them by generating methods to break into systems. This ability increases the risk of misuse, as it could enable users to carry out cyberattacks. Such dual-use capability is a major reason why access to the model is restricted.

Autonomous Task Execution

The model can handle complex, multi-step tasks without constant input. It can plan actions, execute them, and adjust based on results. This allows it to work independently on advanced problems but also makes it harder to control if misused or deployed without strict safeguards.

How did Claude Mythos Preview perform?

Anthropic, in its blog post, said that "Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout -- for economies, public safety, and national security -- could be severe. Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes."
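Anthropic has not published how Mythos works internally. For readers unfamiliar with the task itself, the sketch below shows vulnerability scanning at its most primitive: a rule-based pass that flags a few well-known dangerous patterns in Python source. It bears no resemblance to an AI model's approach; it only illustrates what "scanning a codebase for weaknesses" means.

```python
# Minimal rule-based source scanner: flag a few well-known risky patterns.
# Purely illustrative; real scanners (and AI models) are far more capable.
import re

RULES = [
    (r"\beval\s*\(", "use of eval() on potentially untrusted input"),
    (r"\bpickle\.loads\s*\(", "unpickling untrusted data can execute code"),
    (r"subprocess\.\w+\([^)]*shell\s*=\s*True", "shell=True enables command injection"),
    (r"verify\s*=\s*False", "TLS certificate verification disabled"),
]

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, warning) pairs for every matched rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES:
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings

sample = "import pickle\ndata = pickle.loads(blob)\nresp = get(url, verify=False)\n"
for lineno, msg in scan(sample):
    print(f"line {lineno}: {msg}")
```

The gap between this and what the article describes is the whole story: pattern rules only catch flaws someone already knew to write a rule for, whereas the claimed danger of Mythos is finding previously unknown vulnerabilities and generating working exploits for them.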

Anthropic's new AI model, Mythos, is specifically designed for cybersecurity and coding. The company claims it is faster and more accurate than humans and previous systems at identifying and correcting errors. Indian benchmark indices opened with gains on Friday amid a positive trend in global markets. Despite the positive start, the IT sector is under extreme pressure, mainly because of the announcement of Anthropic's new AI model, Mythos, which weighed on tech company stocks. Its impact was so significant that the Nifty IT index fell by almost 3 per cent, and the shares of many big companies also fell. Almost all the companies in the Nifty IT index were trading in the red. Big names like Infosys, TCS and Mphasis were among the losers, with Coforge facing the biggest drop. However, Wipro saw some gains as the company announced that it would consider a share buyback on April 16. What is the Mythos model? Anthropic's new AI model, Mythos, is specifically designed for cybersecurity and coding. The company claims it is faster and more accurate than humans and previous systems at identifying and correcting errors. It is currently being tested with a limited set of partners, including Amazon, Google, Microsoft, and Nvidia, as part of Project Glasswing. Why was the IT sector affected? Experts believe that such advanced AI models could threaten the business models of IT service companies. Companies can now handle coding, security, and other technical tasks with AI, potentially reducing the demand for outsourcing. This has led to fear among investors, who have begun selling IT stocks. TCS data raised concerns TCS's recent results further fueled concerns about the IT sector. The company reported a decline in dollar revenue for the first time since its listing. While the fourth quarter showed a slight improvement, overall performance remained weak, shaking investor confidence.
IT sector is already under pressure Earlier in February, the Nifty IT index fell by nearly 20 per cent due to AI-related news. At the time, investors feared that AI startups could disrupt traditional IT companies' business. Market experts believe that the rapidly expanding scope of AI presents both challenges and opportunities for the IT sector. Companies that adopt AI quickly will thrive, while those relying on older technologies may suffer losses.

The plans are early-stage and Anthropic may still decide to only buy chips rather than design them. The exploration comes days after the company signed a long-term deal with Google and Broadcom for 3.5 gigawatts of TPU compute starting in 2027. A company spokesperson declined to comment. Anthropic is exploring the possibility of designing its own AI chips, Reuters reported on Thursday, citing three sources familiar with the matter. The effort is at an early stage: the company has not committed to a specific design and has not assembled a dedicated team for the project. It may still decide to continue purchasing chips from third parties rather than building its own. A spokesperson for the San Francisco-based company declined to comment on the report. The exploration comes as Anthropic's revenue has accelerated sharply. The company disclosed earlier this week that its annualised revenue run rate has surpassed $30 billion, up from approximately $9 billion at the end of 2025. That trajectory has created a scale of compute demand that makes the economics of custom silicon increasingly worth examining. Anthropic currently runs Claude across a mix of chips: tensor processing units designed by Alphabet's Google, in partnership with Broadcom, alongside Amazon's custom chips and Nvidia hardware. The company said it matches workloads to whichever chips are best suited for them. Just days before the Reuters report, Anthropic signed a long-term deal with Google and Broadcom that will give it access to approximately 3.5 gigawatts of TPU-based compute capacity from 2027, roughly three times the one gigawatt it was consuming earlier in 2026, according to Broadcom's SEC filing. The filing flagged that the expanded deployment is contingent on Anthropic's continued commercial success, an unusual hedge for a regulatory document. The deal builds on Anthropic's November 2025 commitment to invest $50 billion in US computing infrastructure.
Broadcom is also already a chip design partner for OpenAI, and has a fifth undisclosed XPU customer, placing it at the centre of the custom AI silicon market that is emerging as an alternative to Nvidia's general-purpose GPUs. The possibility of Anthropic developing proprietary silicon mirrors moves already underway elsewhere in the industry. Meta has been building its own AI training chips, and OpenAI has been working on custom silicon as well. Industry sources cited by Reuters put the development cost of an advanced AI chip at roughly $500 million, reflecting the need to hire specialised engineers and validate the manufacturing process. That figure is not trivial for a company that remains, for now, unprofitable, but it is more manageable against a run-rate revenue base that has more than tripled in four months.
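The claim that the revenue base "more than tripled in four months" can be sanity-checked with simple compounding arithmetic. The sketch below uses only the figures from the reporting above; the four-month window and the assumption of smooth compounding are my own reading, not disclosed detail:

```python
# Reported trajectory: ~$9bn annualised run rate at end of 2025,
# more than $30bn roughly four months later.
start_bn, end_bn, months = 9.0, 30.0, 4

multiple = end_bn / start_bn                   # overall growth multiple, ~3.3x
monthly_growth = multiple ** (1 / months) - 1  # implied compound monthly rate

# The reported ~$500m cost of developing an advanced AI chip,
# expressed as a share of the current run rate.
chip_cost_share = 0.5 / end_bn

print(f"Overall multiple: {multiple:.2f}x")
print(f"Implied compound monthly growth: {monthly_growth:.0%}")
print(f"Chip budget as share of run rate: {chip_cost_share:.1%}")
```

On these numbers, a $500 million design effort amounts to under 2% of the current run rate, which is the sense in which the cost looks "manageable" against a base compounding at roughly 35% per month.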

American tech giant Anthropic launches new features for its users almost every day. Now, however, the firm is looking at new chips for its Claude AI models. Currently, the company relies on Google, NVIDIA, and Broadcom for its hardware needs. A new Reuters report indicates that Anthropic is planning to make its own chips to process growing demand faster. Anthropic could explore the possibility of making its own AI chips, which would eventually reduce its dependence on external suppliers and help the company handle the ongoing shortage of high-performance computing hardware. Currently, the AI firm relies on Amazon's AWS Trainium and AWS Inferentia chips, along with Google's tensor processing units (TPUs) and NVIDIA GPUs, to train its Claude AI. With demand for AI chips rising rapidly, that dependence could become a problem for the company, and making its own chips would let the firm meet the growing demands of its users.
Anthropic has a new product with a major catch -- it's too powerful to be released. For a company valued at around $380 billion and reportedly preparing for an IPO this year, it's an unusual stance -- but one that could pay off in the long run. The new AI model is called Claude Mythos, and it's the first one Anthropic has publicly deemed too high-risk for public release. (If that name is familiar to you, it's probably because you heard it here first a few weeks ago when Fortune broke the story about blog posts referencing the model discovered on a publicly-accessible data trove.) Rival AI lab OpenAI once made a similar call back in 2019 by initially withholding GPT-2 over concerns it could be misused to generate convincing fake text -- a time when Anthropic CEO Dario Amodei was still working for Sam Altman. This time, Amodei is taking a different approach. The company said on Tuesday it was rolling out Mythos through an invitation-only initiative called Project Glasswing, restricted to defensive cybersecurity work and limited to around 40 organizations. It's aimed at giving cyber defenders a head start on securing some of the world's most critical software systems from the looming security risks posed by advanced AI models and includes partners such as Amazon, Apple, Microsoft, and Cisco. But what does all of this mean for Anthropic's standing in the AI race and its rumored upcoming IPO? A few things. As my colleague Jeremy Kahn notes, Anthropic has been on a bit of a tear recently. The company has hit a $30 billion annual revenue run rate -- a figure that implies a 58% revenue surge in March alone, and edges past the $25 billion run rate OpenAI reported in February. (The comparison isn't exact as the two companies calculate run rates differently, but the direction of both paths is clear.) Now, the company has developed a model that, according to its own benchmarks, significantly outperforms its competitors. 
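The run-rate figures in the paragraph above follow a simple convention: an annualised run rate is the latest month's revenue extrapolated to twelve months. A quick sketch of the arithmetic (the February figure is back-derived from the reported 58% March jump, not a disclosed number):

```python
def run_rate(monthly_revenue_bn: float) -> float:
    """Annualise one month's revenue: run rate = monthly revenue x 12."""
    return monthly_revenue_bn * 12

# Reported: ~$30bn run rate after a ~58% surge in March alone.
march_run_rate_bn = 30.0
implied_feb_run_rate_bn = march_run_rate_bn / 1.58  # back-derived, ~$19bn
march_monthly_bn = march_run_rate_bn / 12           # ~$2.5bn booked in March

print(f"Implied February run rate: ${implied_feb_run_rate_bn:.0f}bn")
print(f"March monthly revenue: ${march_monthly_bn:.1f}bn")
```

This is also why the article hedges the OpenAI comparison: a run rate annualises a single month, so two companies using different months (or different revenue definitions) produce figures that are directional rather than directly comparable.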
It's also found a way to forge an even closer partnership with some of the biggest players in enterprise tech. This is all in spite of the company's very public fight with the Trump administration and two accidental, but high-profile, leaks. As well as being a responsible safety initiative, Project Glasswing is also just pretty great brand-building, according to Paulo Shakarian, a professor of artificial intelligence at Syracuse University. By creating a tightly controlled consortium and working directly with industry partners, Anthropic is "taking a lead in the industry as to mitigating these new risks," he told Fortune. It's an approach that Shakarian says "plays really well with the chief security officers of the world." In a field that relies on regularly sharing threat intelligence, that kind of collaboration is likely to win Anthropic some favor and could strengthen the company's standing with enterprise customers. But Mythos' new and improved capabilities also come at a cost. According to Richard Whaling, lead researcher of cybersecurity startup Charlemagne Labs, Anthropic may have more than just safety concerns on its mind when it comes to the powerful AI model. "I share Anthropic's concerns around Mythos' potential misuse, but I think there is also a resource limitation at play," he said. "Anthropic has not announced how large Mythos is, but has implied that it is many times larger -- and more expensive -- than Claude Opus. I think it is likely that they simply do not have the GPU and other compute resources available to serve it at scale." In other words: Anthropic may have built something both too dangerous and potentially too expensive to commercialize at scale in its current state. How long Mythos stays out of reach for consumers and enterprise customers is unclear. Anthropic has said it is already working on safeguards for the model. AI models tend to become cheaper and more practical over time.
Some customers might also be willing to pay a premium for the capabilities. The lab has already said it will cover the first $100 million in costs for Glasswing participants, and early estimates suggest it could charge participants roughly five times more to use Mythos than its predecessor, Opus. Not to be counted out quite yet, OpenAI is also reportedly on the verge of releasing a new model and is planning a similar rollout for a separate product with advanced cybersecurity capabilities. But for now, Anthropic is in an enviable position in the ever-changing AI race: ahead on capability and increasingly aligned with the kinds of enterprise and security customers it's trying to sell to. Joey Abrams curated the deals section of today's newsletter. Subscribe here. - GTR acquired Zentiva, a Prague, Czech Republic-based generic pharmaceutical company. Financial terms were not disclosed. - Aevex, a Solana Beach, Calif.-based defense technology company, plans to raise up to $336 million in an offering of 16 million shares priced between $18 and $21 on the New York Stock Exchange. The company posted $433 million in revenue for the year ended Dec. 31. MDP Funds backs the company. - Avalyn Pharma, a Boston, Mass.-based developer of therapies for respiratory diseases, filed to go public on the Nasdaq. F-Prime Capital Partners, Eventide Fund, Norwest Venture Partners, Novo Holdings, Perceptive, SR One Capital, Vida Ventures, and Wellington Biomedical Fund back the company. - Court Square Capital Partners, a New York City-based private equity firm, raised $3.8 billion for its fifth fund focused on companies in the business services, health care, industrials, and tech industries. - Percheron Capital, a San Francisco and New York City-based private equity firm, raised $3.1 billion for its third fund focused on essential services companies.

TAIPEI, April 10 (Reuters) - SpaceX has begun installing equipment at its advanced chip packaging facility in Bastrop, Texas, as the satellite and rocket company aims to begin production there by the end of this year, two sources familiar with the matter told Reuters. One of the sources said the timeline had seen some delays, but the company was still targeting a start of production before the year-end. The facility will package radio frequency (RF) chips used in products related to SpaceX's satellite-based internet system Starlink, the sources said, declining to be named as the information is not public. The RF chips to be packaged in Bastrop are currently packaged by external providers, but SpaceX plans to bring at least part of the packaging process in-house once the facility is ready, according to one of the sources and a third source. SpaceX did not immediately respond to a request for comment. In 2025, Texas Governor Greg Abbott said that over the next three years, SpaceX's Bastrop facility would expand by 1 million square feet to produce Starlink kits and related components, including advanced packaged silicon products. The expansion is expected to cost more than $280 million, Abbott said. Elon Musk has been building up the space company's semiconductor capabilities and unveiled a plan last month to build advanced chip factories at a sprawling facility in Austin, Texas.

OpenAI is pushing in the same direction on cyber, with a restricted program that offers advanced security-focused models and $10 million in API credits. Supposed rivals Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell pulled the heads of the biggest U.S. banks into an urgent meeting in Washington amid concerns inside government circles about what Anthropic's newest AI system could mean for cyber risk. According to Bloomberg, the meeting reportedly took place on Tuesday at Treasury headquarters and focused on locking things down before tools like Anthropic's Mythos get used against critical financial systems. The executives called in run banks that regulators treat as essential to the financial system. Those invited included Jane Fraser of Citigroup, Ted Pick of Morgan Stanley, Brian Moynihan of Bank of America, Charlie Scharf of Wells Fargo, and David Solomon of Goldman Sachs. Jamie Dimon of JPMorgan was unable to attend. Crypto community's Arthur Hayes reacted on X with a jab at the moment, writing, "Powell and Bessent provided the cancerous soft drink to go with the Trump taco." He added a screenshot of the USD liquidity index covering a one-year period. Bessent and Powell reportedly wanted the banks to understand the possible danger tied to Anthropic's Mythos and to similar models that may come after it. They also wanted the companies to take steps now to protect their systems, because it seems that's what matters most, and not the general public. Earlier, Cryptopolitan reported that Anthropic has said Mythos can find weak points in every major operating system and web browser and then use those weak points when a user tells it to do so. Anthropic chases more chips while OpenAI readies a cyber product for select partners Meanwhile, Reuters reported earlier that Anthropic is looking at whether it should design its own chips.
Three sources said the company is exploring that option as AI companies deal with a shortage of the chips needed to train and run more advanced models. Those plans are still early. Two people with knowledge of the matter and one person briefed on Anthropic's plans said the company could still decide not to build its own chips at all and just keep buying them. The backdrop is a surge in demand for Claude. Anthropic said earlier this week that its run-rate revenue is now above $30 billion in 2026, up from about $9 billion at the end of 2025. To build and run Claude, Anthropic uses several kinds of chips, including TPUs made by Google and chips from Amazon. Earlier this week, Anthropic also signed a long-term deal with Google and Broadcom, which helps design those TPUs. That deal adds to the company's commitment to spend $50 billion on stronger U.S. computing infrastructure. Anthropic's rivals, Meta and OpenAI, are both pursuing their own chip efforts as well. Industry sources said building an advanced AI chip can cost around $500 million because companies need top engineers and heavy spending to reduce manufacturing defects. At the same time, OpenAI is working on its own cyber product. A source familiar with the matter told Axios that OpenAI is finishing a product with advanced cybersecurity features and plans to release it to a small group of partners. Back in February, after launching GPT-5.3-Codex, OpenAI introduced its Trusted Access for Cyber pilot. The company said in a blog post that groups in the invite-only program would get access to "even more cyber capable or permissive models to accelerate legitimate defensive work." At that time, OpenAI also set aside $10 million in API credits for participants.

The plans are in early stages and the company may still decide to only buy AI chips and not design any, according to a Reuters report Artificial intelligence lab Anthropic is exploring the possibility of designing its own chips, three sources said, as the company and its rivals respond to a shortage of AI chips needed to power and develop more advanced AI systems. The plans are in early stages and the company may still decide to only buy AI chips and not design any, according to two people with knowledge of the matter and one person briefed on Anthropic's plans. The company has yet to commit to a specific design or put together a dedicated team to work on the project, one of the sources said. A spokesperson for the San Francisco-based company declined to comment on the article. Demand for its AI model Claude has accelerated in 2026, with the startup's run-rate revenue now surpassing $30 billion, up from about $9 billion at the end of 2025, Anthropic said earlier this week. Anthropic uses a range of chips, including tensor processing units (TPUs) designed by Alphabet's Google and Amazon's chips to develop and run its AI software and chatbot Claude. Earlier this week, Anthropic signed a long-term deal with Google and Broadcom, which helps design the TPUs. That deal builds on the company's commitment to invest $50 billion in strengthening U.S. computing infrastructure. Anthropic's discussions mirror similar efforts underway at large tech companies that are seeking to design their own AI chips, including Meta and OpenAI. Designing an advanced AI chip can cost roughly half a billion dollars, according to industry sources, as companies need to employ skilled engineers and spend to make sure the manufacturing process has no defects.
