The latest news and updates from companies in the WLTH portfolio.
Billionaire CEO Evan Spiegel said in a memo that Snap will eliminate 16 percent of its workforce.

Artificial intelligence was linked to an estimated 50,000 layoffs in 2025, and this year alone Amazon, Atlassian, Pinterest, Block, and Fiverr have announced AI-linked layoffs. Now you can add Snap to the list. In a memo to Snap employees posted on Wednesday, billionaire CEO Evan Spiegel said the company is laying off about 1,000 employees, or 16 percent of its workforce. As part of the cuts, 300 open roles have also been eliminated. Spiegel told North American employees to work from home on Wednesday and said they would learn imminently whether they were affected. The memo cited the importance of artificial intelligence, and Spiegel said the company would cut its annual costs by $500 million by the end of the year:

"Last fall, I described Snap as facing a crucible moment, requiring a new way of working that is faster and more efficient, while pivoting towards profitable growth. ... While these changes are necessary to realize Snap's long-term potential, we believe that rapid advancements in artificial intelligence enable our teams to reduce repetitive work, increase velocity, and better support our community, partners, and advertisers. We have already witnessed small squads leveraging AI tools to drive meaningful progress across several important initiatives, including Snapchat+, enhanced ad platform performance, and efficiency improvements in our Snap Lite infrastructure."

The company has said it uses AI to generate code and improve efficiency, but it's worth noting that activist investor Irenic Capital Management, which holds 2.5 percent of the company, called on Snap last week to make cuts and better use AI, according to Reuters. It's also worth mentioning that Snap's much-publicized $400 million partnership with AI firm Perplexity has fallen through, according to tech reporter Alex Heath's Sources newsletter. Had the deal gone through, Perplexity would have given Snap a combination of cash and equity to integrate Perplexity's AI search into the Snapchat app. If nothing else, AI eliminating human jobs is no longer a purely hypothetical threat but a grim reality for workers in the tech sector.
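For scale, the reported figures imply Snap's total headcount. A quick back-of-the-envelope check using only the numbers above; the per-role savings figure is a crude illustration, since the $500 million target also covers non-headcount costs:

```python
# Sanity check on the reported Snap figures: 1,000 layoffs at 16% of the
# workforce implies total headcount. All inputs come from the article.
laid_off = 1_000
layoff_fraction = 0.16

implied_headcount = laid_off / layoff_fraction
print(f"Implied pre-layoff headcount: ~{implied_headcount:,.0f}")  # ~6,250

# Crude upper bound on savings per eliminated role (including the 300 cut
# open roles); the real target also includes non-headcount costs.
annual_savings = 500_000_000
cut_roles = laid_off + 300
print(f"Savings per eliminated role: ~${annual_savings / cut_roles:,.0f}")  # ~$385k
```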

The NAACP alleges that the more than two dozen gas turbines powering the data center facilities are operating without an air permit and are endangering residents.

xAI, the company famous for producing the chatbot Grok, is being sued by the NAACP over the alleged health impacts its data centers are creating for residents of Mississippi and Tennessee.

What's happening? The NAACP is suing Elon Musk's company xAI, which also owns the platform X, formerly known as Twitter, claiming that the company's use of gas-burning turbines to power data centers violates the Clean Air Act and harms residents of two states. Specifically, the NAACP alleges that the more than two dozen gas turbines powering the data center facilities are operating without an air permit and are endangering residents, per CNBC. The lawsuit against xAI was filed in the Northern District of Mississippi with representation from both Earthjustice and the Southern Environmental Law Center. In the suit, the NAACP notes that the gas power plants are situated in areas with large Black populations.

Why is the lawsuit important? Abre' Conner, the NAACP's Director of Environmental and Climate Justice, wrote in an email that "our right to clean air is not up for negotiation, especially when companies prove expediency, not people, is their priority." Conner went on to explain that "a data center should not be a potential death sentence for a community's health." These gas turbines do produce harmful pollutants, such as nitrogen oxides, particulate matter, and formaldehyde, all of which can have serious health consequences for those exposed to them. People who live near the Memphis-area xAI facilities have repeatedly reported that local air quality has declined and that the air has a terrible odor. Scientists at the University of Tennessee, Knoxville, confirmed that pollutant levels have increased since xAI came to town. "The xAI turbines are leading to a public health crisis in Memphis by releasing ... pollutants known to directly harm the lungs," Austin Dalgo, a South Memphis doctor, told Time. "These emissions pose the greatest risk to our city's most vulnerable residents, including children, the elderly, and individuals with respiratory conditions like asthma and COPD."

What's next? Residents near Memphis have been battling xAI and its polluting facilities for over a year. This lawsuit brings additional attention to the facilities, even as xAI has made countless headlines since being acquired by SpaceX. While the lawsuit unfolds, xAI plans to build more facilities in the area to power its artificial intelligence pursuits. So even if the NAACP is unsuccessful in shuttering these facilities, more protections must be put in place to shield residents from the downsides of the AI boom.

Tech giant Alphabet (GOOGL) could be sitting on a massive payoff from its early investment in Elon Musk's space firm SpaceX. More specifically, new filings reveal that Google owned about 6.11% of SpaceX at the end of 2025, which would be worth roughly $122 billion if the company reaches a $2 trillion valuation in its expected IPO. However, after SpaceX merged with Elon Musk's xAI earlier this year, that stake is estimated to have dropped closer to 5%, or about $100 billion at the same valuation. SpaceX is reportedly targeting a June listing that could raise as much as $75 billion, potentially making it the largest IPO ever. If the company reaches a $2 trillion valuation, even a very small stake could turn investors into billionaires overnight. In addition, Elon Musk, who still owns around 40% of the company, could become the world's first trillionaire. Early investors and employees are also expected to see massive gains, with some analysts noting that those who invested as recently as 2021 could still achieve returns of around 20 times their original investment. Looking back, Google first invested in SpaceX in 2015 as part of a $1 billion funding round that valued the company at just $10 billion. Since then, ownership stakes have gradually declined due to dilution and additional funding rounds, although the value of those stakes has increased significantly. Meanwhile, Alphabet has already benefited from this investment through unrealized gains reported in its earnings, including an $8 billion boost in early 2025 tied to a private holding believed to be SpaceX. Turning to Wall Street, analysts have a Strong Buy consensus rating on GOOGL stock based on 25 Buys and five Holds assigned in the past three months. Furthermore, the average GOOGL price target of $385.46 per share implies 14.8% upside potential.
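The stake arithmetic is easy to reproduce. A back-of-the-envelope sketch using only the percentages and valuations reported above; the post-merger stake estimate and the $2 trillion IPO figure are the article's, not confirmed numbers:

```python
# Back-of-the-envelope check of the stake values reported above.
# All inputs are the article's reported figures, not official filings data.

TARGET_VALUATION = 2_000_000_000_000  # rumored $2T IPO valuation

pre_merger_stake = 0.0611   # Google's reported stake at the end of 2025
post_merger_stake = 0.05    # estimated stake after the SpaceX-xAI merger

print(f"Pre-merger stake value:  ${pre_merger_stake * TARGET_VALUATION / 1e9:.0f}B")   # ~$122B
print(f"Post-merger stake value: ${post_merger_stake * TARGET_VALUATION / 1e9:.0f}B")  # ~$100B

# The 2015 entry point: a $1B round at a $10B valuation. At $2T, the company
# valuation has grown 200x, even as dilution trimmed Google's percentage.
print(f"Valuation growth since 2015: {TARGET_VALUATION / 10_000_000_000:.0f}x")
```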

A leaked memo from OpenAI executive Denise Dresser has revealed a hostile stance toward rival Anthropic. The document accuses Anthropic of inflating its $30 billion revenue run rate by $8 billion through questionable accounting. It also slams the competitor's "fear-based" culture and lack of compute power.

A leaked internal memo from OpenAI's Chief Revenue Officer, Denise Dresser, has revealed the company's aggressive plan to take over the enterprise AI market. The document, first reported by The Verge, shows that the company is no longer content with just leading the race: OpenAI is taking direct aim at one of its biggest rivals, Anthropic, accusing the firm of everything from creative accounting to being built on a culture of "fear."

Accusations of Anthropic's "juiced" revenue

One of the most interesting parts of the memo involves Anthropic's wallet. Reports from Bloomberg recently suggested that Anthropic's annualized revenue was trending over $30 billion, but OpenAI is throwing cold water on those figures. Dresser claimed in the memo that Anthropic uses "accounting treatment that makes revenue look bigger than it is." According to the leak, OpenAI believes Anthropic is overstating its financial situation by roughly $8 billion. The memo alleges that Anthropic "grosses up" its revenue-sharing agreements with giants like Google and Amazon rather than using net revenue figures. For Dresser, this is a tactical attempt by Anthropic to look more successful than it actually is, as both companies reportedly eye initial public offerings.

Anthropic "built on fear"

Dresser didn't stop at the balance sheet. She launched a philosophical attack, stating that Anthropic's entire story is "built on fear, restriction, and the idea that a small group of elites should control AI." This is a sharp jab at Anthropic's "safety-first" branding, which OpenAI now frames as a liability. The memo further argues that Anthropic made a "strategic misstep" by failing to secure enough compute infrastructure. Dresser noted that customers are already feeling the consequences through service throttling and lower reliability. By contrast, she claims OpenAI acted faster on the "compute curve," giving it a structural advantage that Anthropic simply cannot match.

Tactical errors in a platform war

OpenAI also criticized Anthropic's heavy focus on coding assistants. While Dresser acknowledged that this gave Anthropic an "early wedge" in the market, she warned that "you do not want to be a single-product company in a platform war." The memo outlines a shift in OpenAI's own strategy to capitalize on these perceived weaknesses: OpenAI is working to displace its rival in environments where Anthropic used to feel safe, with moves like expanding reach via Amazon's AWS Bedrock.
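The "grossing up" allegation is easier to follow with numbers. Below is a minimal illustrative sketch using only the figures in the reporting; the actual revenue-share split with Google and Amazon is not public, so the per-dollar mechanics are assumptions:

```python
# Illustration of "gross" vs. "net" revenue recognition using the memo's
# figures. The real revenue-share terms are not public; only the headline
# numbers below come from the reporting.

reported_run_rate = 30e9       # Bloomberg-reported annualized revenue
alleged_overstatement = 8e9    # amount OpenAI's memo claims is "grossed up"

net_run_rate = reported_run_rate - alleged_overstatement
print(f"Implied net run rate: ${net_run_rate / 1e9:.0f}B")  # ~$22B

# "Grossing up" means booking the full customer payment as revenue and the
# cloud partner's cut as a cost, rather than booking only the retained share.
customer_payment = 100.0       # hypothetical $100 of usage sold via a partner
partner_share = 8 / 30         # implied pass-through fraction, for illustration
gross_revenue = customer_payment                      # gross method
net_revenue = customer_payment * (1 - partner_share)  # net method
print(f"Gross: ${gross_revenue:.2f}, Net: ${net_revenue:.2f}")
```

Under the net method, only the retained share counts as revenue, which is why the memo claims the headline run rate overstates the business by roughly $8 billion.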

Capability of 'Starship V3' engines verified on the ground, drawing attention ahead of next year's Artemis III launch

[Photo: A static fire test of 'Super Heavy', the first-stage booster of SpaceX's Starship, conducted on the 15th (local time) at the Starbase launch site in Texas. Provided by SpaceX]

[Photo: An engine static fire test of the second stage, the 'Starship spacecraft', conducted at the Starbase site on the 14th (local time). Provided by SpaceX]

SpaceX, the U.S. private space company, has completed static fire tests of a new variant of Starship, the largest launch vehicle ever built for humanity. The new Starship, whose engine thrust is much stronger than that of previous versions, will be launched into Earth orbit next month. This twelfth test launch, counting earlier variants, is highly significant: Starship is scheduled to rehearse in Earth orbit in the middle of next year, meeting the Artemis III crew to operate as a lunar lander. If next month's test launch succeeds, it will brighten prospects not only for next year's Artemis III launch but also for the Artemis IV launch, aimed at 'human presence on the lunar surface', in 2028.

On the 15th (local time), SpaceX announced via X that it had completed a static fire test of Starship (124.4 m), the largest launch vehicle the company has developed, at the Starbase launch site in Texas. The previous day's test targeted the second stage, the 'Starship spacecraft'; this day's test targeted the first stage, 'Super Heavy'. A static fire test is a procedure in which a rocket is firmly secured on the ground and its engines are ignited to verify thrust and stability.

SpaceX has launched Starship a total of 11 times since 2023. Nevertheless, there was a reason to conduct a static fire test of the kind typically performed on rockets flying for the first time: the Starship to be flight-tested for the twelfth time next month will be equipped with a new engine, 'Raptor 3'. For this reason, SpaceX has nicknamed the new vehicle 'Starship V3'. The transport capability of Starship fitted with the new engines is expected to improve dramatically: it will be able to place a total payload of 100 tons into low Earth orbit, whereas previous variants were limited to 35 tons. The new Starship will thus be able to carry far more satellites at once and disperse them into Earth orbit, greatly reducing the cost of satellite launches.

There is another reason this test flight draws attention beyond improved transport performance: Starship is slated to serve as a lunar lander. In the middle of next year, Starship is to rendezvous in Earth orbit with the NASA crewed Artemis III spacecraft, scheduled to launch around the same time, and perform a docking maneuver to couple the two vehicles. The plan is then for the astronauts aboard Artemis III to transfer to the lunar lander. If this trial goes well, Artemis IV, which will send two people to the lunar surface at the end of 2028, can also launch as scheduled. A successful Starship test flight next month would give a green light to these NASA plans. Conversely, if a serious problem such as an in-flight explosion occurs, delays to the timeline for a human lunar landing cannot be ruled out. SpaceX also plans to use Starship in the future as a transportation system connecting Earth and Mars.
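For a sense of scale, the payload jump is straightforward to quantify. An illustrative calculation using only the reported capacities; launch costs are not given, so the cost line assumes a roughly flat per-launch cost, as the article's cheaper-satellites claim implies:

```python
# Relative payload improvement of 'Starship V3' per the reported figures.
# Launch prices are not given, so only the capacity ratio is computed here.
v3_payload_tons = 100   # reported LEO capacity with the Raptor 3 engines
old_payload_tons = 35   # reported limit of previous variants

ratio = v3_payload_tons / old_payload_tons
print(f"Payload to LEO improves ~{ratio:.1f}x per launch")  # ~2.9x

# If per-launch cost stayed roughly flat, cost per ton delivered would fall
# by the same factor, which is the mechanism behind the cheaper-satellite claim.
print(f"Implied cost-per-ton reduction: ~{1 - 1/ratio:.0%}")  # ~65%
```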

SINGAPORE - Organisations in Singapore are being urged to strengthen their cybersecurity measures, days after artificial intelligence company Anthropic began testing a frontier model that is reportedly able to break existing software.

Immediate mitigation measures include applying software patches to all critical and high-severity vulnerabilities, implementing multi-factor authentication on all interfaces and gateways, and reviewing user permissions to remove unnecessary access rights, the Cyber Security Agency of Singapore (CSA) said in an advisory on April 15. "Frontier AI models can reportedly reduce the time taken to identify vulnerabilities and engineer exploits - cutting short the duration from months to hours," said CSA. The agency added that such models are capable of analysing billions of lines of code to identify weaknesses, and of conducting security analysis at speeds that far outpace manual review. "However, the same capability could also be misused by cyber threat actors to accelerate vulnerability exploitation and the development of malicious capabilities," it added.

While there are no indications that such capabilities are currently being misused, the agency said the advisory is meant to help organisations plan ahead to guard against such risks. Still, companies should immediately patch critical vulnerabilities in internet-facing systems, which could cause widespread impact on company systems if compromised. "These assets face the greatest exposure to automated attacks and present the highest risk of widespread impact if compromised," said CSA. Access to all internet-facing development and test environments should also be strictly controlled, or disconnected from the internet, said the agency. User permissions should be reviewed so that access rights are granted only to those who need them for their job function, and dormant and unused work accounts should be deleted.

CSA's advisory comes days after news broke earlier in April that Anthropic has begun testing its latest AI model with a group of around 50 firms, instead of launching it for public use. The Claude Mythos model is reportedly able to autonomously surface vulnerabilities in software systems and generate code to exploit flaws. Anthropic said the model has found vulnerabilities in every major browser and operating system. "Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely," said Anthropic in a statement on its website. "The fallout - for economies, public safety, and national security - could be severe."

In the longer run, CSA has also urged organisations to continuously monitor critical attack pathways such as network traffic and user behaviour, and to focus surveillance on high-risk activity on privileged accounts and access to sensitive systems. To shorten the time needed to deploy security updates, companies are also advised to streamline approval processes and pre-test security patches in isolated environments. "AI-powered attacks can weaponise newly disclosed vulnerabilities within hours of publication, making rapid patch deployment critical to preventing mass exploitation," said CSA. To pick up on vulnerabilities quickly, the authorities also called for companies to use AI tools to continuously scan for misconfigurations and weak credentials across their IT infrastructure.

"Frontier AI models represent a major advancement in enhancing cybersecurity capabilities but there are also risks involved," said CSA. "Organisations should take proactive steps to raise cyber hygiene standards and strengthen overall cyber defence posture to protect themselves against risk of attacks from frontier AI models."

President Trump firmly stated that there "should be" strict government oversight of AI development, including an emergency kill switch.

The premise that AI is dangerous has been established for a while now. Whether from AI godfather Yann LeCun or Anthropic CEO Dario Amodei, urgent calls have been made for the world's AI companies to develop this vastly capable technology responsibly. Now it is the turn of US President Donald Trump, who warns that AI systems should have some sort of 'kill switch'. This government-mandated kill switch for advanced AI systems should, according to Trump, aim to protect humanity from the potential existential threats posed by this rapidly evolving technology. No specific details were provided on how the proposed kill switch would work, who would control it, or the exact conditions for its activation. The statement nevertheless adds political momentum to calls for stronger AI governance, something the Indian government pushed for at the AI Impact Summit 2026 held earlier this year. Trump's remarks come at a time when global concerns over unchecked AI development continue to escalate. With powerful new models demonstrating unprecedented capabilities in both defensive and offensive applications, the US President stressed the urgent need for safeguards that could instantly shut down dangerous AI systems if they spiral out of control.

Why Trump calls for an AI kill switch

In an interview with Fox Business Network's "Mornings with Maria", President Trump firmly stated that there "should be" strict government oversight of AI development, including an emergency kill switch. He warned that unchecked AI advancement could pose a serious threat to humanity's existence. Trump also acknowledged AI's dual nature, saying it could either destabilise or revolutionise the banking system. "It could also be the kind of technology that allows greatness in the banking system, makes it better and safer and more secure," he remarked.

The proposal comes just as American AI firms are developing models so powerful that they themselves fear releasing them to the public. Industry experts are concerned about Anthropic's latest Claude Mythos AI model, which has reportedly demonstrated extraordinary capabilities in identifying software vulnerabilities. In its preview version, the model reportedly found a 27-year-old bug in OpenBSD, an operating system often described as the world's most secure, and detected thousands of zero-day vulnerabilities across both open- and closed-source software. Anthropic's management has clarified that such capabilities could be weaponised against the banking sector's often outdated legacy systems, raising fears of large-scale financial cyberattacks. To mitigate these concerns, Anthropic has already partnered with over 40 major companies, including Apple and Amazon, to use Mythos primarily for strengthening cybersecurity defences. Chasing the competition, OpenAI has announced GPT-5.4-Cyber, a specialised version of its GPT-5.4 model focused on vulnerability detection in cybersecurity applications. Like Claude Mythos, GPT-5.4-Cyber is available only to select users.

Is this Trump's second stand against Anthropic?

This isn't the first time Trump has taken a stand on an AI concern. A month ago, the US government labelled Anthropic a "supply chain risk," prohibiting federal agencies from using its tools; the company is currently challenging this classification in court. The major public dispute erupted when Anthropic refused the Pentagon's demands to grant unrestricted access to its Claude AI models for potential use in mass domestic surveillance and fully autonomous lethal weapons systems. At the time, Trump sharply criticised Anthropic as a "radical left, woke company," ordered all federal agencies to immediately cease using its technology (with a six-month phase-out directive for the Pentagon), and designated the company a "supply chain risk" to national security. Anthropic has yet to respond to Trump's latest remarks.

Anthropic's Revenue Surge, Iran's Viral AI Propaganda, Allbirds' AI Pivot, and Google-ICE Data-Sharing Scrutiny

Jim Love covers four AI and tech headlines: OpenAI investors are growing uneasy as Anthropic's annualized revenue reportedly surged from about $9B at the end of 2025 to $30B by March, boosting the appeal of its ~$380B valuation against OpenAI's ~$850B valuation and heavy cash burn. Iran's AI-generated Lego-style propaganda videos have gone viral across TikTok, X, and YouTube, highlighting how cheap, culturally fluent generative content can shape opinion and sparking censorship disputes. Footwear brand Allbirds is attempting a dramatic pivot into AI compute leasing after selling its IP and assets for $39M, with shares spiking 582% in a day. And the EFF is urging investigations into Google for allegedly sharing subscriber information with ICE without notifying users, despite prior promises.

Tech startups including Adcendo and Spiral Therapeutics have announced venture capital funding to bolster innovation and business growth, while AI firm Anthropic draws record investor interest.

Adcendo Raises $75 mn

Adcendo announced the closing of an oversubscribed $75 million Series C financing round led by Jeito Capital, with participation from Vida Ventures, BPI France, EIFO, and existing investors. The company focuses on developing antibody-drug conjugates (ADCs) for cancer treatment. The funding will support the advancement of multiple clinical programs, including ongoing Phase I trials and dose-expansion studies across several cancer indications. The company plans to use the proceeds to achieve key clinical milestones and progress its pipeline of targeted therapies. The financing reflects continued participation from both new and existing investors, supporting ongoing research and development in oncology and precision medicine.

Spiral Therapeutics Secures $27 mn

Spiral Therapeutics announced the completion of a $27 million Series B financing round led by Gund Investment, with participation from Advanced Bionics and other investors. The funding supports the development of therapies for inner-ear disorders and includes a strategic collaboration with Advanced Bionics. The partnership focuses on combining drug-delivery technologies with cochlear implant systems to improve patient outcomes. The capital will be used to advance clinical-stage programs and expand research and development activities. Existing investors also participated in the round. The financing marks a milestone for the company as it progresses its therapeutic pipeline and strengthens collaborations with medical technology partners in hearing and auditory health.

Anthropic Valuation Talks Surge as Claude Demand Drives $800 bn Investor Interest

Anthropic is drawing strong investor attention, with venture capital firms reportedly offering valuations of up to $800 billion, more than double its recent $380 billion valuation. In February, Anthropic raised $30 billion, reflecting growing confidence in its AI capabilities. Demand for its flagship model, Claude, has accelerated sharply, pushing annualized revenue beyond $30 billion in 2026, up from $9 billion in 2025. Anthropic is also exploring a potential IPO this year. Meanwhile, its newly launched Mythos model highlights advanced coding and autonomous capabilities, raising both opportunities and cybersecurity concerns.

The rollout of identity verification has drawn criticism from some users, especially those who were attracted to Anthropic for its privacy-first approach. Critics argue that the requirement seems to be a company-led decision rather than a response to regulatory mandates. There are also concerns about the risks of storing sensitive identity data with third-party providers like Persona, despite its widespread use in financial services.

Anthropic's ID verification for Claude introduces accountability but raises serious concerns about privacy, surveillance, and the future of anonymous AI usage.

In a move that could reshape how users interact with artificial intelligence tools, Anthropic has begun requiring government-issued identification for access to certain features on its Claude platform. The decision marks a significant shift in the AI landscape, where identity verification has traditionally not been part of the user experience. According to the company, users attempting to access select functionalities will now need to complete a verification process similar to the Know Your Customer (KYC) checks commonly used in the banking and telecom sectors. This involves uploading a valid government ID, such as a passport, driver's license, or national identity card, and taking a live selfie for authentication. Anthropic specifies that only original physical documents are accepted, ruling out photocopies and digital versions.

While the company has not clearly outlined which features require this verification, it has emphasized that the measure is part of a broader effort to ensure responsible AI usage. Anthropic states that the process helps it "prevent abuse, enforce our usage policies, and comply with legal obligations." The verification is designed to be quick, typically taking around five minutes, and allows multiple attempts if the initial submission fails. It is handled by Persona Identities, a third-party provider specializing in secure identity authentication. Anthropic maintains that user data collected during the process is not used to train its AI models and remains stored on Persona's servers rather than its own systems. The company also notes that accounts may be restricted or banned in certain scenarios, such as when users are under 18 or accessing the platform from unsupported regions.

This development comes shortly after Anthropic experienced a surge in user activity, following its decision to step away from a potential partnership with the U.S. Department of Defense. The company had reportedly expressed concerns over the possible use of its AI models for large-scale domestic surveillance.

Despite Anthropic's assurances, the introduction of ID verification has triggered a wave of concern among users. Critics argue that such measures could erode privacy and set a precedent for stricter regulation of AI usage. One user claimed it may open the door to laws that track all AI use, writing: "Next up will be laws: No AI without gov-issued ID, All AI use tracked to individual - no private AI." Others believe the move could drive users toward competing platforms that do not impose similar requirements. One user reckoned the move may backfire on Anthropic, since rivals such as OpenAI and Google require no such verification, writing: "Anthropic just handed their competitors a gift." As the debate continues, Anthropic's decision raises broader questions about the balance between safety, accountability, and user privacy in the rapidly evolving AI ecosystem.
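For readers wondering what such gating looks like mechanically, below is a minimal, hypothetical sketch of checking verification status before unlocking a feature. None of the states, field names, or rules are Anthropic's or Persona's actual systems; they are invented to illustrate the flow the article describes (ID-plus-selfie verification, with age and region restrictions):

```python
# Hypothetical sketch of gating a feature on identity-verification status.
# The states, field names, and rules below are invented for illustration and
# do not reflect Anthropic's or Persona's actual systems.
from enum import Enum

class VerificationStatus(Enum):
    UNVERIFIED = "unverified"
    PENDING = "pending"      # ID and selfie submitted, awaiting review
    VERIFIED = "verified"
    FAILED = "failed"        # retries are allowed, per the article

GATED_FEATURES = {"advanced_capability"}  # which features are gated is unspecified

def can_access(feature: str, status: VerificationStatus,
               age: int, region_supported: bool) -> bool:
    """Apply the gating rules the article describes: verification for select
    features, plus age and region restrictions on the account itself."""
    if age < 18 or not region_supported:
        return False  # accounts may be restricted for under-18s / regions
    if feature in GATED_FEATURES:
        return status is VerificationStatus.VERIFIED
    return True  # non-gated features remain open to everyone

print(can_access("advanced_capability", VerificationStatus.PENDING, 30, True))  # False
print(can_access("chat", VerificationStatus.UNVERIFIED, 30, True))              # True
```

The point of the sketch is simply that verification becomes a server-side precondition on capability access, which is what critics mean when they say identity is now gating infrastructure.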

Anthropic has rolled out identity verification on Claude as a measure to prevent abuse, enforce usage policies, and comply with legal obligations. As part of the rollout, the AI company is asking select users to hand over a government-issued photo ID and a live selfie when accessing Claude. "We are rolling out identity verification for a few use cases, and you might see a verification prompt when accessing certain capabilities, as part of our routine platform integrity checks, or other safety and compliance measures," Anthropic said. "We only use your verification data to confirm who you are and not for any other purposes," it added.

As listed on its official support page, Anthropic is accepting physical government-issued photo IDs from most countries, such as passports, driver's licenses, and national identity cards. "Your ID must be issued by a government, clearly legible, undamaged, and include a photo of you," the company says. Photocopies and digital versions of IDs are not accepted.

Anthropic assures users that verification data is "used solely to confirm who you are and to meet our legal and safety obligations." "We are not collecting more than we need. We ask for the minimum information required to verify your identity," it says. "Verification data stays between you, Persona, and Anthropic, except where we're legally required to respond to valid legal processes. Your verification data is never shared with third parties for marketing, advertising, or any purpose unrelated to verification and compliance," the company adds.

Coming weeks after millions joined Claude specifically over OpenAI's surveillance deal, the ID verification requirement has not gone down well with Anthropic users. X user Kai (@hqmank) wrote in a post: "Claude now requires government ID verification (via Persona) before subscription. ChatGPT doesn't. Gemini doesn't. Anthropic just handed their competitors a gift." "didnt just add KYC, they collapsed the boundary between identity and thought. once access to intelligence is gated by who you are, it stops being a tool and starts being infrastructure for control," said another user. "This is a bad move for what reason did they do this? I was JUST going to reup on a pro plan... I may just go with Super Grok for now as I already have Gemini Pro," said a third.
(HedgeCo.Net) The artificial intelligence arms race has entered a new -- and arguably unprecedented -- phase. Anthropic, one of the leading frontier AI developers, is reportedly entertaining investment offers that would value the company at a staggering $800 billion. If confirmed, the figure would not only double its valuation from just months ago but would also position the firm among the most valuable private companies in history -- surpassing many publicly traded technology giants and fundamentally reshaping how investors think about AI-driven enterprise value. For venture capitalists, institutional investors, and Wall Street alike, the implications are profound. This is not merely another funding round; it represents a potential inflection point in how capital markets price artificial intelligence, intellectual property, and the future of digital infrastructure.

From Challenger to Cornerstone

Founded in 2021 by former OpenAI researchers, Anthropic has rapidly evolved from a challenger in the generative AI space into a central pillar of the industry's competitive landscape. Its focus on "constitutional AI" -- a framework designed to align machine intelligence with human values -- has resonated strongly with enterprise clients, regulators, and policymakers. While competitors have pursued scale at all costs, Anthropic has positioned itself as a safety-first innovator. This strategic differentiation has proven to be more than philosophical. It has become a commercial advantage, particularly as corporations and governments grow increasingly cautious about deploying large-scale AI systems without robust guardrails. The company's flagship models -- widely viewed as among the most advanced in the world -- have seen rapid adoption across sectors including finance, healthcare, legal services, and defense. These use cases extend beyond simple chatbot functionality, encompassing complex reasoning, workflow automation, and decision support systems. In short, Anthropic is no longer just an AI startup. It is becoming infrastructure.

The $800 Billion Question

The reported $800 billion valuation has sent shockwaves through both Silicon Valley and global capital markets. To put the figure in perspective, it would place Anthropic in the same league as some of the largest publicly traded firms in the world -- despite being privately held and only a few years removed from its founding. Such a valuation raises immediate questions: What justifies this level of pricing? And perhaps more importantly, what does it signal about the future trajectory of AI? At its core, the valuation reflects expectations of exponential growth. Investors are not valuing Anthropic based on current revenues alone -- though those are reportedly accelerating rapidly -- but on its potential to dominate a foundational layer of the global economy. AI is increasingly viewed as a horizontal technology, akin to electricity or the internet. It has applications across virtually every industry, from automating back-office operations to transforming scientific research. If Anthropic can establish itself as a leading provider of this infrastructure, the addressable market is effectively limitless.

The Role of Strategic Capital

A key driver behind Anthropic's meteoric rise has been its ability to attract strategic capital from some of the world's most influential players. Unlike traditional venture funding, which often prioritizes financial returns, these investments are deeply intertwined with long-term partnerships. Technology giants, cloud providers, and enterprise software firms are not merely investing in Anthropic -- they are integrating its models into their ecosystems. This creates a powerful feedback loop: as adoption increases, so does the value of the platform, which in turn attracts more users and capital. The reported involvement of Morgan Stanley and Goldman Sachs in early IPO discussions underscores the growing intersection between Silicon Valley and Wall Street. These institutions are positioning themselves at the center of what could be one of the largest public offerings in history. For banks, the opportunity is twofold. First, there is the immediate financial upside of underwriting a landmark IPO. Second, and perhaps more importantly, there is the strategic value of aligning with a company that could redefine the financial services industry itself.

AI as the New Asset Class

Anthropic's valuation also reflects a broader shift in how investors categorize artificial intelligence. Increasingly, AI is being treated not just as a sector, but as an asset class in its own right. This has significant implications for portfolio construction. Traditional asset allocation frameworks -- built around equities, fixed income, and alternatives -- may need to evolve to accommodate the unique characteristics of AI investments. These include high upfront capital requirements, network effects, and winner-take-most dynamics. For hedge funds and private equity firms, the challenge is identifying where value will ultimately accrue. Will it be captured by model developers like Anthropic? By infrastructure providers such as cloud platforms? Or by application-layer companies that leverage AI to disrupt existing industries? The answer is likely all of the above, but the distribution of returns may be highly uneven. As with previous technological revolutions, a small number of dominant players could capture a disproportionate share of the value.

The IPO Horizon

The prospect of an October 2026 IPO adds another layer of complexity to the story. If Anthropic does go public at or near its rumored valuation, it would represent a watershed moment for equity markets. The offering would likely attract unprecedented demand from institutional and retail investors alike. Given the scarcity of pure-play AI equities, Anthropic could become a cornerstone holding for a wide range of portfolios. However, the transition from private to public markets also introduces new challenges. Public investors tend to be less patient than venture capitalists, placing greater emphasis on near-term financial performance. This could create tension between the company's long-term ambitions and the expectations of its shareholders. Moreover, the IPO would subject Anthropic to increased regulatory scrutiny. As governments around the world grapple with the implications of advanced AI, companies at the forefront of the technology are likely to face evolving compliance requirements.

Risks Beneath the Surface

Despite the optimism surrounding Anthropic, there are significant risks that cannot be ignored. Chief among them is the sustainability of its valuation. At $800 billion, even minor deviations from growth expectations could have outsized impacts on investor sentiment. The history of technology markets is replete with examples of companies that were priced for perfection, only to face sharp corrections when reality fell short. Competition is another critical factor. The AI landscape is intensely competitive, with well-capitalized players vying for dominance. Advances by rivals could erode Anthropic's market position, particularly if they offer comparable capabilities at lower cost. There are also technical and operational challenges. Developing and maintaining cutting-edge AI models requires enormous computational resources and specialized talent. Any disruption in these areas could slow the company's progress. Finally, there is the question of regulation. As AI becomes more integrated into critical systems, governments are likely to impose stricter controls on its development and deployment. While Anthropic's focus on safety may provide some insulation, the regulatory environment remains uncertain.

The Broader Market Impact

Anthropic's valuation is not occurring in a vacuum. It is part of a broader re-rating of technology assets driven by the AI boom. Public markets have already seen significant gains in companies associated with artificial intelligence, from semiconductor manufacturers to software providers. This has created a virtuous cycle, where rising valuations attract more capital, which in turn fuels further innovation and growth. However, it also raises concerns about potential overheating. For hedge funds, the challenge is navigating this environment without becoming overly exposed to a single theme. As BlackRock's recent warning on crowding suggests, the convergence of capital into popular trades can create systemic risks. Anthropic's rise could exacerbate these dynamics. If the company becomes a focal point for AI investment, it may draw even more capital into an already crowded space. This could increase volatility, particularly if sentiment shifts.

A Defining Moment for Venture Capital

For the venture capital industry, Anthropic represents both a triumph and a test. On one hand, it validates the model of backing high-risk, high-reward technologies. On the other hand, it raises questions about valuation discipline and the sustainability of returns. An $800 billion valuation implies enormous expectations for future growth. Meeting these expectations will require not only technological excellence but also effective execution across a range of business functions. For investors, the key question is whether this represents the early stages of a long-term trend or the peak of a speculative cycle. The answer will likely depend on how quickly AI can translate into tangible economic value.

Conclusion

Anthropic's reported $800 billion valuation is more than just a headline -- it is a signal of the transformative potential of artificial intelligence and the willingness of capital markets to bet on that future. If realized, it would mark one of the most significant valuation milestones in modern financial history, reshaping the landscape for venture capital, public equities, and alternative investments alike. Yet, as with any period of rapid innovation and exuberance, caution is warranted. The path from breakthrough technology to sustainable profitability is rarely linear, and the stakes have never been higher. For now, Anthropic stands at the center of the AI revolution -- a company that embodies both the promise and the uncertainty of a new economic era.

Anthropic has begun testing identity verification for its chatbot, Claude, requiring some users to upload government IDs or take a selfie. For a company that has built much of its reputation around privacy, the move has raised a few eyebrows. The San Francisco-based AI giant has quietly started testing identity verification for Claude, and in some cases that means users may have to upload a government ID or even take a live selfie. It's not a blanket rule yet, but it's enough to get people talking. Keep reading to understand what's changing and why it matters.
Jason Statham has fought crime syndicates, giant sharks, and even Dominic Toretto (Vin Diesel) himself, but next year he'll be fighting his most challenging foe yet -- the United States government. That's right: the action icon's highly anticipated sequel, The Beekeeper 2, has unveiled some absolutely bonkers plot details at CinemaCon 2026, along with a sneak peek at brand-new footage. The behind-closed-doors preview has Statham's Adam Clay embark on an action-packed quest involving the kidnapping of the President of the United States, all but assuring that the thrilling sequel is doubling down on the absurd elements that made the original a hit. In a video message to the good folks of CinemaCon, Jason Statham introduced a never-before-seen clip from the highly anticipated return of his character Adam Clay. Speeding down a forested lane, a vehicle is spotted by Adam's target. Just when they think they've got the skilled assassin, they realize it's a trap, and Adam pops out from the shadows. Embarking on a new mission, Adam loads up on weaponry and gets a healing bee treatment. After the Beekeepers kidnap the President, all eyes are on Adam to step in and save the day. Packed with crazy weapons (including a flamethrower), epic fight scenes, and incredible one-liners, it looks like our favorite action hero is back and better than ever. This is a developing story; stay tuned to Collider as updates come in.

The Beekeeper 2 -- Action/Crime/Thriller. Release date: January 15, 2027. Director: Timo Tjahjanto. Writer: Kurt Wimmer. Producers: Jason Statham, Chris Long, Kurt Wimmer.

WASHINGTON, April 16, 2026 (AP) -- The same ChatGPT chatbot that gave OpenAI's chief financial officer Sarah Friar a tilapia recipe for a recent Sunday night dinner at home is also now doing her most mundane tasks at work, like summarizing her emails and Slack messages. Friar and other company executives are banking OpenAI's future on more of the latter as it shifts its focus to business-oriented products while shedding some of its consumer offerings as a pathway to profitability.

OpenAI says it will introduce a new artificial intelligence model for "high-value professional work" as the company faces heightened competition with rival Anthropic in attracting corporate customers to adopt AI assistants in their workplaces. "You'll see a new model coming from us in short order. We feel very excited about it," Friar said in an interview with The Associated Press.

OpenAI boasts of more than 900 million weekly users of its core ChatGPT product, and Friar said about 95% of them "don't pay anything" for the popular chatbot. But while all those interactions build habits and reliance, they also strain the costly computing resources needed to power the company's AI systems and highlight the need for big business customers to help pay the bills. OpenAI, valued at $852 billion, and Anthropic, valued at $380 billion, both lose more money than they make, putting the privately owned San Francisco-based AI research laboratories in a fierce competition to generate more revenue as they race toward becoming publicly traded on Wall Street.

A push to improve performance and sales of OpenAI's business-oriented products -- already Anthropic's bread and butter -- has driven OpenAI to abandon some consumer initiatives, like the AI video generator app Sora. "I think it was a little heartbreaking, but we're like, OK, it's not the main event right now," Friar said. "We need to make sure that our new model that's coming has enough compute."

Codenamed Spud, OpenAI says its "smartest model yet" offers "stronger reasoning, better understanding of intent and dependencies, better follow-through and more reliable output in production." It's part of OpenAI's answer to Anthropic's new Claude Mythos, which Anthropic claims is so "strikingly capable" that it is limiting its use to select customers because of its apparent ability to surpass human cybersecurity experts in finding or exploiting computer vulnerabilities.

Friar, the former CEO of neighborhood social platform Nextdoor, said business customers accounted for about 20% of OpenAI's revenue when she was hired in 2024 as chief financial officer. She said it's now 40% and expected to account for half of OpenAI's sales by the end of the year. It's a sharp turnaround from late last year, when OpenAI co-founder and CEO Sam Altman was promoting a now-shuttered Sora partnership with Disney, launching a plan to sell ads on ChatGPT and floating the idea of letting ChatGPT engage in erotica with paid adult users.

Altman said on the "Mostly Human" podcast earlier this month that a sharper focus was needed -- and Friar agrees. "Tech companies, when they're growing, it's just this natural thing that happens. There's so many cool things you could do," she said, adding that companies can end up doing "really badly" if they do too many things, while "great companies are very good at, in a reasonable period of time, kind of doing that winnowing down and refocusing and it's super painful."

Signaling that shift was the hiring three months ago of former Slack CEO Denise Dresser to be OpenAI's first chief revenue officer. Dresser said in a recent AP interview that she has been laser-focused on meeting with corporate leaders and positioning OpenAI as the go-to platform for workplaces employing AI agents to automate a variety of computer-based job tasks. "It's really clear to me that companies are past the experimentation phase and they're into using AI to do real work," Dresser said. "Leaders at companies are recognizing that AI is probably the most consequential shift of their lifetime."

But those leaders also have a choice, namely Anthropic's Claude, which has become widely used by software professionals. Founded in 2021 by a group of ex-OpenAI leaders who said they wanted to prioritize AI safety, Anthropic has positioned itself as the more responsible AI vendor. The distinction drew attention when President Donald Trump's administration punished the startup after a contract dispute over AI use in the military, and Altman used the opportunity to cement OpenAI's own deal with the Pentagon.

Consumer interest in Anthropic surged, and the company said its annualized revenues hit $30 billion, a higher number than what OpenAI has reported, though they measure it differently. Friar and Dresser declined to reveal OpenAI's latest sales, but both have suggested that Anthropic's number is inflated because it doesn't account for revenue it must share with cloud computing providers Amazon and Google. Even so, it remains a tight competition that's also tied to the health of the stock market and the future of the economy. "They're likely quite close," said Luke Emberson, a researcher at the nonprofit institute Epoch AI. "Certainly the trends show Anthropic is growing much faster than OpenAI. If that continues, they're likely to cross soon."

The urgency led Dresser to send a memo to OpenAI employees on Sunday, first reported by The Verge, that asserted that Anthropic's coding focus "gave them an early wedge" but expressed confidence that OpenAI has the "real structural advantage" as AI usage expands beyond software developers and OpenAI builds enough computing capacity to operate its AI systems. "Their story is built on fear, restriction, and the idea that a small group of elites should control AI," Dresser's memo said of Anthropic. "Our positive message will win over time: build powerful systems, put in the right safeguards, expand access, and help people do more."

But for skeptics of the financial viability of AI products like ChatGPT and Claude, the trajectory of both money-losing companies is alarming as smaller startups increasingly become dependent on their AI tools. Anthropic has already imposed rate limits on heavy users, forcing some to wait for hours to use Claude, and both companies have set up service tiers that reward premium payers, said author and AI critic Ed Zitron. "It's what I call the subprime AI crisis," Zitron said. "People built their lives and they built their businesses on top of these companies that, as they try and save money, will start turning the screws."

One thing that both AI leaders and critics agree on is that it is an expensive technology, though whether it is worth the cost in electricity-hungry AI computers remains to be seen. "People will say, well, 'Once they go public, they're safe.' That's not true," Zitron said. "Public companies can and will die, especially ones that are dependent on $100 billion to $200 billion every year or so, just to keep breathing."
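The user and revenue figures in the piece imply a couple of numbers the article leaves unstated. A quick illustrative computation using only the reported figures; OpenAI's actual paid-user count and sales totals are undisclosed:

```python
# Implied figures from the AP-reported numbers; OpenAI's actual paid-user
# count and revenue totals are not disclosed, so this is arithmetic only.
weekly_users = 900_000_000
free_fraction = 0.95          # Friar: about 95% "don't pay anything"

paying_users = weekly_users * (1 - free_fraction)
print(f"Implied paying users: ~{paying_users / 1e6:.0f}M")  # ~45M

# Business customers' share of revenue over time, per Friar:
# ~20% when she was hired in 2024, ~40% now, ~50% expected by year-end.
for period, share in [("2024", 0.20), ("April 2026", 0.40), ("end of 2026", 0.50)]:
    print(f"{period}: business customers ~{share:.0%} of revenue")
```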
