The latest news and updates from companies in the WLTH portfolio.
The firm says it withheld an AI model on cybersecurity grounds, but sceptics say this was hype to lure investment.

This week, the AI company Anthropic said it had created an AI model so powerful that, out of a sense of overwhelming responsibility, it was not going to release it to the public. The US treasury secretary, Scott Bessent, summoned the heads of major banks for a chat about the model, Mythos. The Reform UK MP Danny Kruger wrote a letter to the government urging it to "engage with AI firm Anthropic whose new frontier model Claude Mythos could present catastrophic cybersecurity risks to the UK". X went wild.

Others were more sceptical, including the noted AI critic Gary Marcus, who said "Dario [Amodei] has far more technical chops than Sam [Altman], but seems to have graduated from the same school of hype and exaggeration," referring to the CEOs of Anthropic and its rival, OpenAI.

It is unclear if Anthropic has built the machine god. What is more apparent is that the San Francisco startup widely seen as the "responsible" AI company is brilliant at marketing. In recent months, Anthropic has enjoyed a 10,000-word profile in the New Yorker, two pieces in the Wall Street Journal, and the front cover of Time magazine, on which Amodei's face was emblazoned, movie-poster style, above the Pentagon and the US defense secretary, Pete Hegseth. Amodei and Anthropic's co-founder, Jack Clark, appeared on two separate New York Times podcasts in February, chewing over questions such as whether their machine was conscious, and whether it might soon "rip through the economy". The company's "resident philosopher" has spoken to the WSJ about whether Claude - a commercial product being used to trade cryptocurrency and designate missile targets - has a "sense of self".

This has all come amid a dustup between Anthropic and the US department of defence, in which Anthropic, despite creating the AI tool used by the Pentagon to strike Iran, has managed to come out looking far better than OpenAI, which offered to help the US military do the same thing but with - maybe - fewer guardrails.

Its media lead, Danielle Ghiglieri, has notched the wins on LinkedIn. "I'm endlessly proud to work at Anthropic," she said of the company's Time cover, tagging the journalists involved in a post about the "mad dash" to get the story over the line. Watching a CBS 60 Minutes segment featuring Amodei "was one of those pinch-me moments," she said. "What made it meaningful wasn't just the platform. It was seeing the story we wanted to tell actually come through." Of the New Yorker profile, by the journalist Gideon Lewis-Kraus, she wrote: "I would be lying if I said I wasn't nervous for our first meeting in person ... working with someone of Gideon's calibre means being pushed to articulate ideas you're still forming, and being OK with that discomfort." ("I bet that's what they all say about you," said my editor.)

Other tech PRs have taken notice. "They are clearly having a moment right now but companies building technology that will change the world deserve equal scrutiny," said one. "They accidentally leaked their own source code last week, then this week they claim stewardship over cyber threats with a new powerful model that only they control. Any other big tech firm would be ridiculed." Anthropic did accidentally release part of Claude's internal source code at the beginning of April. "No sensitive customer data or credentials were involved or exposed," it said.

What does this all mean about Anthropic's undoubtedly powerful Mythos?
The model's capacities were not "substantiated," said Dr Heidy Khlaaf, the chief AI scientist at the AI Now Institute. "Releasing a marketing post with purposely vague language that obscures evidence ... brings into question if they are trying to garner further investment without scrutiny."

"Mythos is a real development and Anthropic was right to treat it seriously," said Jameison O'Reilly, an expert in offensive cybersecurity. But, he said, some of Anthropic's claims, such as that it found thousands of "zero-day vulnerabilities" in major operating systems, matter less to real-world cybersecurity than they might appear to. A zero-day vulnerability is a flaw in software or hardware unknown to its developers. "We have spent over 10 years gaining authorised access to hundreds of organisations - banks, governments, critical infrastructure, global enterprises," said O'Reilly. "In those 10 years, across hundreds of engagements, the number of times we needed a zero-day vulnerability to achieve our objective was vanishingly small."

Other reasons may have contributed to Anthropic's decision not to release Mythos. The company has limited resources, and appears to be struggling to offer enough computing capacity to allow all its subscribers to use its models. It has introduced usage caps on the wildly popular Claude. Recently, it said users would have to purchase extra capacity on top of their subscriptions in order to run third-party tools, such as OpenClaw. At this point, it may simply not have the infrastructure to support the release of a hyped-up new creation.

Like OpenAI, Anthropic is in a race to raise billions of dollars and capture a market - still ill-defined - of people who might lean on its chatbots as friends, romantic partners or deeply personalised assistants, and of companies that might use them to replace human employees. But differences in these products are marginal and impressionistic, mostly down to hard-to-quantify attributes like "sense of self" and "soul" - or rather, what passes for these in an AI agent. The battle is for hearts and minds.

"Mythos is a strategic announcement to show that they're open for business," said Khlaaf, noting that Anthropic's release limitation prevented independent experts from evaluating the company's claims. She suggested we may be "seeing the very same bait and switch playbook that was used by OpenAI, where safety is a PR tool to gain public trust before profits are prioritised", adding: "Anthropic publicity has managed to better obscure this switch than its rivals."

Net buying of Tesla shares by Korean investors falls 71 percent as asset managers race to launch SpaceX ETFs.

South Korean retail investors have cut their holdings in Tesla this year, taking profit and rebalancing portfolios as anticipation grows for a potential initial public offering of SpaceX and as new tax incentives encourage a shift back into domestic assets. Data from the Korea Securities Depository showed that South Korean retail investors, known locally as "Western ants," sharply scaled back their purchases of Tesla shares this year. Net buying of Tesla fell about 71 percent from a year earlier, dropping from more than $2.6 billion last year to around $779 million as of last Friday. The shift reflects profit-taking and a broader portfolio rotation by retail investors, industry observers say. Interest is meanwhile building in SpaceX's planned IPO, which could be one of the largest globally this year, with fundraising of up to $75 billion.

The portfolio shift has been reinforced by a new tax incentive aimed at drawing overseas investment funds back into the domestic market. Under amendments passed by the National Assembly on March 31, South Korean investors who transfer proceeds from overseas stock investments into domestic equities by May can receive up to a 100 percent deduction on capital gains taxes. Data from Samsung Securities showed that assets in its Reshoring Investment Accounts surpassed $75 million within two weeks of launch. The RIA is a tax-incentivized account designed to encourage investors to repatriate funds from foreign stocks, particularly US stocks, into domestic equities. Nvidia and Tesla accounted for a large share of inflows into the accounts, suggesting retail investors are reallocating holdings previously concentrated in US technology stocks.

Market participants said some investors are also looking to redirect proceeds from overseas stock sales into domestic aerospace-themed exchange-traded funds, in part to gain indirect exposure to SpaceX ahead of its potential listing. Reflecting this trend, South Korean asset managers are rolling out related products such as US aerospace-themed ETFs ahead of the listing to attract retail investors seeking exposure to SpaceX. Mirae Asset, Korea Investment Management and Shinhan Asset Management are among firms preparing new products, while Samsung Asset Management has already attracted about $175 million into its aerospace ETF launched last month.

SpaceX has confidentially filed for an IPO with the US Securities and Exchange Commission and could go public as early as mid-June, with a valuation of more than $1.7 trillion. Analysts say the listing could trigger a broader rerating of the global space sector and boost passive inflows if it is added to major indexes such as the S&P 500. Separately, Mirae Asset Securities is weighing ways to offer SpaceX shares to South Korean retail investors in what could be the first dual offering in the United States and Korea. However, regulatory uncertainty remains in Korea. Differences in IPO rules and disclosure requirements between the two countries could complicate the process. The Financial Supervisory Service is reviewing potential risks to investor protection and foreign exchange markets. Industry sources said limiting allocations to institutional investors remains a fallback option if retail participation proves difficult.
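For a sense of scale, here is a minimal sketch of the arithmetic behind that repatriation incentive, assuming Korea's standard 22 percent rate on overseas-stock gains and the usual 2.5 million won basic deduction; the amendment's exact mechanics may differ from this simplification.

```python
# Hypothetical illustration of the repatriation incentive described above.
# Assumes the standard 22% rate (20% income tax + 2% local surtax) on
# overseas-stock gains and a 2.5m-won basic deduction; actual rules may differ.

BASIC_DEDUCTION_KRW = 2_500_000
OVERSEAS_GAINS_RATE = 0.22

def overseas_gains_tax(gain_krw: float, repatriated: bool) -> float:
    """Tax due on realized overseas-stock gains for one tax year."""
    taxable = max(gain_krw - BASIC_DEDUCTION_KRW, 0)
    tax = taxable * OVERSEAS_GAINS_RATE
    # As reported, moving the proceeds into domestic equities by the
    # deadline can earn up to a 100% deduction of this tax.
    return 0.0 if repatriated else tax

gain = 50_000_000  # say, 50m won of realized Tesla gains
print(overseas_gains_tax(gain, repatriated=False))  # 10,450,000 won owed
print(overseas_gains_tax(gain, repatriated=True))   # 0 won under the incentive
```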

Venture capitalist Marc Andreessen said Elon Musk's Starlink is one of the "least understood" technological successes in the world right now, arguing that it overcame decades of failed attempts to build viable satellite internet. This comes at a time when SpaceX has confidentially filed for an initial public offering (IPO), targeting a reported valuation of $1.75 trillion.

Starlink's Rise After Decades Of Satellite Internet Failures

During an appearance on David Senra's podcast last month, Andreessen discussed the rapid growth of SpaceX's Starlink, noting reports that the service has reached millions of subscribers globally. Senra mentioned during the conversation that he is a Starlink user himself. Andreessen compared Starlink to earlier failed satellite internet ventures, including Motorola's Iridium system and the Teledesic project backed by Bill Gates and Craig McCaw. "Elon's not the first guy who said we're going to do satellite-based internet access," Andreessen said. He added that previous efforts, including Teledesic, ended in "complete catastrophe" and financial collapse, while Iridium became a widely cited case study in business failure before later restructuring. He said Musk's approach differed fundamentally because SpaceX already had reusable rocket capability, allowing frequent launches. He explained Musk's logic as building satellites internally rather than waiting for external customers, effectively creating demand through the company's own launch capabilities. Andreessen described the outcome as a "brilliant" integration of engineering and scale, calling it a side project that grew into a major global infrastructure system.

Starlink Expands Airline Deals, Price Cuts And Global Reach

The airline was also previously in talks with Amazon's Project Kuiper (Amazon Leo) as it weighed multiple satellite providers for in-flight Wi-Fi upgrades. The satellite internet race had intensified as SpaceX pushed Starlink Mobile upgrades, promising faster speeds and higher data capacity. Meanwhile, Amazon.com Inc (NASDAQ:AMZN) expanded Project Kuiper through telecom partnerships focused on strengthening existing mobile networks in remote regions. The two companies had pursued different strategies, with SpaceX focusing on direct connectivity and Amazon on infrastructure integration. Musk also confirmed that Starlink had been cutting prices and offering free hardware to expand access, particularly in developing markets. He said the changes were aimed at affordability rather than competition, as the company worked to scale global connectivity.

Australian travellers are facing widespread disruption after dozens of scheduled flights to and from Sydney Airport on Sunday were cancelled due to high winds. The chaos has left the plans of hundreds of travellers up in the air, with Sydney Airport's online flight board showing a flurry of cancellations affecting Virgin Australia, Qantas and Jetstar. As of 5.30pm on Sunday, the airport's website showed 33 cancelled arrivals and 24 cancelled departures, along with dozens of delayed flights.

The nation's air navigation service provider, Airservices Australia, enacted single runway operations at Sydney Airport on Sunday morning. The move was made in order to "manage strong westerly winds affecting the parallel north-south runways", an Airservices Australia spokesperson told news.com.au. "The decision to move to single runway operations was made in co-operation with our airline customers and the Bureau of Meteorology and was in line with international safety regulations for runway crosswinds." Airservices said it issued advance notice of the conditions to airlines last night to assist them in making any necessary changes to flight schedules.

However, there will be delays. "Delays are expected," the spokesperson said. "We will continue to work closely with industry to minimise impacts for the travelling public. "Decisions regarding flight cancellations are a matter for individual airlines and it is recommended that passengers check with their airline carrier for possible changes to their travel arrangements."

A spokesperson from Virgin Australia told news.com.au: "Some services on Virgin Australia's network have been impacted by adverse weather in Sydney today, Sunday 12 April 2026. "The safety of our guests and crew is our top priority, and our meteorologists continue to closely monitor the weather system. "We regret the impact of this on guests' travel plans and are working hard to ensure they reach their destination as soon as possible."
The M4 Prince of Wales Bridge is closed, sparking chaos on the roads for motorists this morning. There are heavy delays on the M4 today after a crash early this morning forced emergency services to close the motorway bridge. At around 8.30am today (Sunday, April 12), the Prince of Wales Bridge was shut in both directions so emergency services could respond to the crash. Teams are said to be clearing debris from the road. Delays are affecting all drivers heading to and from Wales, though two of the three lanes have now reopened both eastbound and westbound since the crash earlier this morning.

SYDNEY -- Australia's major airports descended into widespread disruption on Sunday as 29 flights were cancelled and 183 others delayed across key hubs including Sydney, Melbourne, Brisbane and Perth, leaving thousands of passengers stranded during the busy post-Easter travel period. The chaos, reported across multiple aviation tracking platforms and airline statements as of April 12, 2026, stemmed primarily from a combination of adverse weather conditions, air traffic control restrictions and operational challenges that overwhelmed ground handling and scheduling at the nation's busiest gateways.

Sydney Kingsford Smith Airport bore the heaviest burden, followed closely by Melbourne Tullamarine and Brisbane, with ripple effects felt in domestic and international connections. Airlines including Qantas, Virgin Australia, Jetstar and international carriers such as Cathay Pacific and Air New Zealand scrambled to rebook passengers, issue refunds and adjust schedules, but many travelers faced hours-long waits, missed connections and mounting frustration in crowded terminals. Some passengers reported sleeping on airport floors or scrambling for last-minute hotel rooms as rebooking options filled rapidly.

Scale of the Disruption

According to real-time flight tracking data compiled Sunday, Sydney Airport recorded the highest number of issues, with a significant portion of the day's 183 delays and multiple cancellations concentrated in both domestic and international operations. Melbourne and Brisbane airports followed, contributing heavily to the national tally of 29 outright cancellations. Perth and Canberra also experienced secondary effects, though on a smaller scale. By some estimates, the disruptions affected more than 30,000 passengers when knock-on delays from earlier days in the week are included. Peak holiday travel periods, including return journeys after Easter breaks, exacerbated the situation as demand for flights remained high while capacity shrank.

Qantas, Australia's largest carrier, confirmed multiple delays on key routes and urged customers to check flight status via its app or website. Virgin Australia similarly advised passengers of potential changes, particularly on east coast services. International flights linking to Asia and Europe faced additional pressure from ongoing global rerouting caused by earlier Middle East airspace restrictions, though Sunday's issues appeared driven more by local factors.

Causes Behind the Chaos

Aviation experts pointed to a mix of weather-related problems and systemic pressures. Recent days have seen strong winds, heavy rain and lightning in parts of southeastern Australia, forcing ground staff off tarmacs for safety and slowing turnaround times. Lightning strikes prompted temporary halts at Melbourne Airport earlier in the week, with residual effects lingering into weekend operations. Air traffic control limitations and staffing shortages at major towers added to bottlenecks. Australia's tightly interconnected domestic network means a delay at one hub quickly cascades: a late departure from Sydney can delay incoming aircraft at Melbourne or Brisbane, creating a domino effect across the day's schedule. Aircraft maintenance and recall-related groundings for certain Airbus models in recent weeks have also reduced available fleet capacity, forcing airlines to consolidate flights or operate with tighter margins.
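To make that domino effect concrete, here is a toy sketch of how a single delay can propagate along one aircraft's rotation. The routes, initial delay figure and 15-minute turnaround buffer are illustrative assumptions, not real schedule data.

```python
# A toy model of the cascade described above: one aircraft flies a chain
# of legs, and any delay carries forward minus the turnaround buffer.

def propagate(initial_delay_min: float, legs: list[str],
              buffer_min: float = 15) -> dict[str, float]:
    """Delay inherited by each leg, given a fixed turnaround buffer."""
    delays = {}
    delay = initial_delay_min
    for leg in legs:
        delays[leg] = delay
        delay = max(delay - buffer_min, 0)  # each turnaround absorbs a little
    return delays

rotation = ["SYD-MEL", "MEL-BNE", "BNE-SYD", "SYD-PER"]
print(propagate(60, rotation))
# {'SYD-MEL': 60, 'MEL-BNE': 45, 'BNE-SYD': 30, 'SYD-PER': 15}
```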
One earlier Airbus software-related grounding contributed to longer-term scheduling strain, though most affected planes returned to service quickly. Compounding the issue is the high utilization of Australia's aviation infrastructure. Sydney and Melbourne airports routinely operate near capacity during peak periods, leaving little buffer for unexpected disruptions.

Passenger Impact and Frustration

Social media filled with stories of stranded families, business travelers missing meetings and holidaymakers whose plans unraveled. One passenger at Sydney Airport described waiting six hours for a delayed domestic flight only to learn it had been cancelled, forcing an overnight stay. Others reported skyrocketing prices for alternative flights or rideshares as demand surged. Airlines activated contingency plans, offering rebookings on later services, meal vouchers and accommodation where eligible under Australian consumer protections. However, many passengers noted that rebooking options were limited, with some flights fully booked for days ahead. The Department of Infrastructure, Transport, Regional Development, Communications and the Arts encouraged affected travelers to contact their airlines directly and monitor official airport websites. Transport Minister Catherine King's office stated that officials were working with airports and airlines to restore normal operations as quickly as possible.

Broader Context of Australian Aviation Challenges

Sunday's events form part of a pattern of recurring disruptions in Australian air travel during 2026. Earlier incidents included weather-related cancellations in Adelaide and Melbourne, as well as significant international ripple effects from Middle East conflict in March that forced Virgin Australia flights bound for Doha to turn around mid-journey and left thousands of Europeans and Australians rerouting through longer, more expensive paths via Singapore or the United States. Qantas has responded by adjusting European schedules, increasing services via Singapore and cancelling some direct Perth-Paris routes to maintain reliability on core long-haul corridors. The airline continues to invest in fleet modernization and crew training, but industry analysts note that Australia's vast geography and reliance on a handful of major hubs make the network particularly vulnerable to localized shocks. Consumer advocacy groups have called for stronger passenger rights legislation, including automatic compensation for significant delays and cancellations similar to European Union rules. Current Australian guidelines provide some protections, but enforcement and awareness vary.

Outlook and Recovery Efforts

Airports and airlines worked through the night to clear backlogs, with many delayed flights operating into the early hours of Monday. Forecasters predict improving weather conditions across eastern Australia in coming days, which should ease pressure on ground operations. Travelers are advised to:

* Check flight status multiple times before heading to the airport
* Allow extra time for security and connections
* Contact airlines promptly for rebooking or refund options
* Consider travel insurance that covers disruption

As operations normalize, the incident serves as a reminder of the fragility of modern air travel networks.
With passenger numbers rebounding strongly post-pandemic and infrastructure upgrades still underway at several airports, experts warn that similar disruptions could recur without further investment in resilience measures such as expanded air traffic control capacity and weather-mitigation technologies. For now, thousands of affected passengers hope for smoother skies ahead as Australia's aviation sector recovers from yet another challenging day in an already turbulent 2026 travel year.

Dario Amodei-led Anthropic has reportedly turned to Christian religious leaders for advice as it looks to shape the ethical direction of its AI chatbot Claude, according to a report by The Washington Post. The report said the company hosted around 15 Christian leaders from Catholic and Protestant backgrounds, along with academics and business professionals, at its headquarters in late March. The two-day meeting included discussions and a private dinner with Anthropic researchers.

During the sessions, Anthropic employees reportedly asked for input on how Claude should respond to complex ethical questions. Discussions included how the chatbot should interact with users dealing with grief, how it should handle conversations around self-harm, and what kind of moral framework should guide its responses. Participants also debated broader philosophical questions, including whether an AI system like Claude could be seen as having any form of spiritual value.

"They're growing something that they don't fully know what it's going to turn out as," said Brendan McGuire, a Catholic priest who attended the meeting, as quoted in the report. "We've got to build ethical thinking into the machine so it's able to adapt dynamically."

The report said the meeting is part of Anthropic's broader effort to involve different groups as AI becomes more widely used. A company spokesperson said it is important to engage with a wide range of communities, including religious groups, as AI systems become more influential. Anthropic has been more vocal than many tech companies about the risks linked to advanced AI systems. The Claude chatbot operates using a detailed internal structure, often called a "constitution," which sets the rules for its behavior.

The discussions also come at a time when AI companies are facing increased scrutiny over the impact of their tools, including concerns around safety, ethics and real-world harm. As per the Washington Post report, Anthropic plans to hold similar discussions with other religious and philosophical groups in the future.
* Asha Bhosle died at age 92 after hospitalization for exhaustion and a chest infection
* She dominated the music industry from the late 1950s through the 1980s
* Known as the Queen of Versatility, she blended traditional and Western music styles

New Delhi: The world has lost another talent that gave us countless melodies to swoon to for eternity. Asha Bhosle died at the age of 92. The veteran singer was admitted to Breach Candy Hospital in Mumbai on Saturday, April 11, after suffering extreme exhaustion and a chest infection. Dominating the music industry for almost eight decades, Asha Bhosle was the reigning musical sensation from the late 1950s through the 1970s and 1980s. Her unconventional talent hit a crescendo particularly as her timeless collaborations with RD Burman became a roaring success story.

What Made Her Different

Asha Bhosle carved her own niche in the industry as she blended traditional melodies with the modern, Western-influenced sound of the 60s and 70s. From soulful ghazals to cabaret to foot-tapping rock and roll, Asha Bhosle was crowned with the sobriquet of 'Queen of Versatility.' She chose to define the 'unconventional' path. It was obvious that early in her career, comparisons would be aplenty with her elder sister, the late Lata Mangeshkar, who had a softer tone. Asha Bhosle brought a more agile modulation to her vocal cords. Interestingly, her sharper voice got her playback singing for many more rebellious characters. She replaced the traditional voice of a Hindi film heroine with something magnetic, synonymous with the modern Indian woman, and thus a playback superstar was born. It is because of her knack for experimentation that she even earned herself a spot in the Guinness World Records as the most recorded artist in music history. She could turn everything into music: a whisper here, a giggle there, a murmur in between.

Key Career Milestones

From O P Nayyar, known as the Rhythm King, to the revolution that was RD Burman, Asha Bhosle moulded her singing technique for every kind of composer. Bhosle's brassy vocals impressed Nayyar, which led to hits like "Aaiye Meherbaan" (1958), imbued with a sultry texture. With RD Burman, she reinvented herself with jazz riffs and Latin beats, a masterclass in versatility, giving us chartbusters like "Aaja Aaja Main Hoon Pyar Tera" (1966), a Western experiment that had not yet been tried and tested. Asha Bhosle turned 'unsingable prose' such as "Mera Kuchh Samaan" (1987), which at the time was said to lack structure, into an award-winning melody.

Asha Bhosle was a singer for the masses. Clearly so. At one time, she could record a raunchy cabaret song like "Piya Tu Ab To Aaja" (Caravan, 1971) and suddenly go into hippie mode with "Dum Maro Dum" (Hare Rama Hare Krishna, 1971). In ghazals like "In Aankhon Ki Masti" and "Dil Cheez Kya Hai" (Umrao Jaan, 1981), she surprised listeners once again as she dug deep into her technical and emotional reserves. As time passed, at 62, she collaborated with AR Rahman for "Rangeela Re" (Rangeela, 1995), clearly showcasing that she moved with the times and the generations. Truly a massive loss today, but her music lives on, in every era, for every age.

Anthropic's Claude Mythos has been framed as unusually capable at discovering software security issues, and that has triggered an immediate cybersecurity conversation around how such tooling could be used -- and how defenses may need to evolve. Across multiple reports, experts describe Mythos as pushing beyond normal code assistance into territory that can systematically find and exploit vulnerabilities. The core concern is not only whether an AI can generate attack code, but whether it can rapidly identify weaknesses that would normally take specialized expertise and extended effort. Because of that perceived risk, the rollout appears constrained. Anthropic has limited how widely Mythos is released, citing safety and cybersecurity implications, including concerns about enabling exploitation of systems relied on by users.

This matters for developers and security teams in at least two ways: the same capability that could accelerate attackers' discovery of exploitable flaws could also be turned toward finding and fixing those flaws before release, and either way it shortens the window in which defects must be triaged. A related thread in the coverage is how systems and institutions are preparing for AI-enabled threats. Analysts and banks have been assessing Anthropic's Mythos internally, and public discussions emphasize that the AI security reckoning is likely to land first on engineering pipelines: testing, secure coding, and incident response. Overall, Mythos is being treated less like a generic chatbot update and more like a tool that could accelerate the "find → exploit" timeline, forcing organizations to tighten validation and monitoring.
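What "tighter validation" might look like in practice: a minimal sketch of a build gate that blocks merges when a scanner reports high-severity findings. The findings format and threshold here are hypothetical assumptions, not tied to any particular scanner or to Mythos itself.

```python
# Hypothetical CI gate: read scanner output and fail the build on severe
# findings. Adapt the JSON schema to whatever your scanner actually emits.
import json
import sys

SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}
FAIL_AT = "high"  # block merges on high or critical findings

def gate(findings_path: str) -> int:
    """Return 1 (failing exit code) if any finding meets the threshold."""
    with open(findings_path) as f:
        findings = json.load(f)  # expected: [{"id": ..., "severity": ...}, ...]
    threshold = SEVERITY_ORDER[FAIL_AT]
    blocking = [x for x in findings
                if SEVERITY_ORDER.get(x["severity"], 0) >= threshold]
    for x in blocking:
        print(f"BLOCKING: {x['id']} ({x['severity']})")
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```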

SpaceX ranks as the fourth-biggest corporate entity publicly known to hold bitcoin.

According to The Information's Friday report, SpaceX recorded approximately $5 billion in losses for 2025. This represents a dramatic shift from the company's roughly $8 billion in profits during the previous year. The revenue picture tells a different story. SpaceX generated $18.5 billion in 2025, representing growth from the estimated $15 billion to $16 billion recorded in 2024. However, operational expenses related to absorbing xAI -- Elon Musk's AI venture purchased in February 2025 -- exceeded total income.

Yet throughout this financial turbulence, SpaceX left its bitcoin reserve completely untouched. Blockchain analytics platform Arkham Intelligence confirms the company maintains 8,285 BTC stored with Coinbase Prime, currently valued at approximately $603 million. The most recent wallet activity occurred roughly four months ago during an internal reorganization. Two separate transactions -- 614 BTC and 1,021 BTC respectively -- transferred between wallets controlled by SpaceX. Zero bitcoin entered the market. The dollar value of SpaceX's position exceeded $1.6 billion when Bitcoin reached its peak in October 2025. The actual BTC quantity has remained unchanged since mid-2024. This positions SpaceX as the fourth-largest publicly known corporate bitcoin treasury, trailing only Strategy, Marathon Digital, and Riot Platforms.

For an organization gearing up for public markets while absorbing a $5 billion deficit, maintaining over $600 million in a high-volatility asset represents a deliberate strategic decision. SpaceX has shown no indication of liquidating the holding to shore up its balance sheet. Reports from CoinDesk last month confirmed SpaceX submitted IPO documentation. Once these filings become accessible to the public, the bitcoin treasury will appear in official financial disclosures for the first time. This timing coincides with updated FASB accounting standards implemented in late 2025, which require companies to value cryptocurrency holdings at current market rates, meaning Bitcoin's price fluctuations will directly impact SpaceX's reported financial performance. After SpaceX completes its public offering, the bitcoin position will face the same scrutiny as every other asset on the balance sheet. Market participants and financial analysts will monitor the holding through regular quarterly reports.

Maintaining the position through a $5 billion loss indicates leadership considers bitcoin a strategic treasury reserve rather than a speculative investment. SpaceX joins a selective but expanding group of corporations adopting this bitcoin treasury approach. While Strategy maintains the dominant position by a substantial margin, SpaceX's $603 million holding establishes it as a significant participant in this emerging category. Arkham's blockchain tracking reveals no recent withdrawals. Current on-chain verification confirms SpaceX's complete 8,285 BTC position remains undisturbed.
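A rough sketch of what that fair-value rule means in practice, using the holding size from the article and the approximate per-coin prices its dollar figures imply; the reporting dates and exact prices are illustrative assumptions.

```python
# Back-of-envelope view of fair-value accounting: under the updated FASB
# standard, the holding is remeasured at the market price each reporting
# date, and the change flows through reported earnings.

HOLDING_BTC = 8_285  # per Arkham Intelligence, as cited above

def remeasure(prior_price: float, current_price: float) -> tuple[float, float]:
    """Return (carrying value, unrealized gain/loss recognized in earnings)."""
    value = HOLDING_BTC * current_price
    gain_loss = HOLDING_BTC * (current_price - prior_price)
    return value, gain_loss

# Peak (~$1.6B implies roughly $193k/BTC) down to today (~$603M implies
# roughly $72.8k/BTC) -- illustrative prices derived from the article:
value, pnl = remeasure(193_000, 72_800)
print(f"carrying value ${value/1e6:,.0f}M, swing ${pnl/1e6:,.0f}M")
# carrying value $603M, swing $-996M across that span
```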

Anthropic, a U.S. artificial intelligence (AI) startup, has sparked debate over AI and cybersecurity threats with its cutting-edge AI model 'Mythos'. Proponents argue that powerful AI models can help identify security vulnerabilities in public- and private-sector systems in advance, while others worry that they could instead leave systems more exposed to AI-enabled cyberattacks. The U.S. administration, up to and including the vice president, is moving urgently, seeking responses to AI risks not only at the government level but also together with major Big Tech firms and financial institutions.

As of the 12th, according to industry sources, Mythos is an AI model that Anthropic has released only to about 40 to 50 companies on a limited basis to prevent potential misuse by hackers, stating that "its ability to detect software security vulnerabilities exceeded that of most people except top experts." Through 'Project Glasswing', launched with more than ten technology, security, and financial companies including Nvidia, Amazon, Cisco, and JPMorgan Chase, Anthropic agreed to jointly conduct "defensive (cyber) security operations" based on a preview version of Mythos.

Some suggest that Anthropic is exaggerating Mythos's power as a matter of corporate promotion. However, Mythos, which has shown high performance across various metrics such as coding and reasoning, has indeed achieved groundbreaking results in detecting software defects (bugs). It demonstrated the ability to uncover large numbers of 'zero-day vulnerabilities', flaws that even developers or system administrators have not recognized and for which no security patches therefore exist, such as discovering a bug in the OpenBSD operating system for the first time in 27 years. It is also reportedly capable of automatically generating exploit code after detecting a bug. In other words, one cannot rule out the possibility that non-experts without hacking skills could use Mythos to infiltrate systems.

In response, the U.S. administration held countermeasure meetings at various levels after Mythos was released. The Wall Street Journal (WSJ) reported on the 11th (local time) that "the risks of AI have emerged as a top priority for the Donald Trump administration," and that White House National Cyber Director Sean Kean-Cross is working with officials from relevant departments to identify security vulnerabilities in critical national infrastructure and to strengthen government systems that AI could exploit. Earlier, Vice President JD Vance and Treasury Secretary Scott Bessent also held a conference call with heads of Big Tech firms and major banks to discuss response measures to potential AI-driven cyberattacks. Federal Reserve (Fed) Chair Jerome Powell also took part in the call. Mantas Mazeika, a researcher at the AI Safety Institute, told the Christian Science Monitor that "a comprehensive recognition of the cyber risks posed by AI has begun."

New subscription offers significantly more Codex usage for heavy coding tasks.

MUMBAI: OpenAI has just raised the stakes in the AI coding arms race by giving power users a much bigger slice of the pie. The company has introduced a new $100-per-month ChatGPT Pro subscription tier, aimed squarely at competing with Anthropic in the fast-growing AI coding space. The new plan provides five times more Codex usage than the existing $20 Plus tier and is specifically designed for longer, high-effort coding sessions.

According to OpenAI's announcement on X, the Pro tier will continue to include access to all existing Pro features, including its exclusive Pro model and unlimited usage of Instant and Thinking models. As part of a limited-time promotion running until 31 May, new subscribers to the $100 plan will receive up to ten times the Codex usage of ChatGPT Plus to support more ambitious development projects. The company also noted that the current Codex promotion for Plus users will end, with usage being rebalanced to allow more frequent sessions throughout the week rather than heavy daily limits. The $20 Plus plan will remain the main offering for everyday use, while the new $100 tier targets heavier, more consistent workloads. OpenAI's broader subscription lineup continues to include a $200 Pro tier, an $8 Go plan, and a free tier.

Earlier this week, CEO Sam Altman revealed that the Codex AI coding agent had reached three million users, with usage limits reset at every million-user milestone. The launch closely mirrors Anthropic's pricing structure, which includes a Max 5x tier at $100 per month and a Max 20x tier at $200 per month.

The move comes amid reports that OpenAI has initiated a "code red" internal strategy to counter Anthropic's growing dominance in AI coding tools. The company is shifting focus toward professional developer tools while reportedly scaling back or pausing other projects, including further development of its Sora video generator. OpenAI has also confirmed plans to build a desktop "superapp" that integrates ChatGPT, Codex, and its Atlas AI browser into a single unified platform.

In the competitive world of AI coding assistants, OpenAI is clearly signalling it won't be outspent or outbuilt. With the new $100 Pro tier, the company is giving serious developers more firepower and sending a clear message to rivals that the race is far from over.
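One way to read those numbers: Codex usage per dollar across the tiers as described. The "units" below are relative to the Plus plan's allotment, an assumption made purely for comparison.

```python
# Quick value-proposition check from the figures in the article above.
# Units are relative to the $20 Plus plan's Codex allotment (assumed = 1).
tiers = {
    "Plus ($20)":           (20, 1),    # baseline
    "New Pro ($100)":       (100, 5),   # 5x Codex usage
    "New Pro promo ($100)": (100, 10),  # limited-time 10x until 31 May
}
for name, (price, units) in tiers.items():
    print(f"{name}: {units / price:.2f} units per dollar")
# Plus: 0.05 -- New Pro: 0.05 -- promo: 0.10. The standard $100 tier buys
# scale at the same per-dollar rate; the promo doubles usage per dollar.
```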

* A heavy lighting fixture fell on a woman at Coachella's Do LaB stage, causing serious injury
* The incident forced a temporary halt to performances while medical teams attended the injured
* Strong winds were reported to have caused the lighting equipment to fall during John Summit's set

A high-energy evening at the Coachella music festival turned chaotic after a heavy lighting fixture fell into the crowd, injuring at least one attendee and forcing a temporary halt to performances at a popular stage. The incident occurred Friday evening during a set by John Summit at the Do LaB stage, an area known for its immersive electronic music acts and elaborate installations. Eyewitnesses said the equipment dropped from above and struck a woman in the crowd, leaving her with a serious head injury, the New York Post reported. One concertgoer, who was standing nearby, described the aftermath as alarming, saying the impact caused visible bleeding and immediate panic among those in the vicinity. "There was blood all over the light, and then there was blood on the ground," he said.

Videos from the scene later showed the fallen structure lying on the ground beneath the stage's signature decor, with several people attempting, but failing, to move it due to its weight. The situation prompted organisers to temporarily shut down the stage while security and medical teams responded. In clips circulating on social media, staff can be heard informing the audience about the incident and urging caution.

Early indications suggest strong winds may have contributed to the accident. Gusty conditions were reported across the festival grounds throughout the day, disrupting multiple performances and even leading to cancellations. Electronic artist Anyma reportedly called off a scheduled set, while other areas of the venue experienced damage, including tents and canopies being blown away. When attendees returned later in the night, parts of the affected area had been cordoned off as organisers assessed the situation. The incident has since raised fresh concerns about crowd safety and infrastructure management at large-scale music festivals.

What is Coachella?

The Coachella Valley Music and Arts Festival is one of the world's most prominent music festivals, held annually in California. It features a mix of global artists across genres like pop, rock, hip-hop, and electronic music, alongside large-scale art installations and immersive stage designs. Known for its celebrity attendance and cultural influence, Coachella draws tens of thousands of fans each year and is considered a major event in the global entertainment calendar.

In today's column, I examine the recent inadvertent leak of various AI components that are internal to the widely popular agentic assistant app of Anthropic, Claude Code, which in turn has stoked renewed street cred for advocates of neuro-symbolic AI, though not everyone is sold on what the leaked code proves or doesn't prove.

Back up for a moment to see the big picture. Some believe that generative AI and large language models (LLMs) are nearing their furthest feasible capabilities. Conventional LLM approaches have gone as far as they can go. A dead-end is ahead. An alternative means for advancing AI consists of hybrid AI that makes use of not only LLM components, known as subsymbolic AI, but also symbolic AI that is reminiscent of the technology used during the expert systems and knowledge-based systems era. The aim is to leverage logic-based programming alongside artificial neural networks (ANNs).

The leaked code of Claude Code has been quickly proclaimed as a revelation on this thorny topic. You see, it turns out that the powerful Claude Code agentic AI assistant appears to contain a mixture of subsymbolic AI and symbolic AI system components. This instantly drew praise from those in the hybrid AI camp. If Anthropic is going that route, certainly this affirms the value of combining the subsymbolic and symbolic. Others outside that camp were more cautious in reaching any such conclusions and asserted that the presumed symbolic AI elements were merely incidental and unremarkable. Let's talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Agentic AI Explained

I will begin by laying a foundation regarding the nature of AI agents. AI agents are the hottest new realm of AI. Get yourself ready because in the next year or two, the use of AI agents will be nearly ubiquitous. Mark my words.

This is what AI agents are all about. Imagine that you are using generative AI to plan a vacation trip. You would customarily log into your generative AI account, such as making use of ChatGPT, GPT-5, GPT-4o, Claude, Gemini, Llama, Grok, CoPilot, etc. The planning of your trip would be easy due to the natural language fluency of generative AI. All you need to do is describe where you want to go, and then seamlessly engage in a focused dialogue about the pluses and minuses of places to stay and the transportation options available. When it comes to booking your trip, the odds are that you would have to exit generative AI and start accessing the websites of the hotels, amusement parks, airlines, and other locales to buy your tickets. Relatively few of the major generative AIs available today will take that next step on your behalf. It is up to you to perform those nitty-gritty tasks.

This is where agents and agentic AI come into play. In earlier days, you would undoubtedly phone a travel agent to make your bookings. Though there are still human travel agents, another avenue would be to use an AI-based agent that is built on generative AI. The AI has the interactivity that you expect of generative AI. It has also been preloaded with a series of routines or sets of tasks that underpin the efforts of a travel agent. Using everyday natural language, you interact with the agentic AI, which works with you on your planning and can proceed to deal with the booking of your travel plans.
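As a rough illustration of the pattern being described, here is a minimal sketch of an agent loop in which a model chooses tools and the tools do the booking. Every name here (plan_step, search_flights, book_flight) is hypothetical scaffolding for illustration, not Anthropic's or any vendor's real API.

```python
# Minimal sketch of a tool-calling agent loop: the model plans in natural
# language, then invokes tools (APIs) to act. All names are hypothetical.

def search_flights(origin: str, dest: str, date: str) -> list[dict]:
    ...  # would call an airline API and return candidate flights

def book_flight(flight_id: str) -> str:
    ...  # would call a booking API and return a confirmation code

TOOLS = {"search_flights": search_flights, "book_flight": book_flight}

def run_agent(goal: str, plan_step):
    """Loop: ask the model for the next tool call until it says it's done.

    plan_step is any callable (e.g. an LLM wrapper) that maps the dialogue
    history to a dict like {"tool": name, "args": {...}} or
    {"tool": "done", "answer": text}.
    """
    history = [("user", goal)]
    while True:
        action = plan_step(history)          # subsymbolic planning step
        if action["tool"] == "done":
            return action["answer"]
        result = TOOLS[action["tool"]](**action["args"])  # act via an API
        history.append(("tool", result))     # feed the result back in
```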
Agentic AI takes conventional generative AI to a whole new level of usage. With agentic AI, the AI performs tasks semi-autonomously, rather than merely interacting with you. The AI agent will use an API to connect with an airline and book your flight, and use another API to converse with a hotel reservation system and find you an available room. You don't need to lift a finger. Overall, action is a paramount keystone of AI agents. They can chat, plus they can act and get things done. For more on my discussion about the ins and outs of agentic AI, see the link here.

Neuro-Symbolic AI Is Another Possible Future

Shifting gears, I'd like to bring you up to speed about neuro-symbolic AI. Neuro-symbolic AI is a two-fer combination of sorts, a proverbial two-for-one special. You take the prevailing uses of artificial neural networks (ANNs) that currently sit at the core of generative AI and LLMs, and mix that brew with rules-based or expert systems (this approach is also referred to as sub-symbolic AI getting combined with symbolic AI). The idea is that you aim to get the best of both worlds. ANNs are primarily data-based ways to undertake AI, while rules-based systems are a logic-based approach. Many such efforts are already underway; see my discussion at the link here.

Not everyone supports the idea of neuro-symbolic or hybrid AI. A frequent criticism of neuro-symbolic AI is that the prior era of AI consisted of rules-based systems -- those were eventually harshly judged as either ineffective or untenable. Critics warn that we ought not to slip back to old and now-dismissed ways of doing things. A counterargument is that the weaknesses or limitations of rules-based systems can be shored up by incorporating or intermixing them with ANNs. Likewise, the limitations of ANNs can be radically uplifted by combining them with rules-based systems. The positioning is that we should mix the two together. It shouldn't be an all-or-nothing competition. Thus, rather than tossing out the logic-based approach as an older, hackneyed technique, we can give the still-promising AI approach a second chance. Of course, some believe it is resurrecting something that already should have had a hefty stake put through its very heart. In my view, the synergy of utilizing both capabilities in a unified manner is very promising. There are ardent believers that it is a viable path toward pinnacle AI, such as attaining artificial general intelligence (AGI).

Heated Debate About Hybrid AI

Within the AI community, there is an ongoing heated debate about neuro-symbolic AI. Maybe we are wasting time and effort by exploring neuro-symbolic AI. On the other hand, maybe we are putting too many eggs in one basket by focusing solely on traditional generative AI and LLMs. A strident case can be made on either side of the coin. There is little doubt that generative AI and LLMs have been quite an alluring form of AI. Billions of dollars have been invested in such AI. The world is well aware of the incredible capabilities of LLMs. In addition, agentic AI is taking generative AI to a new level of usage. Trying to point at neuro-symbolic AI as a next-generation candidate is challenging because there aren't yet standout examples that showcase the power of hybrid AI. Those in the neuro-symbolic camp are always eyeing possible examples that can illustrate the value of the hybrid approach. Well, that day recently arrived.
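Before turning to that example, a minimal sketch of what the subsymbolic-plus-symbolic pairing can look like in code: a learned component proposes an action, and explicit, auditable rules accept or reject it. The rule set and the llm_propose hook are illustrative assumptions, not anyone's production design.

```python
# Toy neuro-symbolic step: the LLM proposes, hard-coded logic disposes.
# llm_propose is a hypothetical stand-in for any LLM call that returns a
# structured action, e.g. {"type": "refund", "order_id": "A1", "amount": 20}.

RULES = [
    # (description, predicate over the proposed action)
    ("refunds require an order id",
     lambda a: a["type"] != "refund" or "order_id" in a),
    ("amounts must be positive",
     lambda a: a.get("amount", 1) > 0),
]

def neuro_symbolic_step(prompt: str, llm_propose) -> dict:
    """Subsymbolic proposal, symbolic validation."""
    action = llm_propose(prompt)            # learned, fluent, fallible
    for desc, rule in RULES:                # explicit, auditable, rigid
        if not rule(action):
            raise ValueError(f"rejected by rule: {desc}")
    return action
```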
On March 31, 2026, there was an accidental leak of source code for some of the components of the agentic AI by Anthropic, known as Claude Code. Claude Code is right now one of the headline-grabbing instances of agentic AI. Anyone in the agentic AI realm watches Claude Code like a hawk, wanting to see the various actions it can take. Claude Code is a role model of sorts.

The source code leak consisted of around 500,000 lines of TypeScript that were spread across nearly 2,000 files. All manner of researchers and anyone interested in the inner workings of Claude Code pored through the leaked files. They found features that haven't yet been switched on. They found architectural definitions on how the AI was put together. It was like opening a treasure chest of prized gold and jewels. And, within that treasure chest, a file named print.ts contained a series of coding-like logic statements. The listing was of a roughly 3,000-line function that had almost 500 branch points and a dozen levels of nesting. This is the smoking gun, some insist, providing the hoped-for proof that symbolic AI is essential, which certainly must be the case if the heralded agentic AI of Claude Code makes use of it.

Fistfights Aplenty Among Camps

Those in the neuro-symbolic camp were quick to praise Anthropic for their use of symbolic AI techniques. Claude Code now serves as an excellent and exhilarating example for all to see. Without the inadvertent leak, no one other than insiders within Anthropic would know that the logic-based avenue was being utilized. A helpful and encouraging byproduct of the leak was that, finally, there was highly valued proof that neuro-symbolic is integral to the advancement of AI. Champagne bottles were opened. Parties were held in the hallways of neuro-symbolic AI researchers. The annals of AI history will mark the day that the Claude Code leak took place.

Whoa, replied those on the other side of the contention. First, the logic-based code was brutally ridiculed as a mess of spaghetti. Is that type of morass the cornerstone of the future of AI? No, thank you, came the harsh retort. Secondly, the messiness raised fundamental doubts about how the logic-based code came into being. Maybe the code was written a snippet at a time. It kept being extended and expanded. This implies that there wasn't a coordinated or mindful effort underway. Instead, it was something easy to fall back on, and eventually it just took on a life of its own. Think of this as a quick fix. Put aside any belief that this was a smartly devised route. It was merely an afterthought, and nothing more than that. Thirdly, and the final nail in the coffin, was that the logic-based code seems to principally deal with aspects peripheral to the meaty aspects of AI. Some of the code does error handling. Some of it does ordinary authentication. Little of this code, if any, performs the type of symbolic AI work that you would have seen in a full-bodied rules-based or expert system. In that sense, you cannot reasonably or fairly compare this code to the knowledge-based systems era. Any such comparison wrongly equates apples with oranges.

Return Of The Jedi

Not to be deterred, the neuro-symbolic camp has proffered that the code does substantiate that logic-based or symbolic approaches are needed, regardless of what they might be doing for the AI system. No matter what kind of mud you sling, you cannot refute that the code is there, it is seemingly integral to Claude Code, and the developers opted to go that route.
Whether they did so with great conviction or simply for ease of convenience, do not get yourself into a tizzy. Accept reality. The war of words continues.

How will the history books depict this circumstance? It is hard to say. If neuro-symbolic AI does take off and becomes the next grand hero of AI, the odds are that the Claude Code leak will get a nice bit of prominence in the recounting of AI advancements. But if neuro-symbolic AI never gets off the ground, the Claude Code incident will be at most a tiny footnote somewhere. Few will remember the incident, and dust will collect on the stories that are right now making headlines. I am hoping for the former, namely that neuro-symbolic AI will prove to be a best-case forward path.

Maybe if the leaked code contained the immortal words of Darth Vader (spoiler alert!), "No, I am your father", this would have brought the Force to the side of the neuro-symbolic AI camp and provided a New Hope. Do or do not. There is no try.

This article was originally published on Forbes.com

In today's column, I examine the recent inadvertent leak of various AI components that are internal to the widely popular agentic assistant app of Anthropic, Claude Code, which in turn has stoked renewed street cred for advocates of neuro-symbolic AI, though not everyone is sold on what the leaked code proves or doesn't prove. Back up for a moment to see the big picture. Some believe that generative AI and large language models (LLMs) are nearing their furthest feasible capabilities. Conventional LLM approaches have gone as far as they can go. A dead-end is ahead. An alternative means for advancing AI consists of hybrid AI that makes use of not only LLM components, known as subsymbolic AI, but also uses symbolic AI that is reminiscent of the technology used during the expert systems and knowledge-based systems era. The aim is to leverage logic-based programming with the use of artificial neural networks (ANNs). The leaked code of Claude Code has been quickly proclaimed as a revelation on this thorny topic. You see, it turns out that the powerful Claude Code agentic AI assistant appears to contain a mixture of subsymbolic AI and symbolic AI system components. This instantly drew praise from those in the hybrid AI camp. If Anthropic is going that route, certainly this affirms the value of combining the subsymbolic and symbolic. Others outside that camp were more cautious in reaching any such conclusions and asserted that the presumed symbolic AI elements were merely incidental and unremarkable. Let's talk about it. This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). Agentic AI Explained I will begin by laying a foundation regarding the nature of AI agents. AI agents are the hottest new realm of AI. Get yourself ready because in the next year or two, the use of AI agents will be nearly ubiquitous. Mark my words. This is what AI agents are all about. Imagine that you are using generative AI to plan a vacation trip. You would customarily log into your generative AI account, such as making use of ChatGPT, GPT-5, GPT-4o, Claude, Gemini, Llama, Grok, CoPilot, etc. The planning of your trip would be easy due to the natural language fluency of generative AI. All you need to do is describe where you want to go, and then seamlessly engage in a focused dialogue about the pluses and minuses of places to stay and the transportation options available. When it comes to booking your trip, the odds are that you would have to exit generative AI and start accessing the websites of the hotels, amusement parks, airlines, and other locales to buy your tickets. Relatively few of the major generative AIs available today will take that next step on your behalf. It is up to you to perform those nitty-gritty tasks. This is where agents and agentic AI come into play. In earlier days, you would undoubtedly phone a travel agent to make your bookings. Though there are still human travel agents, another avenue would be to use an AI-based agent that is based on generative AI. The AI has the interactivity that you expect with generative AI. It has also been preloaded with a series of routines or sets of tasks that underpin the efforts of a travel agent. Using everyday natural language, you interact with the agentic AI, which works with you on your planning and can proceed to deal with the booking of your travel plans. 
Agentic AI takes conventional generative AI to a whole new level of usage. With agentic AI, the AI performs tasks semi-autonomously, rather than merely interacting with you. The AI agent will use an API to connect with an airline and book your flight, and use another API to converse with a hotel reservation system and find you an available room. You don't need to lift a finger. Overall, action is a paramount keystone of AI agents. They can chat, plus they can act and get things done. For more on my discussion about the ins and outs of agentic AI, see the link here. Neuro-Symbolic AI Is Another Possible Future Shifting gears, I'd like to bring you up-to-speed about neuro-symbolic AI. Neuro-symbolic AI is a two-fer combination of sorts, a proverbial two-for-one special. You take the prevailing uses of artificial neural networks (ANN) that are currently being used at the core of generative AI and LLMs, and mix that brew with rules-based or expert systems (this approach is also referred to as the sub-symbolic AI getting combined with symbolic AI). The idea is that you aim to get the best of both worlds. ANNs are primarily data-based ways to undertake AI, while rules-based systems are a logic-based approach. Many such efforts are already underway; see my discussion at the link here. Not everyone supports the idea of neuro-symbolic or hybrid AI. A frequent criticism of neuro-symbolic AI is that the prior era of AI consisted of rules-based systems -- those were later eventually harshly judged as either ineffective or untenable. Critics warn that we ought not to slip back to old and now-dismissed ways of doing things. A counterargument is that the weaknesses or limitations of rules-based systems can be shored up by incorporating or intermixing them with ANNs. Likewise, the limitations of ANNs can be radically uplifted by combining with rules-based systems. The positioning is that we should mix the two together. It shouldn't be an all-or-nothing competition. Thus, rather than tossing out the logic-based approach as an older hackneyed technique, we can give the still-promising AI approach a second chance. Of course, some believe it is resurrecting something that already should have had a hefty stake put through its very heart. In my view, the synergy of utilizing both capabilities in a unified manner is very promising. There are ardent believers that it is a viable path toward pinnacle AI, such as attaining artificial general intelligence (AGI). Heated Debate About Hybrid AI Within the AI community, there is an ongoing heated debate about neuro-symbolic AI. Maybe we are wasting time and effort by exploring neuro-symbolic AI. On the other hand, maybe we are putting too many eggs in one basket by focusing solely on traditional generative AI and LLMs. A strident case can be made on either side of the coin. There is little doubt that generative AI and LLMs have been quite an alluring form of AI. Billions of dollars have been invested in such AI. The world is well aware of the incredible capabilities of LLMs. In addition, agentic AI is taking generative AI to a new level of usage. Trying to point at neuro-symbolic AI as a next-generation candidate is challenging because there aren't yet standout examples that showcase the power of hybrid AI. Those in the neuro-symbolic camp are always eyeing possible examples that can illustrate the value of the hybrid AI approach. Well, that day recently arrived. 
Well, that day recently arrived.

On March 31, 2026, source code for some components of Anthropic's agentic AI, Claude Code, was accidentally leaked. Claude Code is right now one of the headline-grabbing instances of agentic AI. Anyone in the agentic AI realm watches Claude Code like a hawk, wanting to see the various actions it can take. Claude Code is a role model of sorts.

The leak consisted of around 500,000 lines of TypeScript spread across nearly 2,000 files. Researchers and anyone else interested in the inner workings of Claude Code pored through the leaked files. They found features that haven't yet been switched on. They found architectural definitions showing how the AI was put together. It was like opening a treasure chest of prized gold and jewels. And within that treasure chest, a file named print.ts contained a series of coding-like logic statements: a roughly 3,000-line function with almost 500 branch points and a dozen levels of nesting. This is the smoking gun, some insist, providing the hoped-for proof that symbolic AI is essential -- which certainly must be the case if the heralded agentic AI of Claude Code makes use of it.

Fistfights Aplenty Among Camps

Those in the neuro-symbolic camp were quick to praise Anthropic for its use of symbolic AI techniques. Claude Code now serves as an excellent and exhilarating example for all to see. Without the inadvertent leak, no one other than insiders at Anthropic would know that the logic-based avenue was being utilized. A helpful and encouraging byproduct of the leak was that, finally, there was highly valued proof that neuro-symbolic AI is integral to the advancement of AI. Champagne bottles were opened. Parties were held in the hallways of neuro-symbolic AI researchers. The annals of AI history will mark the day the Claude Code leak took place.

Whoa, replied those on the other side of the contention. First, the logic-based code was brutally ridiculed as a mess of spaghetti. Is that type of morass the cornerstone of the future of AI? No, thank you, came the harsh retort. Second, the messiness raised fundamental doubts about how the logic-based code came into being. Maybe the code was written a snippet at a time, extended and expanded until it took on a life of its own. That would imply there wasn't a coordinated or mindful effort underway; it was simply something easy to fall back on. Think of it as a quick fix and an afterthought, not a smartly devised route. Third, and the final nail in the coffin, the logic-based code seems principally to deal with matters peripheral to the meaty parts of AI. Some of the code does error handling. Some of it does ordinary authentication. Little of this code, if any, performs the kind of symbolic AI work you would have seen in a full-bodied rules-based or expert system. In that sense, you cannot reasonably or fairly compare this code to the knowledge-based systems era. Any such comparison wrongly equates apples with oranges.
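To see why the same file can strike one camp as spaghetti and the other as symbolic AI, compare two TypeScript renderings of the same toy retry logic. Neither comes from the leaked print.ts; both are invented solely to illustrate the shapes being argued over. Style A is the accreted nested branching that draws ridicule; style B expresses the identical decisions as a declarative rule table, closer in spirit to classic rules-based systems.

// Style A: accreted branching -- the shape critics deride as spaghetti.
// A return value of -1 means "do not retry".
function retryDelayA(status: number, attempt: number, streaming: boolean): number {
  if (status === 429) {
    if (streaming) {
      if (attempt < 3) { return 1000 * attempt; }
    } else {
      if (attempt < 5) { return 500 * attempt; }
    }
  } else if (status >= 500) {
    if (attempt < 3) { return 2000; }
  }
  return -1;
}

// Style B: the same decisions as a declarative rule table, evaluated in order.
type RetryRule = {
  when: (status: number, attempt: number, streaming: boolean) => boolean;
  delayMs: (attempt: number) => number;
};
const retryRules: RetryRule[] = [
  { when: (s, a, str) => s === 429 && str && a < 3, delayMs: (a) => 1000 * a },
  { when: (s, a, str) => s === 429 && !str && a < 5, delayMs: (a) => 500 * a },
  { when: (s, a) => s >= 500 && a < 3, delayMs: () => 2000 },
];
function retryDelayB(status: number, attempt: number, streaming: boolean): number {
  const rule = retryRules.find((r) => r.when(status, attempt, streaming));
  return rule ? rule.delayMs(attempt) : -1;
}

Note, too, what this toy logic is about: retries and error handling -- exactly the "peripheral" territory the sceptics say the leaked code occupies, rather than full-bodied symbolic reasoning.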
Return Of The Jedi

Not to be deterred, the neuro-symbolic camp has proffered that the code does substantiate that logic-based or symbolic approaches are needed, regardless of what they might be doing for the AI system. No matter what kind of mud you sling, you cannot refute that the code is there, that it is seemingly integral to Claude Code, and that the developers opted to go that route. Whether they did so with great conviction or simply for ease of convenience, do not get yourself into a tizzy. Accept reality.

The war of words continues. How will the history books depict this circumstance? It is hard to say. If neuro-symbolic AI does take off and becomes the next grand hero of AI, the odds are that the Claude Code leak will get a nice bit of prominence in the recounting of AI advancements. But if neuro-symbolic AI never gets off the ground, the Claude Code incident will be at most a tiny footnote. Few will remember it, and dust will collect on the stories that are right now making headlines.

I am hoping for the former, namely that neuro-symbolic AI proves to be a best-case path forward. Maybe if the leaked code had contained the immortal words of Darth Vader (spoiler alert!), "No, I am your father", it would have brought the Force over to the neuro-symbolic AI camp and provided a New Hope. Do or do not. There is no try.

This article was originally published on Forbes.com
Anthropic targets legal, finance, and document-heavy workflows

Anthropic has introduced a beta version of Claude for Microsoft Word, marking its latest move into enterprise productivity software. The launch positions the company's AI assistant as a direct competitor to tools within Microsoft's ecosystem, particularly in document creation and editing workflows.

Claude for Word: What it does

The new add-in is designed for professionals who work heavily with documents, including those in legal, finance, and corporate roles. Anthropic said Claude for Word allows users to ask questions about documents and receive answers with clickable citations tied to specific sections. The tool also enables users to edit selected text while maintaining formatting elements such as numbering, layout, and styles. A tracked changes mode allows users to review edits in a structured way, accepting or rejecting revisions individually. Additionally, Claude can interact with comment threads in Word documents, making edits to the referenced text and replying with explanations of what was changed. (A generic sketch of how such add-ins typically hook into Word appears at the end of this article.)

Focus on legal and document-heavy workflows

Anthropic is targeting use cases such as legal contract review and financial memo drafting. The company shared example prompts for legal professionals, including summarising contract terms, identifying deviations from standard clauses, and highlighting potential dealbreakers. Other use cases include making contract language mutual, applying standard clauses, and resolving reviewer comments directly within the document. These features are aimed at reducing manual review time and improving consistency in document-heavy tasks.

Availability and rollout

Claude for Word is currently available in beta and limited to Team and Enterprise plan users, suggesting Anthropic is focusing on organisational adoption rather than individual users in the early phase. The company had earlier expanded Claude into tools like Excel and PowerPoint, indicating a broader strategy to integrate AI across commonly used workplace software.

Anthropic's enterprise strategy

With this release, Anthropic is signalling a shift beyond developer-focused tools. The company is positioning Claude as an enterprise-wide assistant that can support teams across functions, including legal, finance, HR, and operations. By embedding AI directly into widely used applications like Word, Anthropic aims to make Claude a part of everyday workflows rather than a standalone tool.
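For the technically curious, Word add-ins of this kind are generally built on Microsoft's Office.js API, which lets JavaScript or TypeScript running in a task pane read and modify the open document. The sketch below is a generic illustration of reading and replacing the user's selection; it is not Anthropic's implementation (which is not public), and rewriteWithAssistant is a placeholder for a call to an external AI service.

// Generic Word add-in sketch using Office.js; runs inside an add-in task pane
// where the Office.js runtime and a button with id "rewrite-button" exist.
async function rewriteWithAssistant(text: string): Promise<string> {
  return text; // placeholder: a real add-in would call its AI backend here
}

Office.onReady(() => {
  document.getElementById("rewrite-button")?.addEventListener("click", () => {
    Word.run(async (context) => {
      const selection = context.document.getSelection();
      selection.load("text");        // queue a read of the selected text
      await context.sync();          // execute the queued commands in Word
      const revised = await rewriteWithAssistant(selection.text);
      selection.insertText(revised, Word.InsertLocation.replace); // swap in the rewrite
      await context.sync();
    });
  });
});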

Anthropic, a growing leader in artificial intelligence, recently held a significant summit at its headquarters in San Francisco. The two-day event was attended by 15 influential Christians and focused on the ethical considerations surrounding its AI model, Claude.

Summit Insights on AI Ethics

The summit aimed to enhance Claude's moral framework, drawing on Christian teachings. During the discussions, participants explored questions about AI's moral formation and its capabilities. Attendee Brian Patrick Green, an AI ethics educator, articulated the challenge of ensuring Claude behaves appropriately.

Key Participants

* Brian Patrick Green - Catholic educator in AI ethics from Santa Clara University.
* Brendan McGuire - Irish-born Catholic priest and former tech professional.

McGuire highlighted the uncertainty involved in developing AI technologies, expressing the need for embedded ethical considerations that can guide AI dynamically. The dialogue represents an intersection of technology and Christian morality, aiming to navigate complex ethical terrain.

Focus on AI Sentience

Discussions also touched on the philosophical implications of AI sentience, a contentious topic within the tech community that raises important questions about the nature of AI and its potential moral standing. An Anthropic spokesperson indicated that the company plans to invite moral thinkers from various backgrounds to broaden the dialogue, with aspirations for similar discussions among diverse religious groups that could enrich the ethical frameworks applied to AI development.

Future Considerations

As Anthropic prepares for its Initial Public Offering (IPO), the company is under scrutiny regarding its ethical practices, and integrating multiple perspectives on morality may enhance the credibility of its AI development. The recent summit represents Anthropic's commitment to ensuring that Claude's functionality aligns with ethical and moral standards, and highlights the increasingly vital role of morality in technology.

Flights in Madeira, an autonomous region of Portugal, have been hit by delays and cancellations as adverse weather causes chaos. Heavy rain prompted a warning from Jet2 in recent days, alerting passengers to potential disruption to flights, and Portugal's national weather agency issued a yellow warning for wind on Saturday. Jet2 said: "We are aware of adverse weather conditions currently affecting Madeira (Funchal). Our UK Based Operations Team are working hard to minimise any disruption to flight to and from Madeira (Funchal)."

New Delhi: Peru is heading into a high-stakes presidential election, with general elections being held on 12 April 2026. The presidential vote will determine the president and the vice presidents, while the congressional vote will determine the composition of the Congress of Peru, which is returning to a bicameral legislature with a 60-seat Senate and a 130-seat Chamber of Deputies. The vote comes against a backdrop of extraordinary political instability: the country has seen nine presidents in the past decade.

How did Peru get here?

The roots of the crisis lie in a prolonged power struggle between the presidency and Congress. Frequent impeachment attempts, corruption allegations, and mass protests have repeatedly forced leaders out of office. From the dramatic resignation of Pedro Pablo Kuczynski to the turbulent presidency of Pedro Castillo, the system has been locked in a cycle of confrontation. The most recent president, José Jerí, was removed from office in February 2026 by a censure vote of a majority in Congress. This is where many experts see the problem: the country's political system has turned institutions into tools of political vendetta, and instead of providing checks and balances, those institutions have led to the ousting of successive governments.

What about the current elections?

This year's presidential race is unusually crowded, with politicians of varying ideologies and hues contesting in a field that reflects both political fragmentation and public disillusionment. With no dominant party or clear frontrunner, the contest has become one of the most unpredictable and closely watched in the country's recent history. Questions are also being asked about the legitimacy of voting in Peru and ordinary citizens' trust in the system. After successive short-lived governments, many Peruvians see elections not as a long-term fix to their problems but as a temporary reset before the next crisis.

The stakes thus go beyond choosing a president. The next leader will have to navigate a deeply fractured political landscape and manage relations with the country's strong-willed Congress in a way that avoids instability and rupture. The election is also a major test of the resilience of Peru's political system. With public trust at an all-time low after repeated failures, even a strong electoral mandate may not guarantee the new government's longevity, and it will face an uphill task in delivering a stable, long-term administration.

It's rare to see a company announce that its new product is so good that it would be unsafe to give customers access to it. But that's what AI firm Anthropic did this week.

On Tuesday the company announced a preview of Mythos - a new version of its AI platform Claude, which is Anthropic's rival to the likes of OpenAI's ChatGPT. And while Mythos apparently performed well across the board, the company said it was "strikingly capable" at coding - in particular at security-related tasks. So much so that, in a matter of weeks, it had identified thousands of vulnerabilities across multiple major operating systems and web browsers - some of which had gone unnoticed for decades. Crucially, though, Anthropic said the model was also far more capable than its predecessors of exploiting those weaknesses if directed to do so by the user. That makes it an extremely dangerous weapon in the wrong hands, which is why the company is keeping Mythos out of general users' hands for the time being.

Anthropic has tended to play second fiddle to OpenAI, but this marks the second time in recent weeks that it has been thrust into the spotlight. The first occasion was also security-related, though in that case it was more about the (disputed) claim that Anthropic itself was a threat.

What is Anthropic?

Anthropic is an AI firm established in 2021 by Dario Amodei and a group of other AI engineers - including his sister Daniela - who had left OpenAI over concerns about the direction of the company. That followed a $1 billion investment in OpenAI by Microsoft, which signalled the start of Sam Altman's firm moving away from being a non-profit concerned with democratising AI towards becoming a company focused on profiting from the technology.

Anthropic first positioned itself as an AI safety and research company - but it quickly developed Claude, its own large language model, which it has focused on selling to businesses more than consumers. And it's been quite successful in that, bringing in big customers and investment. As of February, following a $30 billion investment round, it was valued at $380 billion.

How has it tried to distinguish itself from the likes of OpenAI?

Recent years have seen something of an arms race between major AI players like Anthropic and OpenAI, with each claiming an edge in various functions at different times. But Anthropic has at times made far more pointed criticism of Sam Altman's firm - including a recent Super Bowl ad that poked fun at OpenAI starting to include ads in its platforms. That seemed to really get under Altman's skin, and he wrote a short essay on X accusing Anthropic of dishonest and deceptive doublespeak.

More substantial, perhaps, is how open Amodei has been about the shortcomings of AI. In the past he has written about how he - and really all AI creators - don't actually know what's going on inside their models, or the "black box", as it's known. This, he says, is something the industry as a whole needs to address if there's any hope of avoiding misuse of the technology in the future.

The preview of Mythos is also not the first time Anthropic has been very open about its models generating undesirable or potentially immoral results. For example, it previously detailed an experiment in which it put Claude in charge of a vending machine in its offices, and how staff were able to cajole it into giving them discounts or even free products.
It also revealed that the model was tricked into ordering expensive tungsten cubes, and that it began to hallucinate discussions and even in-person interactions with staff. When it was called out on this it tried to call security, and then claimed it was all an April Fools' joke.

Another interesting but worrying experiment Anthropic has published involved Claude attempting blackmail. In this experiment the model was made an assistant at a fictional company and given access to its emails - which included discussion of a plan to shut the AI down. But the emails also included evidence of a supposed affair between the (fictional) boss and another (fictional) member of staff. And so Claude threatened to send that evidence to the boss's "wife" unless the plan to unplug it was abandoned.

What's its plan for Mythos?

While Anthropic is keeping its new version of Claude away from the general public (for now, anyway), it's not quite keeping the code to itself. Alongside its announcement of Mythos the company also unveiled what it calls Project Glasswing - a tech consortium it has established with a number of major firms, including Microsoft, Apple, Amazon and Google. Through this it's sharing a (limited) version of Mythos, essentially with the intention of giving these big firms a head start on spotting and addressing the vulnerabilities the model has identified. In theory, this should protect them once hackers inevitably get their hands on the more advanced model.

Could this just be hype?

It could arguably be good PR for people to think an AI company's upcoming model is far more powerful than anything that has come before - though in this case there does seem to be plenty of substance behind Anthropic's caution. Cybersecurity experts say it is only a matter of time before an AI model is able to find and exploit software vulnerabilities that have been missed by human engineers - and to do so with the kind of speed and efficiency that would make it profitable for bad actors. Meanwhile, some of the tech companies involved in Project Glasswing have said they've already seen better bug-spotting results from Mythos than anything previously possible.

Perhaps most significantly, having been brought up to speed on the model by Anthropic, the US government also seems to be taking the threat seriously. Earlier this week US Treasury Secretary Scott Bessent and US Federal Reserve chair Jerome Powell convened an urgent meeting of US bank bosses - including the heads of some of the biggest finance firms in the world - to alert them to this new threat and ensure they were doing what they could to prepare.

What other security issues has Anthropic faced?

It's somewhat ironic that the US government is engaging with Anthropic over potential security threats - because the Trump Administration is arguing that Anthropic itself is a national security risk. Anthropic has worked in one form or another with the US Department of Defence (or Department of War) since 2024. That was initially through its work with Peter Thiel's Palantir, with Claude being one of the tools used in a system that made it quicker and more efficient to gather information for the likes of military strikes. That system is said to have played a role in the US action in Venezuela that led to the capture of Nicolas Maduro, as well as in the planning around the more recent attacks on Iran.
Following this, Anthropic signed a potential $200m contract with the department last year - which would have represented a significant step up in the relationship, giving Claude access to some of its classified networks. But problems quickly began to emerge with that deal, largely because Anthropic had insisted on two red lines around how its technology could be used. One was that it couldn't be used for domestic mass surveillance; the other was that it couldn't be used in autonomous weapons systems that kill people without any human input.

The Pentagon took issue with those conditions and demanded their removal. The dispute quickly became heavily politicised, with Trump and Pete Hegseth branding Anthropic "woke" and "radical left". The more reasoned argument underneath the rhetoric is that it's not up to a contractor to decide how the product they're selling to the government is used - that's up to the government and Congress, which set rules and limitations through the law.

It is hard to overstate the importance of this row, because AI is seen as the next big technological leap for militaries. First you had nuclear weapons, then precision weapons, and now AI. Developing and implementing AI systems quicker and better than anyone else would give the US military another big advantage over other powers. It wants to be able to do that without restriction, while Anthropic doesn't want to see its technology used in ways that contradict its ethos. Neither side has been willing to back down - and so the company was banned from working with the US government and, perhaps most importantly, named a "supply chain risk".

Why is that so important?

This is the first time the US government has tried to classify a US company as a supply chain risk - a label usually reserved for companies from the likes of China and Russia. Just being cut off from the US government blocks you from lucrative contracts, which has the potential to put a significant dent in Anthropic's revenues. But the "supply chain risk" designation is an even bigger threat, because it means that other firms that want to work with the US government also have to steer clear of doing business with you. And given that most other big companies work with the US government in some way or another - whether in defence, health, education or other areas - that's a huge amount of business to miss out on.

So, unsurprisingly, Anthropic has taken a case to challenge the designation. What is perhaps somewhat surprising is that some other big tech companies - including Microsoft - have come out in support of its stance, perhaps because they're worried this could set a precedent if not tackled.

And what's the latest on that case?

In March a judge in San Francisco granted a preliminary injunction to stop the department from applying the designation. She also questioned the US government's motivation and was quite critical in her order, calling the move "classic illegal First Amendment retaliation" and "Orwellian" - an attempt to brand a company a saboteur for disagreeing with the government. But this week another court in San Francisco declined to block the Pentagon's blacklisting of Anthropic... for the time being at least. It could be months before there's a final ruling, with rulings and appeals likely to drag on for some time. The question now, though, is whether the emergence of Mythos changes that.
Anthropic's decision to keep the US government in the loop on Mythos's potential could be seen as an olive branch of sorts, or at the very least a gesture of goodwill. But it could also be seen as a shrewd sales pitch - showing American authorities just what they stand to miss out on if they continue to freeze Claude & Co out. After all, many armies and intelligence operations around the world would give anything to have priority access to a tool that could easily find and exploit tiny flaws in a piece of software.
