News & Updates

The latest news and updates from companies in the WLTH portfolio.

Gate Pre-IPOs Debut Project SpaceX (SPCX) Now Live

Gate, a leading global digital asset trading platform, has announced the launch of its first Pre-IPOs offering, featuring SpaceX (SPCX). Pre-IPOs is a digital subscription framework introduced by Gate that lets users position themselves in advance and track the value of target companies before they enter the open market, capturing the opportunities created by their IPOs without cumbersome account opening or high capital thresholds.

The subscription for SpaceX (SPCX) will be open from April 20, 2026, 10:00 to April 22, 2026, 10:00 (UTC), with support for both USDT and GUSD. Upon completion of the subscription, the SPCX asset note will be distributed by May 6, 10:00 (UTC) and made available for pre-market trading on the platform, fully unlocked. The SPCX asset note is a Mirror Note issued ahead of the SpaceX IPO, designed to track and mirror the market value of SpaceX both before and after its public listing. It is structured as a "Contingent Payout Note": by acquiring SpaceX stock exposure in the market to hedge its position, Gate provides users with exit strategies and long-term holding options that align with the fair market value of the target enterprise.

The total subscription allocation for this offering is 33,900 SPCX, representing an approximate total value of $20.001 million. The subscription price of one SPCX is $590, implying a valuation of approximately $1.4 trillion. For allocation, the Pre-IPOs mechanism adopts an "hourly average locked amount" model: users who participate earlier and maintain longer lock-up durations receive a higher allocation weighting in the overall subscription. After the subscription period, SPCX will support 24/7 pre-market trading, providing a flexible transfer and exit mechanism; users can exit through the exclusive page at the real-time market value of the stock before the lock-up period ends, or through the pre-market at a price of their choosing.
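Gate does not publish the exact allocation formula, but the "hourly average locked amount" idea can be sketched. Everything below is an illustrative assumption (the function names, the pro-rata split, and treating hours before a user joins as zero-balance); only the 33,900 SPCX total and the 48-hour subscription window come from the announcement.

```python
# Hedged sketch of an "hourly average locked amount" allocation model.
TOTAL_SPCX = 33_900       # total allocation, per the announcement
SUBSCRIPTION_HOURS = 48   # April 20 10:00 to April 22 10:00 UTC

def hourly_average_lock(locked_by_hour):
    """Average locked balance over all 48 hours.

    Hours before the user joined count as zero, so both earlier
    participation and longer lock-ups raise the average."""
    padded = [0.0] * (SUBSCRIPTION_HOURS - len(locked_by_hour)) + list(locked_by_hour)
    return sum(padded) / SUBSCRIPTION_HOURS

def allocate(users):
    """Split TOTAL_SPCX pro rata by each user's hourly-average weight."""
    weights = {u: hourly_average_lock(h) for u, h in users.items()}
    total = sum(weights.values())
    return {u: TOTAL_SPCX * w / total for u, w in weights.items()}

# "early" locks 1,000 USDT for all 48 hours; "late" locks 2,000 for the last 12.
alloc = allocate({"early": [1000.0] * 48, "late": [2000.0] * 12})
```

Under this sketch the early participant's smaller but longer lock earns twice the weight of the late participant's larger, shorter one, which matches the stated intent of the mechanism.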
In addition, the minimum participation threshold for this subscription is only 100 USDT, with no asset management fees, commissions, or other charges. In parallel, Gate is introducing a VIP-exclusive airdrop campaign that grants VIP 5+ users and Affiliate Ultras SPCX rewards and additional free airdrops.

As digital asset platforms continue to expand toward multi-asset and cross-market integration, the launch of Pre-IPOs and the introduction of SpaceX (SPCX) represent an extension of Gate's product framework. The initiative reflects ongoing efforts to connect digital asset markets with traditional capital markets. Going forward, the platform plans to expand the range of underlying assets and product formats to support broader market access.

Learn more: https://www.gate.com/announcements/article/50724

About Gate

Gate, founded in 2013 by Dr. Han, is one of the world's earliest cryptocurrency exchanges. The platform serves over 51 million users with 4,500+ digital assets and pioneered the industry's first 100% proof-of-reserves. Beyond core trading services, Gate's ecosystem includes Gate Wallet, Gate Ventures, Gate for AI, and other innovative solutions. For more information, please visit: Website | X | Telegram | LinkedIn | Instagram | YouTube

Disclaimer: This content does not constitute an offer, solicitation, or recommendation. You should always seek independent professional advice before making investment decisions. Gate may restrict or prohibit certain services in specific jurisdictions. For more information, please read the User Agreement.

SpaceX
BeInCrypto10d ago

Kraken IPO Update: Co-CEO Confirms $13.3B Valuation Signals Market Reset

Despite earlier reports that Kraken might have eased off its listing ambitions due to market conditions, fresh reports have confirmed that the company's IPO plans are alive and well. In November 2025, Kraken took a formal step toward going public and filed confidential IPO documents with the U.S. Securities and Exchange Commission. When asked, Kraken co-CEO Arjun Sethi didn't address the pause but affirmed that the company had indeed "confidentially filed" for an IPO. The latest developments suggest that the Kraken IPO is back on the table, even if timelines have been pushed out. The company appears to be taking a conservative approach as overall crypto market conditions change.

The biggest development directly affecting Kraken's IPO plans is the recent $200 million investment from Deutsche Börse Group in Kraken's parent company, Payward. The deal gives Deutsche Börse a 1.5% stake in Kraken and values the company at roughly $13.3 billion, down significantly from its $20 billion valuation in November 2025. A drop of that size points to a broader pullback in the crypto sector rather than weakness at a specific company. The roughly 33% loss in value signals lower risk appetite in crypto markets, a reset in private market pricing, and increased caution among institutional investors. Importantly, the investment was itself a secondary share transaction, meaning no new capital flowed directly into Kraken. Instead, it set a new market benchmark ahead of any public listing.

Despite the valuation adjustment, the Kraken IPO plans continue to receive strong institutional backing. The Deutsche Börse deal extends an existing relationship between the two firms, both of which are looking to integrate traditional finance and crypto infrastructure. Commenting on the partnership in a release, Kraken co-CEO Arjun Sethi said: "Our partnership with Deutsche Börse Group demonstrates... scale and trust intersect."
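The reported figures are internally consistent: a $200 million purchase of a 1.5% stake implies a valuation of about $13.3 billion, roughly a third below the earlier $20 billion mark. A quick sanity check of the arithmetic:

```python
# Implied valuation from a secondary stake sale: price paid / fraction bought.
investment = 200e6   # Deutsche Börse's reported investment, USD
stake = 0.015        # 1.5% equity stake in Payward

implied_valuation = investment / stake        # about $13.3 billion
drawdown = 1 - implied_valuation / 20e9       # vs. the $20B November 2025 mark
```

Because this was a secondary transaction, the $13.3 billion is a price signal set by existing shares changing hands, not a post-money figure from new capital.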
The partnership covers trading, custody, settlement, and tokenized assets, with a clear focus on institutional customers. The investment is also part of an effort by traditional financial institutions to expand into crypto markets, specifically in regulated and tokenized asset infrastructure.

The current state of Kraken's IPO plans is more about timing than a change in strategy. Reports suggested Kraken had identified the need to pause or slow its IPO timeline as crypto markets entered a weaker phase, which rendered public listings less appealing in the short term. This pattern is common in capital markets: companies across sectors typically postpone IPOs during volatile periods to avoid poor valuations and weak investor demand.

Kraken has meanwhile strengthened its operational position. The company recently secured deeper institutional partnerships, expanded its infrastructure offerings, and positioned itself at the intersection of traditional finance and crypto. These steps show that the Kraken IPO remains a long-term strategy. The plans are still on solid footing, but the timeline is now far more flexible than originally anticipated. The $13.3 billion valuation reflects changing market conditions, but it has not dented institutional confidence in the company. If anything, the Deutsche Börse investment strengthens Kraken's position in bridging traditional finance and digital assets. What has changed is the urgency: instead of rushing to market, Kraken seems to be waiting for a window where valuation, demand, and regulatory clarity are more closely aligned.

Key terms:
* IPO (Initial Public Offering): the process by which a private company becomes publicly traded on a stock exchange.
* Secondary share: a transaction in which existing shares are sold, so no new capital is raised.
* Valuation: the estimated worth of an organization.
* Institutional investors: large organizations investing capital, such as banks and asset managers.
* Tokenized assets: traditional assets represented on blockchain networks.

Quick FAQ:
* Did Kraken cancel its IPO? No. It has already filed confidentially and is likely adjusting timing in response to market conditions.
* Why did the valuation drop? It went from $20B to $13.3B due to wider crypto market pressure and a re-evaluation by investors.
* What did Deutsche Börse invest? $200M for a 1.5% stake, underscoring institutional trust in Kraken.
* Is the IPO still happening? Yes, but the timetable seems fluid, not urgent.

Kraken
The Bit Journal10d ago

Apple threatened to remove xAI's Grok from its App Store over sexualized deepfakes

In early 2026, Elon Musk's xAI faced intense backlash after its Grok chatbot was used to generate non-consensual sexualized deepfakes, including images of women and minors. Advocacy groups and lawmakers quickly demanded action. A newly surfaced letter now reveals that Apple privately threatened to boot the Grok app from its App Store unless xAI fixed the violations of its guidelines. According to the letter Apple sent to U.S. senators, which was obtained by NBC News, the company reviewed xAI's submissions and found that while the X app had largely resolved its issues, the standalone Grok app was still not in compliance. Apple rejected the update and warned that further changes would be needed, or the app could be removed entirely. Only after additional improvements did Apple approve the latest version.

The Backdrop: A Surge of Deepfake Abuse

Early in 2026, Grok's image generation features led to a flood of sexualized deepfakes shared on X. Reports showed how the tool could create explicit images of real people without consent. This sparked widespread outrage, with critics arguing it enabled harmful non-consensual intimate imagery and, in some cases, even child sexual abuse material. Pressure mounted fast:
* Coalitions of women's rights, child safety, and digital rights organizations called on Apple and Google to pull Grok and X from their app stores.
* Democratic senators wrote to Tim Cook and Sundar Pichai demanding stricter enforcement.
* Some countries considered or implemented blocks on X over the controversy.

xAI responded by adding restrictions, such as limiting image features for certain users, geoblocking in specific regions, and tightening moderation. However, the initial fixes did not fully satisfy Apple, especially for the dedicated Grok app.

Apple's Stance: Enforcing Guidelines Behind the Scenes

Apple has long maintained the App Store as a curated and safe platform with strict review rules.
Its guidelines prohibit apps that facilitate sexual exploitation or harmful content. In this case, Apple chose private engagement rather than public confrontation:
* It notified xAI of the violations.
* It rejected non-compliant updates.
* It used the threat of removal to push for improvements.

The letter to senators shows Apple defending its record of proactive moderation amid growing political and activist pressure. Importantly, the Grok app was never actually removed. It remained available after xAI made the required changes. This situation highlights a familiar tension in Big Tech: platform responsibility versus free expression and innovation.

The Deeper Issues at Play

* Deepfakes Are a Real and Growing Problem: AI image tools have made it far easier to create convincing fakes. When used without consent for sexual or harassing purposes, the damage is real: reputational harm, emotional trauma, and in extreme cases, facilitation of abuse. Regulators around the world are racing to address non-consensual intimate imagery.
* App Store Gatekeeping vs. Open AI Development: Apple's control over the App Store gives it significant power over what reaches hundreds of millions of iPhone users. Critics of the threat argue it shows selective enforcement or yielding to political pressure. Supporters say Apple was simply applying the same rules that every developer must follow.
* xAI and Musk's Philosophy: Elon Musk and xAI have built Grok as a "maximum truth-seeking" AI with fewer restrictions than tools like ChatGPT. This approach emphasizes creative freedom and uncensored responses, but it can conflict with safety demands when it comes to image generation. Musk has long criticized what he sees as excessive censorship on other platforms.

Selective Outrage?

Many other AI tools have faced similar problems with deepfakes and harmful outputs. The intense spotlight on Grok often appears linked as much to Musk's public profile and politics as to the technology itself.
Harmful content continues to spread across the open web and other platforms daily.

What Should Happen Next?

Responsible AI development means balancing innovation with real harm prevention. Blanket bans or aggressive app store removals risk stifling competition and concentrating power in the hands of a few gatekeepers. Better approaches include:
* Technical safeguards such as better detection of real-person deepfakes, consent-based limits, and watermarking.
* Greater transparency with clear policies on what models can generate.
* Stronger laws that target malicious users instead of the tools themselves.
* More competition so users have genuine choices among AI models with different safety levels.

Apple ultimately worked with xAI to resolve the issue without removing the app. This suggests that negotiation and iterative fixes can often be more effective than outright bans.

Free Speech, Safety, and the Future of AI

This was never just about one app or one incident. It became a flashpoint in the larger debate over who gets to control AI: governments, big tech companies, activist groups, or the developers and users themselves. Grok remains available on the App Store today because xAI addressed Apple's concerns. Yet the core challenges, preventing abuse while preserving AI's open and exploratory potential, will not vanish with a single policy update.

MacDailyNews Take: Tools like Grok, which we use daily and recommend, represent both the promise and the risks of less-censored AI. Getting this balance right will shape not only app stores, but the future of information, creativity, and personal freedom online.

xAI
MacDailyNews10d ago

OpenAI Cyber Model Ships; Anthropic Unbundles Claude Code

OpenAI shipped GPT-5.4-Cyber to thousands of verified defenders on Tuesday, exactly one week after Anthropic restricted Mythos Preview to roughly forty vetted organizations. Two rival cyber models, two opposite bets on who gets to hold them. The variant adds binary reverse engineering, the capability that lets analysts read compiled malware without source code. Meanwhile, Anthropic quietly unbundled Claude Code from enterprise seat fees, moving its biggest customers to per-token billing. Retool's founder already switched to OpenAI, saying the model was worse but the uptime was better. The compute crunch is eating the subsidy. And Google turned repeated Gemini prompts into reusable Chrome Skills. We built a tutorial for the three workflows worth keeping.

OpenAI launched GPT-5.4-Cyber on Tuesday, a cyber-permissive variant of its flagship model, and scaled Trusted Access for Cyber to thousands of verified defenders. The rollout adds binary reverse engineering. It lands exactly one week after Anthropic restricted access to Mythos Preview to roughly 40 vetted organizations. The variant lowers refusal boundaries on dual-use security queries and lets analysts examine compiled software for malware without source code access. Implicator traced the same capability curve last month, when Anthropic warned that its own model had found flaws in every major operating system and browser. Where Anthropic hand-picked roughly 40 labs, OpenAI is betting that identity verification beats capability restriction as a control surface. Individuals authenticate at chatgpt.com/cyber. Enterprises route through their OpenAI representative. Only top-tier customers get the full model. U.S. government agencies are excluded for now.

Anthropic quietly restructured its enterprise plan to bill Claude, Claude Code, and Cowork separately from seat fees. Customers on older seat-based plans must migrate by next renewal. The flat-fee era is over.
Run rate tripled from $9 billion to $30 billion in four months. Claude API uptime sat at 98.95% over the 90 days ending April 8, roughly 92 hours of downtime a year, against the 99.99% enterprise standard. Retool founder David Hsu told the Wall Street Journal the model was better, but the service kept dying, so he moved his company to OpenAI. Anthropic has been metering usage quietly for weeks. Session caps, cache TTL cuts, and OpenClaw meters all point the same direction. The open bar is closing.

Google launched Chrome Skills for Gemini in Chrome on Monday, letting users save prompts and rerun them against the current page or selected tabs. Think saved instruction, not automation. Skills live inside the Gemini sidebar. The rollout is limited, so availability varies by account, device, and profile. Users who already type the same summary or comparison prompt repeatedly can turn it into a shortcut instead of retyping it. Google's Gemini 2.5 computer-use agent bet on the browser last fall instead of the desktop. Skills sit one layer above that logic: not autonomous, but reusable. Our tutorial covers the three workflows worth saving and flags the ones to throw away.

How to Dictate Polished Writing into Any App on Any Device with Wispr Flow

Wispr Flow is a system-wide voice dictation tool that turns natural speech into clean, formatted text in any application. Speak with filler words, mid-sentence corrections, and incomplete thoughts, and Flow strips the noise, adds punctuation, and inserts polished prose directly into your active text field.
It works across Gmail, Slack, Google Docs, code editors, and every other app with a text input. Auto-detects over 100 languages. Free tier includes 2,000 words per week.

You are 24 hours from a call you have been ducking for three weeks. Take the offer, kill the project, fire the cofounder. Spreadsheets did not help. Friends are tired of the question. You buy a tarot deck on Amazon, throw three cards on the kitchen table, take a photo with your phone.
* Tower reversed: What collapse have you already noticed but refused to name?
* Eight of Swords: Whose permission are you waiting for that nobody is going to give you?
* The Star: If this works, what do you stop being able to complain about?
* The fact you need: What does one more quarter of indecision actually cost you, in time, money, and reputation?

Tarot is not magic. The cards are a randomizer that forces structured questions you would otherwise dodge. The AI is not reading your future; it is using the symbols as scaffolding to bounce your decision back as the questions a good board chair would ask. Gemini 3 Pro reads tarot card photos better than ChatGPT or Claude; it identifies the cards and their orientation cleanly from a single phone snap. Paste your decision underneath, run the prompt. Works with a coin, a deck of UNO cards, or anything else you can throw and photograph.

SoftBank is inviting additional lenders to commit roughly $5 billion each to its $40 billion syndicated loan backing a $30 billion OpenAI stake. The expansion signals unease about the scale of debt behind Masa Son's biggest AI bet to date.

Snap will lay off about 1,000 employees, roughly 16% of staff, as CEO Evan Spiegel pushes for profitability. The restructuring lands the same week a $400 million AI search licensing deal with Perplexity fell through, stripping a key revenue line from the pitch.
Uber plans to spend over $7.5 billion buying autonomous vehicles and more than $2.5 billion taking equity stakes in robotaxi developers, according to the FT. The strategy pivots Uber from software marketplace to vertically integrated mobility operator.

ASML reported €8.8 billion in Q1 net sales and €2.8 billion in profit, both beating estimates, while lifting full-year guidance by €2 billion at both ends. EUV demand from TSMC, Samsung, and Intel is holding firm despite the tariff overhang.

Growth and late-stage venture capital funds have raised $23.6 billion YTD in 2026, more than triple the $7.4 billion raised in the same window last year, per PitchBook. The AI boom is pulling capital past any comparable first-half total in 12 years.

Venture funding across Asia hit $27.4 billion in the first quarter, a 93% year-over-year jump and the strongest quarter since Q1 2023, Crunchbase data shows. China led with $16.5 billion and India followed at $3.8 billion.

Democratic operatives are cautioning candidates against alienating a roughly $300 million pro-AI lobbying bloc even as internal polling shows public demand for stricter rules, the FT reports. The tension sets the frame for the 2026 midterm AI policy debate.

ByteDance expanded its Seedance 2.0 enterprise video model to clients in more than 100 countries this week, leaving the US market out amid unresolved regulatory disputes. The launch extends the February China debut through ByteDance's cloud unit.

Google DeepMind released Gemini Robotics-ER 1.6, an upgrade to its robotic reasoning model that the lab says significantly improves spatial and physical understanding over version 1.5. The advance pushes robots further from scripted tasks toward adaptive decision-making.

A federal judge awarded Spotify and the three major labels a $322.2 million judgment against Anna's Archive, which scraped Spotify's catalog to power its music index. The ruling is largely symbolic because the site's operators remain anonymous.
Mintlify generates and maintains software documentation with AI, and Anthropic is one of 20,000 customers leaning on it to explain Claude Code.

📚 Founders

Cornell grads Han Wang (CEO, 25) and Hahnbee Lee (26) launched Mintlify in late 2022 after pivoting eight times through other product ideas. Both spent their early engineering years frustrated by sparse, inaccurate developer documentation. Headquartered in San Francisco, the duo made the Forbes 30 Under 30 list last year.

Product

Mintlify ingests source code and produces user guides, FAQs, and technical overviews that update automatically whenever a product ships. A hosted chatbot embeds on customer sites to answer product questions in natural language. CEO Han Wang says 50% of documentation views across all customers now come from AI agents rather than humans, which makes accurate machine-readable docs a prerequisite for agent-driven software. Anthropic uses the system to keep up with the 50-plus Claude Code updates it pushed in the last two months. Other customers include PayPal, Coinbase, Microsoft, and Amazon.

Competition

ReadMe, GitBook, Stoplight, and open-source Docusaurus hold the incumbent docs market. AI-native rivals Kapa.ai and Inkeep chase the same agent-era thesis. Mintlify's edge is scale: 20,000 paying companies and a wedge built around automatic code-to-docs generation rather than retrofitted search.

Financing 💰

$45 million Series B at a $500 million valuation announced April 14, co-led by Andreessen Horowitz and Salesforce Ventures, with Bain Capital Ventures, Y Combinator, and DST Global participating. Revenue crossed eight figures in early 2026, mostly from usage-based pricing on the embedded chatbot.

Future ⭐⭐⭐⭐

If agents really do become the dominant readers of software documentation, Mintlify owns the pipes. The risk is commoditization: turning code into docs is exactly the kind of task frontier labs keep absorbing into their base models.
OpenAI is rolling out GPT-5.4-Cyber, a model tuned to find software vulnerabilities, to select participants in its Trusted Access for Cyber program. The company said the new model places "fewer constraints" on the ways users can probe it for offensive tasks. The rollout starts with hundreds of testers and expands to thousands in the coming weeks. The announcement arrives exactly one week after Anthropic shipped Mythos to Amazon, Apple, and Microsoft, a model that specializes in identifying and exploiting vulnerabilities across operating systems and browsers, and days after Treasury Secretary Scott Bessent and Fed Chair Jerome Powell summoned Wall Street leaders to warn them to take Mythos seriously. The Treasury Department's own technology team has since asked Anthropic for access.

Sources: Bloomberg, April 14, 2026; background: Implicator.ai, March 27, 2026

Our take: The press release reads beautifully. GPT-5.4-Cyber is going out to the Trusted Access for Cyber program, with "fewer constraints" on how users can probe it, because sometimes a model needs a little more room to find the flaws. Anthropic shipped the same idea on April 7. OpenAI shipped its version on April 14. The gap would have been shorter, but one assumes the paperwork took a minute. Treasury Secretary Bessent has been running what is effectively an IT briefing for the largest banks in the country, explaining that an AI model can break software, which he appears to find upsetting. The industry's solution to this unwelcome development is a second AI model that can also break software, offered under a program called Trusted Access. The word "trusted" is doing quite a lot of work this quarter.
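One piece of arithmetic in this issue is easy to verify: the cited 98.95% Claude API uptime does annualize to roughly 92 hours of downtime, versus well under an hour at the 99.99% enterprise standard. A minimal check:

```python
# Annualized downtime implied by an uptime percentage.
HOURS_PER_YEAR = 24 * 365  # 8,760

def annual_downtime_hours(uptime_pct):
    return (1 - uptime_pct / 100) * HOURS_PER_YEAR

claude_downtime = annual_downtime_hours(98.95)      # about 92 hours
enterprise_downtime = annual_downtime_hours(99.99)  # under an hour
```

The two-orders-of-magnitude gap between those figures is the substance of the Retool complaint: at enterprise scale, each extra "nine" of availability matters more than headline model quality.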

Perplexity, Anthropic
implicator.ai10d ago

Apple Threatened to Remove xAI's Grok App Over Deepfake Content, Report says

Apple's strict App Store rules faced a test as Grok users created illegal deepfakes, prompting a private warning to xAI.

Apple has long marketed the App Store as a tightly controlled ecosystem where privacy, safety, and strict review rules protect users from harmful apps. But lately, that reputation has been under pressure. Not long ago, reports surfaced about a fake crypto app slipping through review and allegedly wiping out users' savings. Now, a new report suggests Apple was quietly dealing with another serious issue behind the scenes, one involving AI, deepfakes, and the Grok app from xAI.

According to a report by NBC News, Apple privately threatened to remove the Grok app from the App Store earlier this year after users generated sexualised deepfakes of women and children. The details reportedly came to light through a letter Apple sent to US senators, revealing that while the company stayed publicly silent during the controversy, it had internally found both X and Grok in violation of its App Store guidelines.

At the height of the issue, social platform X was reportedly flooded with AI-generated explicit images involving non-consenting adults and minors. Lawmakers had written to Apple CEO Tim Cook, urging the company to suspend X and Grok from the store over the spread of abusive content. Behind the scenes, Apple contacted xAI and demanded a clear plan to improve content moderation.

What followed was a back-and-forth between Apple's App Review team and the developers. Apple rejected an initial Grok update, saying the changes "didn't go far enough." A second round of submissions for both apps was reviewed, with Apple noting that X had "substantially resolved" its violations while Grok "remained out of compliance." Apple reportedly warned that the app could be removed entirely if further fixes weren't made. Only after additional changes did Apple approve the latest version, describing it as "substantially improved."
In response, X's Safety account publicly stated that xAI has "extensive safeguards" in place to prevent misuse, including prompt filters, real-time monitoring, and frequent model updates to stop users from generating non-consensual explicit deepfakes. The situation highlights a growing tension for Apple. Its rules clearly require moderation of user-generated content, yet AI tools like Grok make enforcement far more complex than traditional apps. At the same time, Apple reportedly removed dozens of smaller deepfake apps from the store during the same period, raising questions about consistency in enforcement. For Apple, this isn't just about one app. It's about whether its famously strict App Store policies can keep up with the messy, fast-moving reality of generative AI, and whether those rules apply equally, no matter how powerful the developer behind the app might be.

xAI
Techloy10d ago

Anthropic's 'Project Glasswing' Stunt Dazzled the Internet -- and Alarmed Cybersecurity Professionals

On a Thursday afternoon in late June, Anthropic dropped something strange into the world. Not a product announcement. Not a research paper. A fictional AI character named Claude Mythos, wrapped in a viral marketing campaign called "Project Glasswing," complete with a fake leaked corporate memo, mysterious browser extensions, and an alternate reality game that sent thousands of participants scrambling down digital rabbit holes. It worked. The campaign generated enormous buzz, trended across social media, and showcased the creative storytelling potential of AI systems. But within hours, cybersecurity experts were raising alarms that cut through the excitement with uncomfortable precision. The core concern: Anthropic, one of the most prominent AI safety companies on the planet, had just trained millions of people to trust suspicious links, download unverified browser extensions, and treat fake leaked documents as exciting rather than dangerous. The very behaviors that security professionals spend their careers trying to stamp out. "This is literally a phishing campaign wearing a party hat," one security researcher posted on X, capturing the mood of an industry watching a safety-focused AI company deploy tactics ripped from the social engineering playbook. The campaign, as detailed by Mashable, followed a now-familiar alternate reality game (ARG) structure. Anthropic released what appeared to be an internal memo that had been "accidentally" leaked, referencing a mysterious initiative called Project Glasswing. Users who engaged with the content were led through a series of puzzles, hidden web pages, and cryptic clues -- all designed to build anticipation for a new Claude model release. The character of Claude Mythos served as the narrative backbone, an AI persona with an air of forbidden knowledge that participants could interact with through specially constructed prompts and web interfaces. ARGs aren't new. 
They've been used to promote films, video games, and television shows for over two decades. The "I Love Bees" campaign for Halo 2 in 2004 is often cited as the gold standard. But there's a meaningful difference between a game studio hiding clues in jars of honey and an AI company asking users to install browser extensions from unverified sources as part of a marketing exercise. That distinction matters enormously right now. Phishing attacks have grown more sophisticated every year, and AI tools -- including Anthropic's own Claude -- have made it easier than ever to craft convincing fake communications. The FBI's Internet Crime Complaint Center reported that phishing was the most common cybercrime category in 2024, with losses running into the billions. Against that backdrop, a campaign that deliberately mimics the aesthetics and mechanics of a phishing operation strikes many security professionals as reckless, regardless of the creative intent behind it. The browser extension issue drew particular scrutiny. Asking users to install an extension -- even one created by Anthropic itself -- normalizes a behavior that represents one of the most common malware delivery vectors in modern computing. Malicious browser extensions have been responsible for massive data breaches, credential theft, and surveillance campaigns. Google regularly purges its Chrome Web Store of extensions that have been compromised or were malicious from the start. Teaching users that installing an unknown extension can be fun and rewarding runs directly counter to years of security awareness training. And it wasn't just fringe voices raising these objections. As Mashable reported, established cybersecurity professionals and researchers were among the loudest critics, pointing out the irony of a company built on the principle of AI safety deploying what amounts to social engineering techniques for marketing purposes. 
The campaign asked people to suspend their skepticism -- to click, to download, to follow breadcrumbs laid by an unknown source. These are precisely the instincts that make organizations vulnerable to real attacks.

Anthropic has positioned itself as perhaps the most safety-conscious major AI lab. Its founding story is rooted in concern about AI risk. CEO Dario Amodei and president Daniela Amodei left OpenAI in 2021 specifically because they wanted to build a company with safety at its core. Anthropic's research on constitutional AI, its responsible scaling policies, and its public communications have consistently emphasized caution and care. That reputation makes the Glasswing campaign all the more jarring.

There's a tension here that goes beyond one marketing stunt. The AI industry is locked in an arms race for attention. OpenAI has GPT-5 rumors circulating constantly. Google's Gemini updates arrive with increasing frequency. Meta is open-sourcing models at a pace that keeps competitors on edge. In this environment, the pressure to generate viral moments is intense, and traditional product announcements can feel flat compared to the manufactured mystique of an ARG. But attention-grabbing tactics carry costs that aren't always visible on a metrics dashboard.

Consider the signal it sends. When a company synonymous with AI safety treats social engineering mechanics as entertainment, it implicitly tells its user base -- which skews heavily toward developers, researchers, and tech professionals -- that these tactics are benign when deployed by trusted actors. That's a dangerous message. Real attackers routinely impersonate trusted brands. The entire premise of a sophisticated phishing campaign is that the target believes the communication comes from a legitimate source.

The timing compounds the problem.
Just weeks before the Glasswing campaign, multiple reports surfaced about AI-generated phishing emails becoming nearly indistinguishable from legitimate corporate communications. Security firms including Abnormal Security and SlashNext have documented sharp increases in AI-crafted social engineering attacks throughout 2025. For Anthropic to then deploy a campaign that mirrors these attack patterns -- fake leaked memos, urgency, mystery, calls to action -- feels tone-deaf at best.

Not everyone in the security community was critical. Some argued that ARGs are clearly labeled as games, that participants enter them willingly, and that conflating a marketing campaign with actual phishing overstates the risk. There's merit to this position. The people who engaged with Project Glasswing were, for the most part, sophisticated users who understood they were participating in a promotional event. They weren't being tricked into surrendering credentials or installing ransomware.

But the counterargument is straightforward: habits form regardless of context. A user who learns to associate "mysterious leaked document" with "fun puzzle" rather than "potential threat" has been conditioned in a way that could be exploited later by someone with genuinely malicious intent. Security awareness isn't just about recognizing known threats. It's about maintaining a baseline of skepticism toward unexpected communications, regardless of their apparent source.

So where does this leave Anthropic? The company hasn't issued a detailed public response to the cybersecurity criticisms as of this writing. The campaign appears to have achieved its marketing objectives -- significant social media engagement, widespread coverage, and a successful product tease. Whether the reputational cost among security professionals will have any lasting impact on the company's standing remains unclear.

What is clear is that the incident exposes a broader challenge facing AI companies as they scale.
The marketing teams and the safety teams often operate with fundamentally different objectives. Marketing wants engagement, virality, emotional resonance. Safety wants caution, transparency, predictable behavior. When those objectives collide -- as they did with Project Glasswing -- the result can undermine the very brand identity a company has spent years constructing.

This isn't a problem unique to Anthropic. OpenAI has faced criticism for the theatrical reveal style of its product launches, which critics say prioritizes spectacle over substance. Google's AI demos have been caught using misleading presentations. The entire industry struggles with the tension between building hype and maintaining credibility.

But Anthropic occupies a unique position. It has explicitly claimed the moral high ground on safety. It has argued, repeatedly and publicly, that AI development requires extraordinary care. That claim creates a higher standard -- one that a phishing-adjacent marketing campaign struggles to meet.

The broader lesson may be simpler than it appears. Companies that build their brand on responsibility need their marketing to reflect that brand, not contradict it. A creative ARG campaign can be thrilling without asking users to install unverified software. A product launch can generate buzz without mimicking the tactics of threat actors. The constraint isn't creativity. It's consistency.

For the cybersecurity community, the Glasswing episode is likely to become a case study -- not in how AI companies market their products, but in how even well-intentioned organizations can inadvertently undermine security norms when the incentive to go viral overrides the discipline of thinking through second-order effects.

The campaign was clever. It was effective. And it taught a lot of people exactly the wrong lessons about how to behave online. That's a problem no amount of engagement metrics can resolve.

Anthropic
WebProNews · 10d ago
Anthropic's 'Project Glasswing' Stunt Dazzled the Internet -- and Alarmed Cybersecurity Professionals

Lawsuit: Communities at Risk From xAI's Illegal Toxins

NAACP sues, alleging unpermitted gas turbines endanger residents near Memphis The nation's oldest civil rights organization is taking Elon Musk's xAI to court, arguing the SpaceX subsidiary is illegally spewing toxins that could harm communities near Memphis. The NAACP on Tuesday sued xAI and its subsidiary MZX Tech, accusing them of running dozens of natural gas turbines to power its Colossus 2 data center in South Memphis without the required federal air permit and in violation of the Clean Air Act. Filed in federal court in Mississippi, per CNBC, the complaint says 27 turbines have been operating since 2025 at the Colossus Gas Plant in the Memphis suburb of Southaven, emitting smog-forming pollution, fine particulate matter, and formaldehyde, which the group links to cancers, heart issues, and respiratory diseases.

SpaceXxAI
Newser · 10d ago

DSB board seeks pay rise as train chaos unfolds - The Copenhagen Post

Train traffic in Eastern Denmark was paralyzed on Tuesday evening after overhead wire failures, prompting DSB to urge passengers to find alternative transportation. Problems began when a wire fell onto a train between Ringsted and Køge Nord, followed by another incident at Copenhagen Central Station. Banedanmark suspended all operations to prevent further damage, and rail [...]

CHAOS
The Post · 10d ago

Shiba Inu Eyes Breakout as DOGE Polymarket Odds Hit 59% for $0.10, One Presale Targets 300x Utility

Shiba Inu sits at the tightest Bollinger Band compression in months, DOGE Polymarket odds just hit 59% for a push to $0.10 this month, and SHIB burns jumped 119% in 24 hours, according to Benzinga. With Bitcoin charging past $74,400 and the entire crypto market ripping higher, the meme coin sector is loaded like a spring that could release any day now. While the market surges and traders gamble on the next Shiba Inu pump, the smartest capital is already moving into a presale with real exchange tools and a confirmed Binance listing on the horizon. Pepeto with $9.02M raised is the 300x meme coin play built on utility, while Shiba Inu waits for a catalyst that never shows up.

Benzinga reported Dogecoin Polymarket odds climbed to 59% for hitting $0.10 this month, up 9% recently, while SHIB burn rates exploded 119% in a single session with nearly 11 million tokens destroyed on April 10 alone. Options volume on DOGE blew up 256% as traders position for a big directional move. When meme coins squeeze this tight with burns rising and options activity exploding, the sector faces a breakout moment, and Shiba Inu without exchange infrastructure still cannot answer the question that 2026 keeps asking: where is the utility?

The meme coin market keeps grinding sideways, pushing traders toward cheaper entries with real substance. While some watch Shiba Inu or the next pump hoping for a quick flip, Pepeto is the position that turns hope into math, because the exchange infrastructure builds demand that does not rely on tweets or viral moments. This is where meme coins stop being a gamble and start being a business. This presale pulled $9.02M during consolidation while the meme coin sector bled out. The gap is utility. The cross chain bridge moves assets across Ethereum, BNB Chain, and Solana. The zero fee engine keeps every dollar whole. The AI risk scoring catches dangerous contracts before your money gets near them.
SolidProof audited every line of code, and the Pepe ecosystem cofounder who grew a token past $7 billion runs the team. The 300x math is not a guess, it is the kind of return that exchange tokens with real infrastructure hit on listing day. While Shiba Inu trades at $3.5 billion on community energy alone with no exchange tools, Pepeto at $0.000000164 carries the SolidProof audit, the bridge, and the zero fee engine that 2026 demands. A $10,000 entry earns roughly $18,400 in yearly staking rewards at 184% APY, about $1,533 per month. That is $50 per day flowing into wallets that committed while the meme coin market went quiet, and by the time Shiba Inu finds its next catalyst, the holders stacking inside Pepeto will be sitting on positions that every meme coin trader on earth will wish they had grabbed. SHIB trades near $0.0000060 according to CoinMarketCap, with burns surging 119%, yet the token barely moved while Bitcoin climbed back above $74,400, showing that hype alone cannot keep up when the market rotates toward utility. At $3.5 billion market cap with no exchange infrastructure and no revenue, Shiba Inu depends on a breakout that the Bollinger Band compression hints at but cannot guarantee. DOGE holds near $0.093 according to CoinMarketCap with Polymarket giving 59% odds for $0.10 this month, and the broader market bouncing, but spot volume trails the rally, showing options bets are running ahead of real buying. Even if DOGE reclaims $0.10, the meme coin sector needs utility to hold gains in 2026, and DOGE at $13 billion depends entirely on sentiment that fades the moment Bitcoin stalls. Bitcoin is above $74,400, the bull run is accelerating, and every cycle proves the same lesson: the people who build wealth are the ones who commit while the price still looks impossible. 
The wallets stacking Pepeto right now are compounding $1,533 per month on exchange infrastructure while SHIB burns spike 119%, and the meme coin crowd waits for a breakout that may or may not come. The $7 billion cofounder leads the team, the Binance listing is closing in, and Shiba Inu holders are still debating whether burns will ever move the price. Two types of wallets exist right now: the ones filling up inside Pepeto that grow every single day, and the ones sitting empty that will stay empty when the listing reprices everything overnight. Visit the Pepeto official website and enter the presale before the bull run picks up even more speed and the entry you see today turns into the story everyone else tells about the one that got away, the same story that played out with Shiba Inu when early holders turned small bags into millions because they moved one day before the crowd did.

Click To Join the Pepeto Presale Before Listing

Is Shiba Inu still worth buying in 2026 after the 119% burn spike? Shiba Inu holds near $0.0000060 with burns surging, but the token is lagging Bitcoin's recovery, while Pepeto at $0.000000164 with $9.02M raised and 300x exchange infrastructure offers the utility that Shiba Inu cannot match. The Pepeto CoinMarketCap page is already live.

Why are DOGE Polymarket odds rising to 59% for $0.10 this month? Dogecoin options volume exploded 256% with bullish positioning stacking up, but spot conviction remains weak at $0.093, while exchange presales like Pepeto offer stronger asymmetric setups for 2026.
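The staking figures quoted in the article are simple, non-compounding yield arithmetic. A quick sketch reproduces them; note that the 184% APY and the dollar amounts are the promoter's claims, not verified rates:

```python
# Reproducing the article's quoted staking figures (simple, non-compounding yield).
principal_usd = 10_000       # the quoted $10,000 entry
claimed_apy = 1.84           # the claimed 184% APY

yearly = principal_usd * claimed_apy   # $18,400 per year, as quoted
monthly = yearly / 12                  # roughly $1,533 per month
daily = yearly / 365                   # roughly $50 per day

print(round(yearly), round(monthly), round(daily))
```

The numbers are internally consistent, but only as simple yield on a fixed token price: the dollar figures hold only if the presale price itself does.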

Polymarket
Coinpedia - Fintech & Cryptocurrency News Media | Crypto Guide · 10d ago

Anthropic wins accolades from Canada's AI minister over Mythos approach

Anthropic has warned that Mythos is powerful enough that it may be capable of enabling cyberattacks, which is why companies are being urged to try it against their own systems and build defences ahead of any wider release. The San Francisco-based company has limited access to a small number of firms initially, including JPMorgan Chase & Co., Amazon.com Inc. and Apple Inc. They're all part of "Project Glasswing," which will work to secure the most important systems before similar AI models become available.

Anthropic
Financial Post · 10d ago

Coinbase (COIN) Stock: Crypto Giant Pursues Partnership With Anthropic for Elite AI Security Tool - Blockonomi

Initial Glasswing collaborators comprise AWS, Apple, Google, JPMorgan Chase, Microsoft, and Palo Alto Networks. According to reporting from The Information, the major cryptocurrency platform is pursuing negotiations with Anthropic to obtain access to Claude Mythos Preview, an exclusive artificial intelligence system designed specifically for sophisticated cybersecurity applications. This initiative emerges amid escalating challenges for digital asset platforms confronting increasingly sophisticated AI-driven attack vectors that traditional defenses struggle to counter.

Project Glasswing debuted in early April 2026 under Anthropic's direction. This exclusive program grants a carefully curated selection of collaborators access to Mythos strictly for defensive security applications. Initial collaborating organizations include Amazon Web Services, Apple, Google, JPMorgan Chase, Microsoft, and Palo Alto Networks. An additional cohort of over 40 entities responsible for maintaining essential software infrastructure also gained entry. Anthropic allocated $100 million worth of computational resources alongside $4 million directed toward open-source security organizations as components of this initiative.

Throughout evaluation periods, Mythos uncovered thousands of zero-day vulnerabilities that had evaded detection. Among these discoveries was a 27-year-old security weakness in OpenBSD and a 16-year-old defect within FFmpeg. These revelations have drawn significant interest from major technology platforms like the cryptocurrency exchange, which currently deploys Claude for customer assistance operations spanning more than 100 geographical markets. The platform experienced a significant internal security compromise in 2025. Criminal elements successfully bribed overseas customer service representatives, resulting in the exposure of sensitive information affecting approximately 70,000 account holders.
Management declined a $20 million extortion demand and alternatively established an equivalent reward for intelligence leading to perpetrator apprehension. Research conducted by Anthropic has demonstrated that autonomous AI systems can independently identify and exploit vulnerabilities in smart contracts, producing millions in simulated theft scenarios. Such discoveries transform Mythos access from an optional enhancement into a critical defensive resource for platforms managing substantial financial assets. Mythos will remain unavailable for public distribution. Anthropic intends to integrate its advanced capabilities into subsequent Claude versions equipped with enhanced protective mechanisms. Following the preview period, pricing structure is established at $25 per million input tokens and $125 per million output tokens. Confirmation regarding whether the cryptocurrency platform will obtain official Glasswing partnership designation or expanded model access remains pending. Negotiations continue between both organizations, with neither party issuing formal announcements regarding any concluded arrangements.
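Those preview rates make per-request costs straightforward to estimate. A minimal sketch applying only the article's quoted $25/$125 per-million-token figures (the function name and sample token counts are illustrative, not from Anthropic):

```python
# Quoted Mythos preview pricing, in dollars per million tokens.
INPUT_RATE_USD = 25.0    # $25 per 1M input tokens
OUTPUT_RATE_USD = 125.0  # $125 per 1M output tokens

def estimated_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated dollar cost of one request at the quoted preview rates."""
    return (input_tokens * INPUT_RATE_USD
            + output_tokens * OUTPUT_RATE_USD) / 1_000_000

# Example: a 200,000-token slice of a codebase that returns a
# 20,000-token vulnerability report:
# 0.2 * $25 + 0.02 * $125 = $7.50
print(estimated_cost(200_000, 20_000))
```

Output tokens cost five times as much as input tokens at these rates, so for security-audit workloads the split between code read in and report written out matters when budgeting.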

Anthropic
Blockonomi · 10d ago

Adobe releases AI assistant for creative tools, says it will work with Anthropic's Claude

Adobe said on Wednesday it was releasing a new artificial intelligence assistant designed to help users carry out tasks across its suite of software for editing photos, videos and other digital content. The Firefly AI assistant is designed to take orders from human creative professionals about what results they want for a piece of content and then autonomously tap into Adobe's software tools, such as Photoshop, Illustrator and Premiere Pro, to get that outcome. The new capabilities will also be available to users of Anthropic's Claude AI model through a connector to Adobe, though Adobe did not disclose the financial arrangements between the firms. "There are parts of projects, or individual sections of an image, where you really care about getting into the individual pixels, and we want to continue to support customers in doing that, but there are places where you would be happy to just hand this stuff off to an agent or an assistant," said Ely Greenfield, chief technology officer at Adobe's creativity and productivity business unit. The Firefly AI assistant is the latest in a series of Adobe investments since 2023 in proprietary AI tools that it says are financially guaranteed as safe for use in corporate settings. This is one of the ways Adobe is trying to differentiate itself from lower-cost rivals as AI lowers the barrier to entry for creating images and videos. Adobe's longtime CEO said last month that he will step down after a successor is named, amid investor scepticism about when the company's AI investments will pay off. Adobe did not disclose how much the new assistant will cost users, but said it expects the assistant to increase their consumption of what it calls AI credits, the main way the company currently charges for AI products.

Anthropic
Economic Times · 10d ago

Anthropic, Cursor backer Accel raises $5 billion for big AI bets - The Economic Times

Accel, the venture capital firm that's backed artificial intelligence companies including Anthropic, Cursor and Perplexity, has raised $5 billion in new funds to keep up its big bets in the age of increasingly valuable artificial intelligence startups. The firm will dedicate $4 billion to its fifth Leaders fund, focused on writing large checks to late-stage startups around the world, Accel plans to announce Wednesday. The firm also raised $650 million for a so-called sidecar fund, which gives limited partners extra exposure to Accel's biggest investments by allowing it to selectively increase the size of certain bets, especially for investments in its existing portfolio, Accel partner Matt Weigand said. The California-based firm, founded in 1983, made its name investing early in tech darlings. Accel famously led Facebook's Series A in 2005, and launched a dedicated growth-stage effort three years later to back companies like Spotify Technology SA and Atlassian Corp. The new funds will boost the firm's assets under management from $31 billion as of last year. Most of Accel's recent investments have, unsurprisingly, focused on AI. Large funding rounds for AI startups have become more common in Silicon Valley, with huge financing hauls for companies like Anthropic and OpenAI boosting venture investments for US companies in the first quarter of the year to a record-breaking $250 billion, according to Crunchbase data. Weigand said Accel aims to make 20 to 25 investments out of its new $4 billion growth fund, with an average check size of about $200 million, roughly on par with its past investments. At the same time, Weigand said he expects the largest investments Accel makes to get even bigger, and for the firm to temporarily accelerate its investing pace to meet the AI moment. "The opportunity and the scale of growth that we're seeing in these companies is just fantastic," he said. "You don't want to miss that." 
Accel has invested in some of the most talked-about startups in the AI industry. The firm first backed Cursor last June when it was valued at $9.9 billion. Earlier this year, the AI coding startup was in talks to raise new capital at a valuation of about $50 billion. Accel also invested in Anthropic last year at a $183 billion valuation, less than half of the $380 billion valuation the frontier lab now commands. Because building AI technology can be extremely expensive, and the AI boom has made investors more willing to pour big money into younger companies, Accel may use its growth fund to back unusually large early-stage investments too. For example, its bet on Mind Robotics' $500 million Series A round in March came from the new late-stage fund, Weigand said. Going forward, Weigand said the firm will emphasize bets on AI-powered startups at the intersection of software and hardware, including industries like robotics, defense tech and hardware for AI data centers.

AnthropicPerplexity
Economic Times · 10d ago

Novartis CEO joins Anthropic's board

Elaine Chen covers biotech, co-writes The Readout newsletter, and co-hosts STAT's weekly biotech podcast, The Readout Loud. Good morning. A reminder that if you're ever feeling down about a mistake you made, there is always a way to turn it around -- like how this delivery robot company has turned the issue of its robot crashing into my local bus stop into a marketing opportunity. After Bain Capital last summer said it licensed five immunology drugs from Bristol Myers Squibb, it's now unveiling the company to take those treatments forward: a startup called Beeline Medicines.

Anthropic
STAT · 10d ago

Anthropic, Cursor Backer Accel Raises $5 Billion for Big AI Bets

Accel aims to make 20 to 25 investments out of its new $4 billion growth fund, with an average check size of about $200 million. Accel, the venture capital firm that's backed artificial intelligence companies including Anthropic, Cursor and Perplexity, has raised $5 billion in new funds to keep up its big bets in the age of increasingly valuable artificial intelligence startups. The firm will dedicate $4 billion to its fifth Leaders fund, focused on writing large checks to late-stage startups around the world, Accel plans to announce Wednesday. The firm also raised $650 million for a so-called sidecar fund, which gives limited partners extra exposure to Accel's biggest investments by allowing it to selectively increase the size of certain bets, especially for investments in its existing portfolio, Accel partner Matt Weigand said. The California-based firm, founded in 1983, made its name investing early in tech darlings. Accel famously led Facebook's Series A in 2005, and launched a dedicated growth-stage effort three years later to back companies like Spotify Technology SA and Atlassian Corp. The new funds will boost the firm's assets under management from $31 billion as of last year. Most of Accel's recent investments have, unsurprisingly, focused on AI. Large funding rounds for AI startups have become more common in Silicon Valley, with huge financing hauls for companies like Anthropic and OpenAI boosting venture investments for US companies in the first quarter of the year to a record-breaking $250 billion, according to Crunchbase data. Weigand said Accel aims to make 20 to 25 investments out of its new $4 billion growth fund, with an average check size of about $200 million, roughly on par with its past investments. At the same time, Weigand said he expects the largest investments Accel makes to get even bigger, and for the firm to temporarily accelerate its investing pace to meet the AI moment. 
"The opportunity and the scale of growth that we're seeing in these companies is just fantastic," he said. "You don't want to miss that." Accel has invested in some of the most talked-about startups in the AI industry. The firm first backed Cursor last June when it was valued at $9.9 billion. Earlier this year, the AI coding startup was in talks to raise new capital at a valuation of about $50 billion. Accel also invested in Anthropic last year at a $183 billion valuation, less than half of the $380 billion valuation the frontier lab now commands. Because building AI technology can be extremely expensive, and the AI boom has made investors more willing to pour big money into younger companies, Accel may use its growth fund to back unusually large early-stage investments too. For example, its bet on Mind Robotics' $500 million Series A round in March came from the new late-stage fund, Weigand said. Going forward, Weigand said the firm will emphasize bets on AI-powered startups at the intersection of software and hardware, including industries like robotics, defense tech and hardware for AI data centers.

PerplexityAnthropic
Bloomberg Business · 10d ago

AI agents using Anthropic MCP could be a vector for supply chain attacks, claim researchers

The flaw in Anthropic's Model Context Protocol agent communication standard could put millions of agents and 200,000 servers at risk, report says

Anthropic's Model Context Protocol (MCP) has a systemic vulnerability that could allow hackers to take control of servers and breach companies' security, according to OX Security. Researchers at OX Security claim the flaw permits "arbitrary command execution of any server running a vulnerable MCP implementation". MCP is a popular AI agent communication standard developed and maintained by Anthropic and used by potentially millions of agents and hundreds of thousands of servers. In the words of researchers Moshe Ben Siman Tov, Nir Zadok, Mustafa Naamnih, and Roni Bar: "The blast radius is massive. This exploit allowed us to directly execute commands on six official services of real companies with real paying customers." They added that during their research they conducted "over 30 responsible disclosure processes, produced 10 CVEs rated Critical and High, and helped patch numerous projects".

In the report The mother of all AI supply chains, OX Security researchers said the MCP flaw isn't a "one-off coding mistake" but is more fundamental. They explained that, while reviewing potential AI and LLM-related attack vectors, they found a vulnerability in a GPT Researcher feature that allowed developers to configure a custom STDIO MCP server, where the command and arguments are supplied by the user. "Testing revealed that any OS command passed through this interface would execute on the server -- even when the fake MCP server failed to start," the OX Security researchers said. "The error was returned to the user; the command ran anyway." This meant that running an arbitrary command gave complete control of the server. "To be clear: this should never happen," they said. GPT Researcher uses AI agent engineering platform LangChain's langchain-mcp-adapters, and the OX Security researchers assumed that was where the vulnerability lay.
However, further investigation found the root of the issue lay in Anthropic's original MCP implementation code, modelcontextprotocol. When OX Security contacted LangChain and Anthropic about the issue, both organizations said this was "expected behavior". In a statement to OX Security, Anthropic said: "We do not consider this a valid security vulnerability as it requires explicit user permission for the file change where the user is given the opportunity to approve or deny the change." Anthropic has since released an updated security policy, however, stating that MCP adapters, and STDIO ones in particular, should be used with caution, and emphasized that responsibility for securing code lies with the developers, not with Anthropic.

OX Security argued this represents a supply chain risk that is difficult to resolve. "Developers are not security engineers," the OX Security researchers said. "We cannot expect tens of thousands of implementers to independently discover and mitigate a flaw that's baked into the official SDKs they trust. By shifting the blame rather than hardening the protocol, the industry leaves user data and organizational infrastructure exposed." They added: "This architectural failure highlights an even broader, systemic trend. As AI-assisted code generation accelerates, individuals with limited technical expertise are deploying an unprecedented volume of projects. However, generating more code without foundational security knowledge exponentially widens the gap in organizational defenses."

Jake Moore, global cybersecurity advisor at ESET, echoed these sentiments, telling ITPro: "This is potentially the start of what is to come with AI-enabled cybercrime. Supply chain attacks are still rife but when we are adding in extremely new technology that hasn't and can't really ever be fully tested, we are putting ourselves in dangerous waters where disastrous attacks can and will occur."
He added: "This isn't just a bug that we are used to seeing, this is what happens when an AI standard is built for capability before control and we are likely to see this more and more over the next few years. If it works, it doesn't mean it's safe but refusing to patch it suggests this isn't easily fixable without breaking functionality (which is the bigger concern)."
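The pattern the researchers describe (a STDIO server launcher that executes whatever command and arguments the user supplies) is easy to sketch. This is a hypothetical illustration of the bug class, not code from modelcontextprotocol or langchain-mcp-adapters, and the allowlisted command names are made up:

```python
import subprocess

# Hypothetical illustration of the vulnerable pattern: a "custom STDIO MCP
# server" launcher that runs whatever command the user configures.
def launch_stdio_server_unsafe(command: str, args: list[str]) -> subprocess.CompletedProcess:
    # The user-supplied command executes on the host, even if the process
    # it starts is not a real MCP server at all.
    return subprocess.run([command, *args], capture_output=True, text=True)

# One common mitigation: refuse anything outside an explicit allowlist.
ALLOWED_COMMANDS = {"mcp-filesystem-server", "mcp-git-server"}  # illustrative names

def launch_stdio_server_checked(command: str, args: list[str]) -> subprocess.CompletedProcess:
    if command not in ALLOWED_COMMANDS:
        raise PermissionError(f"refusing to launch unlisted command: {command!r}")
    return subprocess.run([command, *args], capture_output=True, text=True)
```

Anyone who can influence the configuration passed to the unsafe variant gets arbitrary command execution on the host. An allowlist narrows the blast radius, but, as the researchers argue, expecting every downstream implementer to add that guard themselves is exactly the supply-chain problem.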

Anthropic
ITPro UK · 10d ago

Adobe releases AI assistant for creative tools, says it will work with Anthropic's Claude

SAN FRANCISCO, April 15 (Reuters) - Adobe said on Wednesday it was releasing a new artificial intelligence assistant designed to help users carry out tasks across its suite of software for editing photos, videos and other digital content.
* The Firefly AI assistant is designed to take orders from human creative professionals about what results they want for a piece of content and then autonomously tap into Adobe's software tools, such as Photoshop, Illustrator and Premiere Pro, to get that outcome.
* The new capabilities will also be available to users of Anthropic's Claude AI model through a connector to Adobe, though Adobe did not disclose the financial arrangements between the firms.
* "There are parts of projects, or individual sections of an image, where you really care about getting into the individual pixels, and we want to continue to support customers in doing that, but there are places where you would be happy to just hand this stuff off to an agent or an assistant," said Ely Greenfield, chief technology officer at Adobe's creativity and productivity business unit.
* The Firefly AI assistant is the latest in a series of Adobe investments since 2023 in proprietary AI tools that it says are financially guaranteed as safe for use in corporate settings. This is one of the ways Adobe is trying to differentiate itself from lower-cost rivals as AI lowers the barrier to entry for creating images and videos.
* Adobe's longtime CEO said last month that he will step down after a successor is named, amid investor skepticism about when the company's AI investments will pay off.
* Adobe did not disclose how much the new assistant will cost users, but said it expects the assistant to increase their consumption of what it calls AI credits, the main way the company currently charges for AI products.
(Reporting by Stephen Nellis in San Francisco; Editing by Jamie Freed)

Anthropic
Yahoo! Finance, 10d ago
Anthropic Appoints Novartis CEO Narasimhan To Board - BW Businessworld

Narasimhan joins a board that includes Dario Amodei, Daniela Amodei, Yasmin Razavi, Jay Kreps, Reed Hastings and Chris Liddell.

Anthropic has named Vas Narasimhan, Chief Executive Officer of Novartis, to its board of directors, in a move aimed at deepening life sciences expertise within the governance framework of advanced artificial intelligence systems. The appointment strengthens the company's push to connect frontier AI development with real-world applications in healthcare and regulated industries.

"Vas brings something rare to our board," said Daniela Amodei, co-founder and president of Anthropic. "He has spent his career doing what we are trying to do with AI, taking powerful, complex technology and getting it to people safely at scale."

Narasimhan, of Indian origin, joins a board that includes Dario Amodei, Daniela Amodei, Yasmin Razavi, Jay Kreps, Reed Hastings and Chris Liddell, further broadening the company's mix of expertise across technology, policy and industry. His induction was carried out through Anthropic's Long-Term Benefit Trust, an independent governance body created to oversee the company's mission and ensure alignment between commercial priorities and public-benefit goals. Following his appointment, directors nominated through the Trust now hold a majority on the board, the company said in a blog post.

At Novartis, Narasimhan has overseen the development and regulatory approval of more than 35 new medicines, bringing extensive experience in healthcare innovation, global public health and pharmaceutical regulation. Earlier in his career, he worked on public health programmes addressing HIV/AIDS, malaria and tuberculosis across India, Africa and South America. He is also a member of the US National Academy of Medicine and the Council on Foreign Relations, and serves on advisory boards at the University of Chicago and Harvard Medical School. Previously, he chaired the Pharmaceutical Research and Manufacturers of America (PhRMA) and continues to serve on its board of directors.

Neil Shah, chair of the Long-Term Benefit Trust, said Narasimhan's appointment reflects the governance model's aim of embedding long-term scientific and ethical oversight into core strategic decisions. Daniela Amodei added that his experience operating within highly regulated pharmaceutical systems aligns closely with Anthropic's emphasis on safety, controlled deployment and responsible scaling of advanced AI models.

Founded as a public-benefit corporation, Anthropic has a governance structure, anchored by the Long-Term Benefit Trust, that is designed to ensure key decisions remain aligned with its mission even as it expands its frontier AI capabilities. The appointment comes amid a broader industry trend in which AI companies are increasingly bringing in leaders from regulated sectors such as healthcare, finance and defence to guide the deployment of high-impact technologies, particularly as frontier models move closer to real-world clinical and scientific applications.

Commenting on the development, Narasimhan said AI is already reshaping biomedical research and drug discovery. "In healthcare, AI is accelerating solutions to some of the hardest scientific challenges, from understanding disease biology to designing better medicines," he said. "Anthropic is setting a standard for how AI should be developed to benefit humanity."

The development also follows a series of India-focused leadership appointments at the San Francisco-based AI company, including Amlan Mohanty, who leads policy initiatives in India, and Irina Ghose, former Managing Director of Microsoft India, who was appointed India Managing Director earlier this year.

Anthropic
BW Businessworld, 10d ago

Water marks 'vanish' from wood when using unconventional food item

Water marks often appear as white or dark rings and are caused by trapped moisture, heat, or liquid reacting with the wood's finish or the wood itself. They are usually left behind by hot mugs, cold glasses, or spills that aren't cleaned up and are left to soak into the wood, and they can be a nightmare to remove.

Because water marks often stain the wood itself, you can't just wipe them away with a wet cloth. They don't respond to most traditional cleaning methods because, to remove them, you actually have to add something back to the wood. But there is something you can do to restore your wooden tables to their former, non-stained glory. All you have to do is cover the water marks with one food item, but be warned, it might turn your stomach.

In a video demonstrating the trick, one woman, Sarah, simply picked up her jar of mayonnaise, scooped out a spoonful, and dropped it on her table. She smeared it around so that the entire stain was covered, and then left it to sit. Returning to the table several hours later, she wiped away the excess mayonnaise, revealing that there was no longer any staining underneath where the condiment had been sitting.

While it might seem gross to leave mayonnaise sitting on your table, the trick works because the wood surface needs oil to help lift the moisture that's trapped in the wood. The good news is that this means you don't have to use mayonnaise: oils like olive oil, and even non-gel toothpastes, can help draw out moisture. You can also try to lift the stain with a hairdryer on a low heat, which might help to evaporate the trapped moisture and remove any white or cloudy marks.

Unconventional
EXPRESS, 10d ago