News & Updates

The latest news and updates from companies in the WLTH portfolio.

ChatGPT maker OpenAI shifts its focus to business users amid Anthropic pressure - The Boston Globe

The same ChatGPT chatbot that gave OpenAI's chief financial officer Sarah Friar a tilapia recipe for a recent Sunday night dinner at home is also now doing her most mundane tasks at work, like summarizing her emails and Slack messages. Friar and other company executives are banking OpenAI's future on more of the latter as it shifts its focus to business-oriented products while shedding some of its consumer offerings as a pathway to profitability. OpenAI says it will introduce a new artificial intelligence model for "high-value professional work" as the company faces heightened competition with rival Anthropic in attracting corporate customers to adopt AI assistants in their workplaces. "You'll see a new model coming from us in short order. We feel very excited about it," Friar said in an interview with The Associated Press. OpenAI boasts of more than 900 million weekly users of its core ChatGPT product, and Friar said about 95% of them "don't pay anything" for the popular chatbot. But while all those interactions build habits and reliance, they also strain the costly computing resources needed to power the company's AI systems and highlight the need for big business customers to help pay the bills. OpenAI, valued at $852 billion, and Anthropic, valued at $380 billion, both lose more money than they make, putting the privately owned, San Francisco-based AI research laboratories in a fierce competition to generate more revenue as they race toward becoming publicly traded on Wall Street. A push to improve performance and sales of OpenAI's business-oriented products -- already Anthropic's bread and butter -- has driven OpenAI to abandon some consumer initiatives, like the AI video generator app Sora. "I think it was a little heartbreaking, but we're like, OK, it's not the main event right now," Friar said. "We need to make sure that our new model that's coming has enough compute." OpenAI says the new model, codenamed Spud, is its "smartest model yet," offering "stronger reasoning, better understanding of intent and dependencies, better follow-through and more reliable output in production." It's part of OpenAI's answer to Anthropic's new Claude Mythos, which Anthropic claims is so "strikingly capable" that it is limiting its use to select customers because of its apparent ability to surpass human cybersecurity experts in finding or exploiting computer vulnerabilities.
Friar, the former CEO of neighborhood social platform Nextdoor, said business customers accounted for about 20% of OpenAI's revenue when she was hired in 2024 as chief financial officer. She said it's now 40% and is expected to account for half of OpenAI's sales by the end of the year. It's a sharp turnaround from late last year, when OpenAI co-founder and CEO Sam Altman was promoting a now-shuttered Sora partnership with Disney, launching a plan to sell ads on ChatGPT and floating the idea of letting ChatGPT engage in erotica with paid adult users. Altman said on the "Mostly Human" podcast earlier this month that a sharper focus was needed -- and Friar agrees. "Tech companies, when they're growing, it's just this natural thing that happens. There's so many cool things you could do," she said, adding that companies can end up doing "really badly" if they do too many things, while "great companies are very good at, in a reasonable period of time, kind of doing that winnowing down and refocusing and it's super painful." Signaling that shift was the hiring three months ago of Slack CEO Denise Dresser to be OpenAI's first chief revenue officer. Dresser said in a recent AP interview that she has been laser-focused on meeting with corporate leaders and positioning OpenAI as the go-to platform for workplaces employing AI agents to automate a variety of computer-based job tasks. "It's really clear to me that companies are past the experimentation phase and they're into using AI to do real work," Dresser said. "Leaders at companies are recognizing that AI is probably the most consequential shift of their lifetime." But those leaders also have a choice, namely Anthropic's Claude, which has become widely used by software professionals. Founded in 2021 by a group of ex-OpenAI leaders who said they wanted to prioritize AI safety, Anthropic has positioned itself as the more responsible AI vendor.
The distinction drew attention when President Donald Trump's administration punished the startup after a contract dispute over AI use in the military, and Altman used the opportunity to cement OpenAI's own deal with the Pentagon. Consumer interest in Anthropic surged and the company said its annualized revenues hit $30 billion, a higher number than what OpenAI has reported, though they measure it differently. Friar and Dresser declined to reveal OpenAI's latest sales, but both have suggested that Anthropic's number is inflated because it doesn't account for revenue it must share with cloud computing providers Amazon and Google. Even so, it remains a tight competition that's also tied to the health of the stock market and the future of the economy. "They're likely quite close," said Luke Emberson, a researcher at nonprofit institute Epoch AI. "Certainly the trends show Anthropic is growing much faster than OpenAI. If that continues, they're likely to cross soon." The urgency led Dresser to send a memo to OpenAI employees on Sunday, first reported by The Verge, that asserted that Anthropic's coding focus "gave them an early wedge" but expressed confidence that OpenAI has the "real structural advantage" as AI usage expands beyond software developers and OpenAI builds enough computing capacity to operate its AI systems. "Their story is built on fear, restriction, and the idea that a small group of elites should control AI," Dresser's memo said of Anthropic. "Our positive message will win over time: build powerful systems, put in the right safeguards, expand access, and help people do more." But for skeptics of the financial viability of AI products like ChatGPT and Claude, the trajectory of both money-losing companies is alarming as smaller startups increasingly become dependent on their AI tools.
Anthropic has already imposed rate limits on heavy users, forcing some to wait for hours to use Claude, and both companies have set up service tiers that reward premium payers, said author and AI critic Ed Zitron. "It's what I call the subprime AI crisis," Zitron said. "People built their lives and they built their businesses on top of these companies that, as they try and save money, will start turning the screws." One thing that both AI leaders and critics agree on is that it is an expensive technology, though whether it is worth the cost of its electricity-hungry AI computers remains to be seen. "People will say, well, 'Once they go public, they're safe.' That's not true," Zitron said. "Public companies can and will die, especially ones that are dependent on $100 billion to $200 billion every year or so, just to keep breathing."

Anthropic
The Boston Globe · 7d ago

Did Anthropic's New AI Model Just Create a Massive Buying Opportunity?

In today's video, I discuss recent updates affecting Synopsys (NASDAQ: SNPS) and other AI stocks. To learn more, check out the short video, consider subscribing, and click the special offer link below. Will AI create the world's first trillionaire? Our team just released a report on one little-known company, called an "Indispensable Monopoly," that provides the critical technology Nvidia and Intel both need. *Stock prices used were the post-market prices of April 12, 2026. The video was published on April 12, 2026. The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now... and Microsoft wasn't one of them. The 10 stocks that made the cut could produce monster returns in the coming years. Consider when Netflix made this list on December 17, 2004... if you invested $1,000 at the time of our recommendation, you'd have $573,160!* Or when Nvidia made this list on April 15, 2005... if you invested $1,000 at the time of our recommendation, you'd have $1,204,712!* Now, it's worth noting Stock Advisor's total average return is 1,002% -- a market-crushing outperformance compared to 195% for the S&P 500. Don't miss the latest top 10 list, available with Stock Advisor, and join an investing community built by individual investors for individual investors. Jose Najarro has positions in CrowdStrike, Microsoft, and Synopsys. The Motley Fool has positions in and recommends Autodesk, Cadence Design Systems, Cloudflare, CrowdStrike, Microsoft, and Synopsys. The Motley Fool has a disclosure policy. Jose Najarro is an affiliate of The Motley Fool and may be compensated for promoting its services. If you choose to subscribe through their link, they will earn some extra money that supports their channel. Their opinions remain their own and are unaffected by The Motley Fool.

Anthropic
NASDAQ Stock Market · 7d ago

ChatGPT maker OpenAI shifts its focus to business users amid Anthropic pressure

Anthropic
9NEWS · 7d ago

ChatGPT maker OpenAI shifts its focus to business users amid Anthropic pressure

Anthropic
AP NEWS · 7d ago

ChatGPT maker OpenAI shifts its focus to business users amid Anthropic pressure

Anthropic
Beaumont Enterprise · 7d ago

ChatGPT maker OpenAI shifts its focus to business users amid Anthropic pressure

Anthropic
WHAS 11 Louisville · 7d ago

Adobe releases AI assistant for creative tools, says it will work with Anthropic's Claude

Adobe said on Wednesday it was releasing a new artificial intelligence assistant designed to help users carry out tasks across its suite of software for editing photos, videos and other digital content. The Firefly AI assistant takes orders from human creative professionals about what results they want for a piece of content and then autonomously taps into Adobe's software tools, such as Photoshop, Illustrator and Premiere Pro, to get that outcome.

Anthropic
The Hindu · 7d ago

ChatGPT maker OpenAI shifts its focus to business users amid Anthropic pressure

Anthropic
WRAL · 7d ago

Amazon-backed X-energy files to raise up to $800M in IPO - RocketNews

Nuclear startup X-energy began its investor roadshow Wednesday as it works toward its IPO, setting its target price between $16 and $19 per share, according to documents filed with the U.S. Securities and Exchange Commission. If it lists at the high end, the startup could net about $814 million. X-energy and its peers have been riding a renewed wave of interest in fission power as demand for electricity has surged on the back of AI data centers and societywide electrification. Amazon is one of X-energy's biggest backers. The tech giant led a $500 million Series C-1 round and has pledged to buy as much as 5 gigawatts of nuclear power from the company by 2039. The IPO is sure to come as a relief to X-energy's investors, who have put about $1.8 billion into the company, according to PitchBook. The startup had previously attempted to go public via a reverse merger with a special purpose acquisition company, but the two parties canceled the deal in 2023 as the SPAC craze petered out. X-energy's reactor is what's known as a high-temperature, gas-cooled reactor. Inside, uranium encased in spheres of ceramic and carbon is cooled by helium gas. The gas then transfers heat to a steam turbine loop to generate electricity. The fuel design, known as TRISO, is expected to be safer than previous fuel arrangements, though it's not widely used today. The startup said in its SEC filing that it's already embroiled in a patent dispute with another company that recently went bankrupt. Ultra Safe Nuclear Corporation (USNC) went bankrupt in 2024, and its assets were purchased in bankruptcy to form Standard Nuclear. X-energy alleges that USNC infringed on its fuel fabrication patents and that the matter hasn't been resolved to its satisfaction during the course of the bankruptcy proceedings. Outside of China, development of new nuclear reactors has all but stalled, stymied by delays and cost overruns.
A new breed of startups hopes that by shrinking reactors, they'll be able to overcome some of the challenges that have beset traditional designs. None of the small modular reactor startups have built a power plant yet, though several are racing to meet a deadline of July 4 set by the Tr ...

X-energy
RocketNews | Top News Stories From Around the Globe · 7d ago

SpaceX Successfully Static Fires V3 Booster | NextBigFuture.com

SpaceX's successful static fire of its V3 booster follows yesterday's successful V3 Starship static fire.

SpaceX
Next Big Future · 7d ago

No ID, no AI? Anthropic starts asking Claude users for government ID and KYC-style selfie verification

Anthropic wants your government ID. The company has updated its identity verification page for Claude, mandating verification with official ID and a selfie for using certain features. While we are used to verifying our ID for bank accounts, SIM cards and whatnot, this marks the first time that a company has required a sort of know-your-customer (KYC) check to use an AI tool. Do note that Anthropic recently witnessed a big influx of Claude users after it walked away from a deal with the US Department of Defense, fearing that its AI models may be used for mass domestic surveillance. On the website, Anthropic states that it's trying to be "responsible" with this verification step as it gets to know "who is using" its powerful AI tools. The Dario Amodei-led firm claims that this will help it "prevent abuse, enforce our usage policies, and comply with legal obligations." Though verification is only required for a few use cases, Anthropic does not define what these cases may be. The AI startup states that it accepts original passports, driver's licenses, state/provincial ID cards or national identity cards for this process. Users also need to upload a selfie. When asked to verify your identity, you will need to show the original government-issued ID and take a live selfie. Anthropic states that it does not accept photocopies or digital IDs. As per the company, the process will take only about five minutes. This data is processed by Persona Identities, a company known for secure verification technology. Anthropic claims that the information will not be used for model training and that it remains on Persona's servers, not on its own. Anthropic says that you can have multiple attempts to verify in case the initial attempt fails. However, the company may ban your account in some cases, such as if Claude is unavailable in your location or if you are under 18.
The fact that Anthropic, a company often considered one of the more ethical firms in the AI industry, wants your government ID to use Claude has raised suspicions online. Do note that this verification is not mandated by any law, but has been introduced voluntarily by Anthropic. One user claimed that this may pave the way for laws that track all AI use. The person wrote, "Next up will be laws: No AI without gov-issued ID, All AI use tracked to individual - no private AI." Another user reckoned that this may backfire on Anthropic, as no other AI company, such as OpenAI or Google, wants such verification. The user wrote, "Anthropic just handed their competitors a gift."

Anthropic
India Today · 7d ago

Anthropic introduces identity checks that could require Claude users to submit ID and live selfie for access

Anthropic's identity checks reflect a broader shift towards tighter controls in AI platforms. While framed as a safety measure, the move raises concerns over privacy, transparency and data handling. As verification expands, users may increasingly face trade-offs between accessing advanced tools and sharing sensitive personal information online. For a company that has long positioned itself as a privacy-first alternative in the AI race, Anthropic has taken a step that is likely to test that reputation. The firm has begun rolling out identity verification requirements for its chatbot Claude, asking some users to provide a government-issued photo ID and, in certain cases, a live selfie to access parts of the platform. The change, introduced quietly via an update to its help centre this week, applies only to select scenarios for now. Anthropic says users may encounter verification prompts when accessing "certain capabilities", during routine platform integrity checks, or as part of broader safety and compliance measures. It has not specified which features are affected or what triggers the checks. Limited rollout, unclear triggers Anthropic described the move as a targeted measure rather than a universal requirement. "We are rolling out identity verification for a few use cases," the company said, adding that the data would be used solely to confirm identity. The verification process requires users to submit a valid, physical and undamaged passport, driving licence or national identity card. Photocopies, mobile IDs and student credentials are not accepted. In some cases, users may also be asked to complete a live selfie check. The company has partnered with Persona to handle the process. According to Anthropic, identity data is processed on Persona's systems rather than its own infrastructure. It says the data is encrypted in transit and at rest, will not be used for model training, and will not be shared with third parties for marketing purposes. 
Privacy concerns and past precedent The rollout has prompted criticism from some users, particularly those drawn to Anthropic for its emphasis on privacy. Critics note that the requirement appears to be a company-led decision rather than a response to regulatory mandates. The move comes months after a surge in user growth for Claude, partly driven by concerns around competitors such as OpenAI. Earlier this year, Anthropic reported a sharp increase in sign-ups after OpenAI entered a deal involving AI deployment on Pentagon classified networks, a contract Anthropic declined. Questions are also being raised about the risks of storing sensitive identity data with third-party providers. While Persona is widely used in financial services, past incidents have highlighted potential vulnerabilities. A breach at Discord in October 2025 exposed tens of thousands of government IDs submitted for age verification, underscoring the risks associated with centralised data storage. The verification push also aligns with Anthropic's broader efforts to tighten platform controls. In recent months, the company introduced systems to detect underage users, though some adults reported being incorrectly flagged and temporarily losing access to their accounts while appealing decisions. For now, the identity checks remain limited in scope. However, the lack of clarity around their application and expansion has left users watching closely, as Anthropic balances safety measures with the privacy expectations that helped fuel its rise.

Discord · Anthropic
Firstpost · 7d ago

A month and a half after big fight with Pentagon, Anthropic gets 'huge compliment' from Trump administration and the reason is China

After a six-week bruising fight with the Pentagon, AI giant Anthropic PBC has received what amounts to a major endorsement from the Trump administration, and the reason is China. According to a report by Bloomberg, speaking at a Wall Street Journal event in Washington, US Treasury Secretary Scott Bessent praised Anthropic's new Mythos model. Bessent called Mythos a revolutionary step that will help America maintain its lead over China in the artificial intelligence race. "This Anthropic Mythos model was a step function change in abilities, learning capabilities," Bessent said. "It's all logarithmic. You go from x to the 10th power to x to the 12th and then it's very difficult to catch up." Bessent also dismissed suggestions that China was rapidly closing the gap, though he acknowledged the US advantage may only be three to six months. The praise for Anthropic comes after the Pentagon earlier this year declared the AI giant a threat to the US supply chain, invoking powers typically reserved for foreign adversaries. Anthropic fought back in court and won an order that blocked the ban on government use of its technology -- a move the company said could have cost it billions in lost revenue. Mythos, Anthropic's latest model, is highly adept at finding vulnerabilities in software and computer systems. It is being released to a limited number of carefully chosen parties, a decision that has raised concerns about potential cyber risks. Just days before Bessent's remarks, he and Federal Reserve Chair Jerome Powell convened Wall Street banks to discuss those risks. At the centre of the discussion was Claude Mythos Preview -- a new, unreleased AI model from Anthropic that can find and exploit software vulnerabilities better than nearly any human. Anthropic says the model has already uncovered thousands of severe, previously unknown flaws across every major operating system and web browser.
One vulnerability in OpenBSD -- widely regarded as one of the most secure operating systems in existence -- had gone undetected for 27 years. Another, in the widely used video tool FFmpeg, sat in a line of code that automated testing tools had hit five million times without catching the problem. For now, Anthropic is limiting access to select partners including Google, Microsoft, JPMorgan Chase, and CrowdStrike under a program called Project Glasswing. The initiative aims to harness Mythos-class capabilities for defensive purposes in a controlled environment. Anthropic emphasized that the fallout of an uncontrolled release could be severe for economies, public safety, and national security. Cybersecurity experts say the company's decision reflects both genuine caution and its reputation as a "safety-first" AI firm.

Anthropic
The Times of India · 7d ago

One Nation chaos as uncounted votes discovered

South Australia election chaos: One Nation seat at risk as uncounted votes discovered in Narungga. Max Corstorphan, The Nightly, Thu, 16 April 2026 10:16AM. The Electoral Commission of South Australia has ordered a recount for a seat won by One Nation after discovering ballots that had not been counted. "This morning, I have ordered a further count of the district of Narungga, following the discovery of ballot papers that were not counted in the initial count and the subsequent recount," Acting Commissioner Leah McLay said on Thursday. "The commission has identified a number of unopened, absent ordinary ballot papers and declaration ballot papers that were returned from the district of Stuart. This included 77 absent ordinary ballot papers and four declaration papers for the district of Narungga." An earlier recount in Narungga had declared the winning candidate by a margin of 58 votes. One Nation's Chantelle Thomas won the seat in the SA election by a margin of 58 votes over Liberal candidate Tania Stock. The unopened ballot papers have now been secured, according to the acting commissioner. Each of the candidates from the seat has now been informed a recount will take place on Friday. "The purpose of the count is for the commission to determine whether the result would have differed had those ballots been included in the initial count and subsequent recount," Ms McLay said on Thursday. If the recount finds another candidate won the seat, the SA Electoral Commission will likely find itself in a legal minefield. Ms McLay said the commission would need to seek legal advice and then work with the Court of Disputed Returns. More to come...

CHAOS
Countryman · 7d ago

PsiQuantum, the University of Tokyo, and Mitsubishi Chemical Corporation Announce Partnership to Bolster Quantum Workforce Development in Japan

PsiQuantum, the University of Tokyo, and Mitsubishi Chemical Corporation today announced a partnership to provide education and training for Japan's growing quantum workforce. This initiative is supported by the Government of Japan's New Energy and Industrial Technology Development Organization (NEDO) under the Post-5G Information and Communication Systems program (2025-2027). The program is jointly conducted by PsiQuantum, the University of Tokyo, and Mitsubishi Chemical Corporation, combining academic education, industrial application development, and advanced quantum computing technologies. As fault-tolerant quantum computing emerges as a key technology for future industrial applications, the demand for highly skilled quantum professionals is increasing worldwide. This new partnership underscores the growing strength of the quantum ecosystem in Japan, as well as the critical role of a strong workforce in achieving the full promise of fault-tolerant quantum computing. PsiQuantum provides expertise in fault-tolerant quantum computing and related software tools, the University of Tokyo leads the educational curriculum, and Mitsubishi Chemical Corporation contributes industrial use cases in chemistry and materials science. Together, the three partners have launched a six-month training program for participants from the private sector and academia. More than 80 participants from over 20 companies with operations in Japan have already joined the program.
Attendees will learn more about the fundamentals of fault-tolerant quantum computing, explore potential use cases across a range of sectors, and gain experience using advanced tools such as Construct, PsiQuantum's secure, end-to-end platform for designing, analyzing, and optimizing algorithms for fault-tolerant quantum computing. Subsequent phases over the next two years will focus on joint research and development opportunities in chemistry and materials science applications, with the shared objective of advancing toward deployment on fault-tolerant quantum computers. "Fault-tolerant quantum computers will only reach their full potential if we are prepared to use them effectively once they are built and deployed," said Victor Peng, PsiQuantum Interim Chief Executive Officer. "We are proud to partner with Mitsubishi Chemical Corporation and the University of Tokyo to further strengthen and prepare Japan's globally recognized quantum workforce -- and we are grateful to the Government of Japan for their support." "Developing human resources capable of connecting quantum technologies with real-world challenges is essential for the advancement of quantum computing," said Takeshi Sato, University of Tokyo Associate Professor. "Through this partnership, we aim to provide students and professionals with hands-on experience in both the theoretical foundations and practical applications of fault-tolerant quantum computing." "Quantum computing has the potential to significantly accelerate innovation in chemistry and materials science," said Qi Gao, Mitsubishi Chemical Corporation Distinguished Scientist. "By collaborating with the University of Tokyo and PsiQuantum, we aim to cultivate the next generation of quantum professionals while exploring future industrial applications of fault-tolerant quantum computing." 
This initiative represents one of Japan's first structured training programs focused specifically on fault-tolerant quantum computing and aims to advance the long-term development of a sustainable quantum innovation ecosystem in Japan. About PsiQuantum PsiQuantum was founded in 2016 and is headquartered in Palo Alto, California. The company's mission is to build and deploy the world's first useful quantum computers. PsiQuantum's photonic approach enables it to leverage high-volume semiconductor manufacturing, existing cryogenic infrastructure, and architectural flexibility to rapidly scale its systems. Learn more at www.psiquantum.com. About the University of Tokyo The University of Tokyo is Japan's leading university and one of the world's top research universities. The vast research output of some 6,000 researchers is published in the world's top journals across the arts and sciences. Our vibrant student body of around 15,000 undergraduate and 15,000 graduate students includes over 5,000 international students. https://www.u-tokyo.ac.jp/en/index.html About Mitsubishi Chemical Corporation Mitsubishi Chemical Corporation, established in 1933, is a comprehensive chemical manufacturer that provides a wide range of materials, from basic chemicals to performance products. The company operates globally across diverse fields including mobility, semiconductors and communications, food, medical, and infrastructure. Mitsubishi Chemical aims to be a "Green Specialty Company" committed to solving social problems and to delivering impressive results to customers with the power of materials, under our Purpose that "We lead with innovative solutions to achieve KAITEKI, the well-being of people and the planet." For further information, please visit our website: https://www.m-chemical.co.jp/en/products/ View source version on businesswire.com: https://www.businesswire.com/news/home/20260415252265/en/

PsiQuantum
mykxlg.com · 7d ago

TrendAI Expands AI Security Capabilities Through Strategic Collaboration with Anthropic

TrendAI™, the enterprise cybersecurity business from Trend Micro, announced a strategic engagement with Anthropic, embedding Claude models across its platform to power agentic workflows, automation, and AI-native security operations, and to develop threat research that identifies vulnerabilities in AI systems and infrastructure. TrendAI™'s use of Claude spans threat research, real-world risk reduction, platform innovation, and global go-to-market execution, operating across the full AI security lifecycle, from vulnerability discovery to automated defense and AI-native operations. TrendAI™ will use Claude to advance vulnerability discovery while ensuring coordinated action in real-world risk reduction. Focus areas include: Advancing AI Threat Research: TrendAI™ is scaling its threat research to address the growing attack surface of AI, building on proven programs like Pwn2Own Berlin under TrendAI™ ZDI. This approach brings real-world vulnerability discoveries into AI systems, helping identify and address critical weaknesses before they reach production environments. Driving AI-Native Innovation: Anthropic's Claude models will help power TrendAI™'s platform innovation, enhancing agentic workflows, automation, and AI-native security operations. This enables organizations to reduce noise, act faster, and scale security alongside AI adoption. The announcement comes as TrendAI™ prepares to welcome over 600 cybersecurity leaders to its Spark Leadership Exchange in Phoenix, Arizona in May. Anthropic will join TrendAI™ on stage at the event alongside other industry leaders, reinforcing a shared commitment to shaping the future of AI security and engaging directly with global enterprise leaders.

Anthropic
thefastmode.com · 7d ago

AI That Can Hack? Anthropic Tested Mythos -- Here's What It Found

In a world driven by artificial intelligence, fears of AI have already started to surface. While AI can take away jobs, it might also be powerful enough to hack software on its own. Anthropic has recently introduced its latest AI model, Mythos, which can detect critical vulnerabilities in software before hackers exploit them. However, Mythos has been making headlines for entirely different reasons. Nicholas Carlini, an AI researcher at Anthropic, tested the company's new AI model and realised the system could do far more than he expected, and not all of it was safe.

Anthropic
TimesNow · 7d ago

NAACP sues Musk's xAI, alleging illegal operation of gas turbines

The largest U.S. civil rights group on Tuesday sued xAI and a subsidiary, claiming they illegally operated more than two dozen gas turbines in Mississippi to power its Colossus 2 data center, posing a health risk to local residents. The NAACP, represented by Earthjustice and the Southern Environmental Law Center, sued xAI and subsidiary MZX Tech, charging they violated the federal Clean Air Act by running 27 gas-fired turbines before getting necessary air permits for the massive data center that powers xAI's Grok chatbot. Elon Musk's artificial intelligence startup xAI has invested more than $20 billion to build the data centre in Southaven with the full backing of Mississippi Governor Tate Reeves, but the facility, as well as Colossus 1 just over the border in Memphis, Tennessee, has met heavy opposition from local communities due to their effect on local air and environmental quality. "By looking to evade clean air laws to operate dirty turbines that emit pollution and known carcinogens, these companies are following a shameful, familiar pattern: asking Black and frontline communities to bear the toxic brunt of 'innovation,'" said Abre' Conner, director of the Center for Environmental and Climate Justice at the NAACP. The NAACP announced its intention to sue xAI and MZX in February because the Clean Air Act requires 60 days of notice ahead of filing a lawsuit. Mississippi regulators held one public hearing that month about permits for those turbines after just a few days of public notice, and subsequently approved the permits. xAI was not immediately available for comment.
Earthjustice said that xAI's Southaven power plant has the potential to emit more than 1,700 tons of smog-causing nitrogen oxides (NOx) each year, making it a major source of smog in the greater Memphis area. The turbines are also estimated to emit 180 tons of fine particulate matter, 500 tons of carbon monoxide, and 19 tons of cancer-causing formaldehyde.

xAI
ETTelecom.com · 7d ago