All mice were housed in our specific-pathogen-free facility under 12 h-12 h light-dark cycles, maintained at 70 °F and 50% humidity, in compliance with institutional and AAALAC-accredited Animal Studies Committee guidelines at Washington University in St Louis, following all relevant ethical regulations. For CD11c-DTR BM chimeras, CD45.1 SJL recipient mice were lethally irradiated (1,050 rads X-ray). Within 12-18 h after irradiation, the recipient mice were i.v. injected with ≥5 × 10 BM cells obtained from CD11c-DTR donor mice. For WT- or MHC-I TKO-to-SJL BM chimeras, SJL recipient mice were depleted of NK cells by i.p. injection of 100 μg anti-NK1.1 antibody (PK136, Leinco Technologies, N123). The next day, the recipient mice were lethally irradiated (1,050 rads X-ray). Within 12-18 h after irradiation, the recipient mice were i.v. injected with ≥5 × 10 BM cells obtained from either WT or MHC-I TKO donor mice. For WT or Δ32-to-SJL or MHC-I TKO BM chimeras, SJL or MHC-I TKO recipient mice were lethally irradiated (1,050 rads X-ray). Within 12-18 h after irradiation, the recipient mice were i.v. injected with ≥5 × 10 BM cells obtained from either WT or Δ32 donor mice. For B6 and BALB/c allogeneic BM chimera, recipient B6 and BALB/c mice were lethally irradiated with 1,050 rads and 650 rads X-ray, respectively. Donor BM from B6 or BALB/c mice was collected and treated with ACK lysis buffer to remove erythrocytes. T cells were depleted from donor BM suspensions by incubating cells with biotinylated anti-CD4 (GK1.5, BioLegend) and anti-CD8β (YTS156.7.7, BioLegend) antibodies, followed by magnetic depletion using MagniSort Streptavidin Negative Selection Beads (Thermo Fisher Scientific). After T cell depletion, ≥5 × 10 cells of prepared BM were i.v. injected into irradiated recipient mice. All BM chimera recipients were allowed to reconstitute for at least 7 weeks before use in experiments. 
Flow cytometry and cell sorting were performed using either the Aurora flow cytometer (Cytek) or FACSAria Fusion (BD) system. Data acquisition was performed using BD FACSDiva software, and analyses were conducted using FlowJo v.10.10.0 (BD Biosciences). Surface staining was performed at 4 °C in the presence of Fc block (2.4G2) in magnetic-activated cell-sorting (MACS) buffer (PBS supplemented with 0.5% BSA and 2 mM EDTA). For depletion-based sort purification of OT-I T cells and splenic DCs, the following biotinylated anti-mouse antibodies were used: B220 (RA3-6B2), Ly6G (1A8), CD3ε (145-2C11), CD19 (6D5), TER119 (TER-119), CD8β (YTS156.7.7) and CD4 (GK1.5) (all from BioLegend), and CD105 (MJ7/18) (from Invitrogen). Biotinylated cells were detected with BV650-conjugated Streptavidin (BioLegend, 405231) and PE-Cy7-conjugated Streptavidin (BioLegend, 405206). For biotin- and fluorochrome-conjugated antibodies, the following anti-mouse antibodies were used. From BioLegend: AF488-conjugated B220 (RA3-6B2, 103225), AF647-conjugated SIGLECH (551, 129608), BV510-conjugated I-A/E (M5/114.15.2, 100752), FITC-conjugated KLRG1 (2F1/KLRG1, 138409), PE- and BV421-conjugated XCR1 (ZET, 148204 and 148216), PE-Cy7-conjugated CD24 (M1/69, 138508), APC-Cy7-conjugated SIRPα (P84, 110716), BV605- and BV510-conjugated CD8α (53-6.7, 100751 and 100752), APC-Cy7-conjugated CD45.1 (A20, 110716), PE-Cy7-conjugated CD45.2 (104, 109814), BV421-conjugated H-2Kb (AF6-88.5, 116525), PE-conjugated H-2Db (KH95, 111508), PE-conjugated Vα2 (B20.1, 127808), APC-conjugated CD44 (IM7, 103028), PerCP-Cy5.5-conjugated CD62L (MEL-14, 104432), biotin-conjugated CD69 (FN50, 310924), FITC-conjugated CD3ε (145-2C11, 100306), FITC-conjugated CD11b (M1/70, 101206), BV711-conjugated CD4 (GK1.5, 100447), AF700-conjugated F4/80 (BM8, 123130), BV421-conjugated Ly6C (HK1.4, 128032), BV711-conjugated CD115/CSF-1R (AFS98, 135515) and APC-conjugated CD226 (10E5, 128810). 
From BD Biosciences: BUV395-conjugated CD45R/B220 (RA3-6B2), BUV395-conjugated KIT (2B8), BV421-conjugated CD127 (SB/199) and PE-CF594-conjugated Flt3 (A2F10.1). From Invitrogen: APC-eF780-conjugated CD44 (IM7), APC-eF780-conjugated CD11c (N418) and PerCP-eF710-conjugated SIRPα (P84). For in vivo IFNAR1 blockade, 2 mg of anti-mouse IFNAR-1 antibody (MAR1-5A3, Leinco Technologies, I-401) was administered by i.p. injection every 7 days, beginning 1 day before immunization (day −1 and day 6). LNs were collected and enzymatically digested in complete IMDM (I10F; Iscove's modified Dulbecco's medium with 2ME, NEAA, glutamine, penicillin-streptomycin and 10% FBS) supplemented with 30 U ml⁻¹ DNase I (Sigma-Aldrich) and 250 μg ml⁻¹ collagenase B (Roche) for 30-45 min at 37 °C. After digestion, single-cell suspensions were filtered through 70-μm strainers, and APCs were sorted as B220⁻MHC-II⁺CD11c⁺XCR1⁺CD172α⁻ (cDC1), B220⁻MHC-II⁺CD11c⁺XCR1⁻CD172α⁺ (cDC2) and B220⁺MHC-II⁺ (B cells) cells. For cDC staining, spleen and LNs were collected and enzymatically digested in I10F supplemented with 30 U ml⁻¹ DNase I (Sigma-Aldrich) and 250 μg ml⁻¹ collagenase B (Roche) for 30-45 min at 37 °C. After digestion, single-cell suspensions were filtered through a 70-μm strainer and stained for flow cytometry analysis. For CD8 T cell staining, spleen, LNs and peripheral blood were collected, mechanically dissociated and passed through a 70-μm strainer to generate single-cell suspensions. After ACK lysis, cells were stained for flow cytometry. BM was collected from the femurs, tibias and pelvis by mechanical disruption using a mortar and pestle in MACS buffer. Cell suspensions were passed through a 70-µm strainer, erythrocytes were lysed with ACK buffer and the resulting cells were stained for flow cytometry. Mice were perfused with cold PBS containing 2 mM EDTA before tissue collection. 
Tibialis anterior and gastrocnemius-soleus muscles were dissected, trimmed of fat and nerves, and processed for immune-cell isolation using a Percoll gradient. Muscles were minced in IMDM and digested in 1.0 mg ml⁻¹ collagenase D and 30 U ml⁻¹ DNase I in IMDM at 37 °C for 45 min with shaking. Digestion was stopped with I10F, and suspensions were filtered through a 70-µm mesh and pelleted. Cell pellets were resuspended in 40% Percoll-RPMI and overlaid onto 80% Percoll-PBS, then centrifuged at 1,400g for 15 min without brake. Leukocytes at the 40%/80% interface were collected, washed with I10F and stained for flow cytometry. Cap 1 N1meΨ OVA mRNA was provided by Innovac Therapeutics or purchased from PackGene. OVA mRNA or dead (non-coding) mRNA LNPs were formulated in lipids at molar ratios of 50:38.5:10:1.5 (ionizable lipid SM-102:cholesterol:DSPC:DMG-PEG2000). LNP size and size distribution, encapsulation efficiency, stability and endotoxin level were rigorously tested. mLama4 mRNA was provided by R.D.S. For in vivo studies, 50 µl mRNA-LNP containing 10 µg mRNA was injected i.m. into the gastrocnemius muscle. Unless indicated otherwise, mRNA-LNP was administered on day 0 and day 7, and the immune responses were measured on day 11. Plasmid DNA encoding full-length OVA was amplified in Escherichia coli DH5α (Invitrogen) and purified using the NucleoBond Maxi Plasmid DNA Purification kit (Macherey-Nagel). Empty pcDNA3.1(+) vector DNA was used as a control. DNA vaccination was performed using a Helios gene gun (Bio-Rad). Mice were vaccinated with 4 µg of DNA at 3-day intervals (days 0, 3 and 6) for a total of three doses. DNA was delivered to non-overlapping shaved and depilated abdominal areas, with helium discharge pressure set to 400 p.s.i. Immune responses were measured 5 days after the last gene gun vaccination (day 11). 
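The 50:38.5:10:1.5 molar ratio above determines the mole fraction of each lipid component. As a hedged illustration (the target amount of 10 µmol total lipid is a made-up example, not a value from the protocol), the arithmetic can be sketched as:

```python
# Convert the SM-102:cholesterol:DSPC:DMG-PEG2000 molar ratio (50:38.5:10:1.5)
# into mole fractions, then scale to a hypothetical total lipid amount.
ratio = {
    "SM-102": 50.0,        # ionizable lipid
    "cholesterol": 38.5,
    "DSPC": 10.0,          # helper phospholipid
    "DMG-PEG2000": 1.5,    # PEG-lipid
}
total_parts = sum(ratio.values())  # the parts sum to 100 here
fractions = {lipid: parts / total_parts for lipid, parts in ratio.items()}

# Example scaling: moles of each component for 10 umol total lipid (illustrative)
target_umol = 10.0
amounts_umol = {lipid: f * target_umol for lipid, f in fractions.items()}
```

Because the four parts happen to sum to 100, the mole fractions read directly off the ratio (for example, SM-102 is 50% of total lipid).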
Soluble ovalbumin (low endotoxin; Worthington, LS003509) was dissolved in PBS and emulsified 1:1 (v/v) with AddaVax (InvivoGen; vac-adx-10) at 4 °C by vortexing for 2 min. Mice were immunized i.m. with 50 μl of emulsion containing 10 µg OVA on days 0 and 7, into the same flank. Freeze-thawed Abelson-mOVA cells were used to standardize antigen quantity without cell proliferation, as previously described. In brief, Abelson-mOVA cells were generated by retroviral transduction of an Abl-MuLV-transformed MHC-I TKO BM tumour cell line with a membrane-OVA construct (Abl-MuLV was a gift from B. Sleckman). Cells underwent three rapid freeze-thaw cycles and were stored at −20 °C until use. Mice were immunized with 3.3 × 10 freeze-thawed Abelson-mOVA cells. LNs and spleens from CD45.1 OT-I mice were collected, mechanically dissociated and passed through 70-μm strainers to generate a single-cell suspension. Erythrocytes were lysed with ammonium-chloride-potassium bicarbonate (ACK) lysis buffer. Cells were depleted of TER-119-, I-A/E-, Ly-6G- and B220-expressing cells by incubation with biotinylated antibodies for 20 min at 4 °C, followed by depletion with MagniSort Streptavidin Negative Selection Beads (Thermo Fisher Scientific). Naive OT-I cells were sorted as B220⁻CD45.1⁺CD4⁻CD8⁺Vα2⁺CD44loCD62L⁺ cells, washed with PBS and labelled with CTV proliferation dye (Thermo Fisher Scientific). For ex vivo cross-presentation assays, 2.5 × 10 CTV-labelled OT-I cells were co-cultured with sorted cDC1, cDC2 or B cells isolated from dLNs or cLNs 2 days after immunization. Co-cultures were performed in U-bottom 96-well plates. After 3 days, cells were washed, surface-stained with antibodies and analysed for CTV dilution and CD44 expression. For in vivo antigen-presentation assays, 5 × 10 CTV-labelled naive OT-I cells were i.v. transferred into recipient mice. Then, 1 day later, the mice were immunized with the indicated antigens. 
At the indicated timepoints, spleens were collected and erythrocytes were lysed with ACK buffer, and CD45.1 OT-I cells were analysed for CTV dilution and CD44 expression. In OT-I proliferation assays, the average division number was calculated as ∑(fraction of total OT-I cells in division n × n), based on the peak of the undivided control without immunization and the peak for each division automatically fit by FlowJo software. The gate boundaries were adjusted to the lowest population between two peaks. For T cell egress blockade, 1 mg per kg body weight of FTY720 (Sigma-Aldrich, SML0700) was administered by i.p. injection in 150 μl PBS 1 day after OT-I cell adoptive transfer. For blockade of naive T cell entry to lymphoid organs, splenectomy was performed by the WashU Medicine Animal Surgery Core under anaesthesia using standard surgical removal of the spleen, followed by closure of the peritoneum and skin. Mice were monitored daily for 4 days to ensure recovery. On day 5 after surgery, mice were injected i.p. with 200 μg anti-CD62L (MEL-14; Leinco Technologies, C2118). Then, 6 h later, the mice were adoptively transferred i.v. with CTV-labelled naive OT-I cells. The next day, mice were immunized with 0.1 μg OVA mRNA-LNP. Spleens were collected and passed through 70-μm strainers to generate single-cell suspensions. After erythrocyte lysis with ACK lysis buffer, cells were resuspended in MACS buffer. After counting with a ViCell analyser, 3 × 10 splenocytes were used for staining. APC- and PE-conjugated H-2Kb chicken OVA257-264 (SIINFEKL) tetramers (NIH Tetramer Core Facility) were added at a dilution of 1:100 in MACS buffer containing 10% Fc block (2.4G2) and incubated at 37 °C for 15 min. Without washing, fluorochrome-conjugated antibodies for surface staining were then added directly and incubated at 4 °C for 30 min. The ELISpot assay was performed using the Mouse IFNγ (ALP) ELISpot Plus Kit (Mabtech) according to the manufacturer's instructions. 
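The average-division formula above, ∑(fraction of total OT-I cells in division n × n), is a weighted mean over the fitted CTV peaks. A minimal Python sketch (the cell counts are made-up numbers; in practice each division bin comes from FlowJo's fitted peaks):

```python
def average_division(counts):
    """Average number of divisions per OT-I cell: sum over divisions n of
    (fraction of total cells in division n) * n."""
    total = sum(counts)
    return sum(n * c for n, c in enumerate(counts)) / total

# counts[0] is the undivided peak (aligned to the unimmunized control);
# subsequent entries are successive division peaks. Illustrative numbers only.
avg = average_division([100, 200, 400])  # (0*100 + 1*200 + 2*400) / 700
```

Note that this index weights each division equally per cell; it is not corrected for the 2^n expansion of later divisions, matching the formula as stated.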
In brief, mouse spleen cell suspensions (1 × 10-2 × 10 cells) after ACK lysis were incubated in triplicate for 20 h with or without 1 μM SIINFEKL peptide (AnaSpec). After extensive washes, biotinylated detection antibody was added, followed by streptavidin-ALP and insoluble BCIP/NBT-plus substrate. Plates were scanned and analysed on an ImmunoSpot Reader (CTL). The contralateral inguinal LN was fixed for 6 h with shaking at 4 °C in 4% paraformaldehyde (PFA) (Santa Cruz, sc-281692) that was adjusted to pH 9.0 with triethanolamine. LNs were washed out of PFA in 1× PBS with 10 U ml⁻¹ heparin, embedded in 4% low-melting-point agarose, and sectioned into 200-μm sagittal slices using a Leica VT1200 vibratome. The sections were blocked in ADAPT-3D blocking buffer (Leinco, B673) for 1 h and then stained with primary antibodies against CD11c (Bio-Rad, MCA1369, N418) and F4/80 (BioLegend, 123101, BM8) diluted 1:200 from a stock concentration of 1 mg ml⁻¹ in ADAPT-3D blocking buffer and left shaking at room temperature overnight. The sections were washed in 1× PBS with 10 U ml⁻¹ heparin and 0.2% Tween-20, three times for 1 h each. The sections were stained overnight with secondary antibodies (Jackson ImmunoResearch, Cy3 goat anti-Armenian hamster IgG, 127-165-160; AF647 donkey anti-rat IgG, 712-605-153) diluted 1:300 from a stock concentration of 1.5 mg ml⁻¹ in ADAPT-3D blocking buffer after first passing through a 0.22-μm PVDF filter (Millex, SLGVR04NL), together with anti-CD169 (Bio-Rad, MCA947GA, MOMA-1) directly conjugated to CF488 (Biotium, 92253). The sections were washed in 1× PBS with 10 U ml⁻¹ heparin and 0.2% Tween-20 three times for 1 h each. For three sections per LN, a tilescan with 9-μm z stacks was acquired with a ×20 lens (air, 0.8 NA) on a Leica SP8 confocal microscope. Images were Gaussian or median filtered using Imaris v.10.1.1 and representative images were exported as maximum-intensity projections. 
The 1956 tumour cell line expressing membrane-bound ovalbumin (1956-mOVA) was derived from the methylcholanthrene (MCA)-induced fibrosarcoma 1956 tumour (from R.D.S.), as previously described. The original tumour was generated in a female C57BL/6 mouse, tested for mycoplasma contamination and banked at low passage. For experiments, tumour cells were thawed from frozen stocks and cultured for 4-6 days in vitro with one intervening passage in RPMI medium supplemented with 2ME, NEAA, glutamine, penicillin-streptomycin and 10% FBS (R10F). On the day of injection, tumour cells were collected by trypsinization, washed three times with PBS and resuspended at 6.67 × 10 cells per ml. Mice were subcutaneously injected into the shaved flank with 1 × 10 cells. Tumour growth was monitored every 3-5 days using callipers. Two perpendicular diameters of the tumour mass were measured and multiplied to calculate the tumour area (mm²). In accordance with the IACUC-approved protocol, tumours were not permitted to exceed 20 mm in maximal diameter at any point. In vivo killing assays were performed on mice 6 weeks after the second OVA mRNA-LNP immunization. Splenocytes from naive CD45.1 SJL mice were collected, ACK lysed and prepared as a single-cell suspension. Cells were resuspended in I10F at 2 × 10 cells per ml, divided into two equal fractions and pulsed with either 1 μg ml⁻¹ SIINFEKL or 1 μg ml⁻¹ irrelevant control peptide for 30 min at 37 °C. Cells were then washed twice with PBS, stained with CTV at 5 µM or at 0.5 µM for 10 min at 37 °C, and mixed at a ratio of 1:1 immediately before transfer. Statistical analyses were performed using GraphPad Prism software v.10. Centre values represent the mean and the error bars indicate s.d. unless otherwise specified. For groups that are not assumed to have equal variances, Welch's or Brown-Forsythe one-way ANOVA was used. 
OVA-tetramer-specific splenic cells were isolated from WT, Δ32 and Δ1+2+3 mice and washed with 1× PBS containing 0.04% BSA. Before fluorescence-activated cell sorting, cells from each individual mouse were stained with hashtag oligonucleotides (HTOs) to enable multiplexing and improve sample throughput. cDNA was prepared after GEM generation and barcoding, followed by the GEM-RT reaction and bead clean-up steps. Purified cDNA was amplified for 11-16 cycles before being cleaned up using SPRIselect beads. The samples were then run on a Bioanalyzer to determine the cDNA concentration. V(D)J target enrichment (TCR) was performed on the full-length cDNA. Gene expression, enriched TCR and feature libraries were prepared as recommended by the 10x Genomics 'Chromium GEM-X Single Cell 5' Reagent Kits (v3 Chemistry Dual Index) with Feature Barcoding technology for Cell Surface Protein and Immune Receptor Mapping' user guide, with appropriate modifications to the PCR cycles based on the calculated cDNA concentration. For sample preparation on the 10x Genomics platform, the Chromium GEM-X Single Cell 5' Kit v3, 16 rxns (PN-1000699), Chromium GEM-X Single Cell 5' Chip Kit (PN-1000698), Chromium Single Cell Mouse TCR Amplification Kit (PN-1000254), Dual Index Kit TT Set A, 96 rxns (PN-1000215), Chromium GEM-X Single Cell 5' Feature Barcode Kit v3, 16 rxns (PN-1000703) and Dual Index Kit TN Set A, 96 rxns (PN-1000250) were used. The concentration of each library was accurately determined by quantitative PCR using the KAPA Library Quantification Kit according to the manufacturer's protocol (KAPA Biosystems/Roche) to produce cluster counts appropriate for the Illumina NovaSeq 6000 instrument. Normalized libraries were sequenced on the NovaSeq X Plus S4 flow cell using the XP workflow and a 151 × 10 × 10 × 151 sequencing recipe according to the manufacturer's protocol. 
A median sequencing depth of 50,000 reads per cell was targeted for each gene expression library and 5,000 reads per cell for each V(D)J and feature library. The reads for each sequencing library were then aligned and quantitated with 10x CellRanger v.9.0.1 against the 10x standard refdata-gex-mm10-2020-A mouse gene reference and the refdata-cellranger-vdj-GRCm38-alts-ensembl-7.0.0 V(D)J reference according to the manufacturer's protocol. Single-cell gene expression analysis was performed in R (v.4.4.0) using the Seurat package (v.5.3.0). HTO data were first normalized individually for each sample (Supplementary Table 2) and demultiplexed using the HTODemux function; only singlet cells were retained for further analysis. Cells with >5% mitochondrial gene expression were excluded, and only those expressing between 200 and 4,000 genes were retained to remove low-quality cells and potential doublets. After quality control, data from all samples were merged and normalized. The 3,000 most-variable genes were identified, and mitochondrial, ribosomal and TCR genes were excluded from this list to avoid biases associated with highly abundant or cell-type-specific transcripts. The data were then scaled, and principal component analysis was performed, followed by batch correction and data integration using Harmony. Dimensionality reduction of the integrated matrix was carried out using UMAP based on the first 30 principal components. Phenotypic clusters were identified by constructing a k-nearest-neighbours graph and applying the Louvain algorithm with a resolution parameter of 0.4. For TCR repertoire analysis, cell phenotype, sample identity and mouse ID information were extracted from the integrated metadata for each cell. TCR sequences were successfully annotated for 46,051 cells and used for downstream clonotype analyses. Cells sharing identical CDR3αβ amino acid sequences were defined as belonging to the same TCR clone. 
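The clone definition above (cells sharing identical paired CDR3α/CDR3β amino acid sequences) amounts to grouping cell barcodes by a (CDR3α, CDR3β) key. A minimal Python sketch, with made-up barcodes and sequences, assuming one productive α and one productive β chain per cell (the analysis itself was done in R/Seurat):

```python
from collections import defaultdict

def group_clones(cells):
    """Group cell barcodes into TCR clones keyed by paired CDR3 sequences.
    cells: list of dicts with 'barcode', 'cdr3a' and 'cdr3b' (amino acids)."""
    clones = defaultdict(list)
    for cell in cells:
        clones[(cell["cdr3a"], cell["cdr3b"])].append(cell["barcode"])
    return clones

# Illustrative input: two cells share a clonotype, one is distinct
cells = [
    {"barcode": "AAAC-1", "cdr3a": "CAASRNTGNYKYVF", "cdr3b": "CASSLGGEQYF"},
    {"barcode": "AAAG-1", "cdr3a": "CAASRNTGNYKYVF", "cdr3b": "CASSLGGEQYF"},
    {"barcode": "AACT-1", "cdr3a": "CAVSDNYQLIW",    "cdr3b": "CASSDRGNTLYF"},
]
clones = group_clones(cells)
clone_sizes = sorted((len(v) for v in clones.values()), reverse=True)
```

Clone-size distributions and expansion statistics then follow directly from `clone_sizes`.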
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Adobe released a new artificial intelligence assistant that will connect to the Claude chatbot and will allow people to complete design projects with simple commands. The software company behind tools like Photoshop and Premiere on Wednesday said its new Firefly AI Assistant will take conversational directions from users and execute multi-step workflows throughout its Creative Cloud apps. Adobe is partnering with Anthropic and other AI companies to allow users to access Firefly and Adobe tools through Claude and other "leading third-party AI models," the company said. Firefly will be personalized to each user and can be used for AI video and image editing, with improvements in sound and color, the company said. "Adobe is leading the shift into a new era of agentic creativity, where you direct how your work takes shape and your perspective, voice and taste become the most powerful creative instruments of all," said David Wadhwani, Adobe's president of creativity and productivity business. Adobe's announcement comes as companies race to deploy more capable AI agents, which are AI models that not only chat with users but also autonomously complete tasks.

OpenAI is expanding its cybersecurity offerings with a new, purpose-trained language model. GPT-5.4-Cyber is a variant of GPT-5.4 that has been specifically optimized for defensive security applications and is now available to selected users. The release comes at a time when competitor Anthropic is also making waves with its model Claude Mythos Preview. According to OpenAI, GPT-5.4-Cyber is a version of GPT-5.4 in which restrictions have been deliberately relaxed for legitimate security work. The model is intended to enable security professionals to complete complex tasks more efficiently, without running into refusal limits that make sense for general users but can be an obstacle for professional defenders. One of the key new capabilities is binary code analysis: security professionals can use it to examine compiled software for vulnerabilities, malware potential, and security robustness without needing access to the source code. In addition, the model is designed to support extended workflows for cyber defense, including vulnerability research, security training, and defensive programming. OpenAI relies on a tiered access system within its existing "Trusted Access for Cyber" (TAC) program. Access to GPT-5.4-Cyber is not public but is restricted to vetted security vendors, organizations, and researchers. OpenAI emphasizes that entry into the program is deliberately gated: due to the elevated potential for misuse, the model is subject to special restrictions. For example, use in environments without data transparency -- such as zero-data-retention configurations -- is limited. OpenAI justifies this by noting that such usage scenarios lack visibility into the user, environment, and intended purpose. 
OpenAI describes its approach through three guiding principles that form the framework for GPT-5.4-Cyber and future developments. Alongside GPT-5.4-Cyber, OpenAI points to progress with Codex Security, a system for automated vulnerability analysis in codebases. Since its launch as a research preview, Codex Security has, according to the company, contributed to the remediation of more than 3,000 critical and high-severity security vulnerabilities. Through the "Codex for Open Source" program, more than 1,000 open-source projects have also been provided with free security scans. The release of GPT-5.4-Cyber coincides with the announcement of Anthropic's Claude Mythos Preview, a model that, according to the company, is capable of finding and exploiting security vulnerabilities in an almost fully autonomous manner. Anthropic has not released the model publicly due to its risk potential, and has instead launched "Project Glasswing," an initiative for selected partners including Amazon Web Services, Apple, Google, Microsoft, and CrowdStrike. With GPT-5.4-Cyber, OpenAI is pursuing a comparable but broader approach in terms of reach: rather than a closed circle of partners, the company relies on a scalable access system with identity verification that is intended to eventually encompass thousands of individuals and hundreds of teams. Both companies share the assessment that AI capabilities in the cyber domain are already significant today, and that defenders must be given preferential access before attackers gain the upper hand. OpenAI says it will continue to expand the safety mechanisms for upcoming, even more powerful models. The company believes that today's safeguards are sufficient for current models, but that future generations will require more extensive defensive architectures. GPT-5.4-Cyber is to be understood as a first step in a longer-term program aimed at scaling security capabilities and protective measures in parallel.

CAPE CANAVERAL, Fla., April 15, 2026 (GLOBE NEWSWIRE) -- As SpaceX prepares for what could become the largest IPO in history with its confidential S-1 filing targeting a $1.75 trillion valuation and June roadshow, space infrastructure companies are positioning for sector-wide growth. Starfighters Space, Inc. (NYSE American: FJET), owner and operator of the world's largest commercial supersonic aircraft fleet, has announced a strategic partnership with Blackstar Orbital to advance flight testing of revolutionary reusable hypersonic space systems. The March 26 Technical Interchange Agreement (TIA), unveiled at the Satellite 2026 conference, establishes a framework for integrating Blackstar's innovative "SpaceDrone" technology with Starfighters' proven F-104 aircraft platform. The collaboration will enable progressive flight testing from supersonic captive carries beginning in Q4 FY26 through high-altitude, supersonic release operations in the Eastern Range off Florida's Atlantic Coast. Blackstar Orbital is pioneering a new class of spacecraft with its lifting-body SpaceDrone design -- reusable, hypersonic satellites that launch as conventional payloads but return to Earth like spaceplanes. This technology addresses growing demand for responsive space operations and rapid mission turnaround capabilities, priorities that have gained urgency amid increasing commercial and national security space requirements. "This partnership highlights the role Starfighters plays in bridging the gap between concept and flight for next-generation aerospace systems," said Tim Franta, CEO of Starfighters Space. "Blackstar is developing a highly differentiated approach to reusable space platforms, and our F-104 fleet provides a proven, high-performance environment to test and validate those systems in real-world conditions." 
SpaceX IPO Transforms Space Investment Thesis SpaceX's April 1, 2026 confidential IPO filing has fundamentally altered the space investment landscape. With reports of seeking up to $75 billion in capital at valuations approaching $1.75 trillion, the offering would eclipse all previous public debuts. Morgan Stanley, Bank of America, Citigroup, JP Morgan, and Goldman Sachs are leading a 21-bank syndicate for the June 8 roadshow, with public trading anticipated as early as July 2026. The extraordinary valuation reflects Starlink's evolution into a $16 billion annual revenue generator and the strategic integration of xAI's artificial intelligence capabilities through a $250 billion February acquisition. This convergence of space infrastructure, satellite connectivity, and AI positions SpaceX at the center of multiple high-growth technology sectors. For investors seeking diversified exposure to the space economy, publicly traded companies with established operations and expanding capabilities represent compelling alternatives to direct SpaceX investment. The sector includes established aerospace primes, emerging space technology providers, and specialized infrastructure companies like Starfighters that enable critical testing and operational capabilities. Hypersonic Flight Testing Platform The Starfighters-Blackstar partnership leverages unique infrastructure at NASA Kennedy Space Center, where Starfighters operates its fleet of modified F-104 supersonic aircraft capable of sustained MACH 2+ operations. Under the TIA, Starfighters has developed a specialized BL75 pylon that serves as the critical structural interface between the F-104 platform and Blackstar's SpaceDrone. The phased testing approach begins with captive carry operations to validate aerodynamic modeling and performance characteristics. 
Successful completion will enable progression to high-speed release testing over designated ocean ranges, with potential expansion to overland operations as system maturity is demonstrated. This methodology provides real-world validation while maintaining safety protocols essential for experimental aerospace operations. "Access to Starfighters' flight test platform allows us to accelerate development of our SpaceDrone and move into flight validation with confidence," said Christopher Jannette, CEO of Blackstar Orbital. "This collaboration is a critical step in demonstrating a new class of reusable, hypersonic satellite systems." Space Infrastructure Investment Opportunities The approaching SpaceX IPO has intensified investor focus on space infrastructure companies with proven capabilities and growth prospects. Recent developments across the sector highlight sustained momentum: AST SpaceMobile (NASDAQ: ASTS) secured a strategic partnership with TELUS on April 2, 2026, for expanding cellular broadband infrastructure in Canada by 2026, driving shares higher. The company reported $70.9 million in 2025 revenue -- its first year as a revenue-generating business -- with 2026 guidance of $150-200 million supported by over $1.2 billion in contracted revenue commitments. With plans to deploy 45-60 satellites by year-end 2026, ASTS is scaling its space-based cellular network that works directly with unmodified smartphones. GE Aerospace (NYSE: GE) continues to demonstrate strong fundamentals with upcoming Q1 2026 earnings on April 21 expected to build on Q4's solid performance of $1.57 EPS on $11.9 billion revenue. The company benefits from a record $190 billion backlog and recent guidance for low double-digit revenue growth in 2026, supported by strong commercial aviation aftermarket demand and defense contract momentum including a recent $1.4 billion defense award and $1 billion manufacturing investment commitment. 
RTX Corporation (NYSE: RTX) reported strong Q4 defense bookings of $10.3 billion, culminating in a record backlog of $268 billion with $107 billion specifically in defense programs. Major recent awards include a $1.7 billion contract for four Patriot air and missile defense systems to Spain and a $1.2 billion Tamir missile production agreement. The company's diversified defense and commercial aerospace portfolio provides stability through Pratt & Whitney engines and Collins Aerospace systems. TransDigm Group (NYSE: TDG) has demonstrated resilience through recent acquisitions including SEI Industries, Raptor Scientific, and the components business of Communications & Power Industries. Morgan Stanley analysts view the stock's 2026 underperformance as a buying opportunity given the company's attractive valuation, balance sheet strength, and position as the leading commercial airline aftermarket investment with a $1,660 price target representing significant upside from current levels. Space Sector Growth Catalysts The space economy is experiencing fundamental transformation driven by converging trends including increased government budgets exceeding $100 billion annually, expanding commercial applications, and emerging technologies like hypersonics and reusable systems. The global space economy is projected to reach $1.8 trillion by 2035, supported by applications ranging from satellite communications to space-based manufacturing. Starfighters' partnership with Blackstar Orbital positions the company at the intersection of these growth trends. As the world's only commercial operator capable of sustained MACH 2+ operations with space launch capability, Starfighters provides critical infrastructure for testing and validating next-generation space technologies that will define the industry's future. The timing of SpaceX's public debut creates a unique inflection point for space sector investment. 
While SpaceX's massive scale and valuation may limit individual investor access, the broader ecosystem offers diversified exposure through companies with established capabilities, strategic partnerships, and expanding market opportunities. Companies that combine operational expertise with innovative partnerships are positioned to benefit from sustained sector growth as space becomes increasingly central to both commercial and national security priorities. This is a digital media distribution. MIQ has been paid by CDMG. MIQ does not own shares of FJET but reserves the right to buy/sell. Distributed by USA News Group on behalf of MIQ. Reviewed/approved by CDMG. Please see https://equity-insider.com/fjet-profile/ for more information about our disclosure. CONTACT: EQUITY INSIDER [email protected] (604) 265-2873

Welcome to our press review of events in the United States. Every Wednesday we look at how the Swiss media have reported and reacted to three major stories in the US - in politics, finance and science. The already shaky military alliance between the United States and Europe wobbled even more last week as US President Donald Trump took aim once again at the North Atlantic Treaty Organization (NATO). This has left the Swiss media wondering whether this marriage of military might is headed for a divorce. The Tages-Anzeiger pulled no punches in its assessment of the situation. "Trump is engaging in blackmail," said the newspaper after his stormy meeting with NATO Secretary General Mark Rutte. "Nothing is normal anymore in the NATO-US relationship," added Fredy Gsteiger, diplomatic correspondent at Swiss public broadcaster SRF. "This bodes ill for NATO. Such messages are received with satisfaction in the enemy camp," Gsteiger said, referring to Russia. Trump's latest beef is the reluctance of NATO countries to join hostilities against Iran. The Swiss press views this argument with incredulity. The consensus view is that Iran poses no threat to Europe or any NATO member state. Trump has threatened to withdraw US troops from Europe in the past, without following up on these statements. "But with an erratic president who says one thing today and does the opposite tomorrow, such threats must be taken seriously," said the Tages-Anzeiger. The newspaper is clear that the US has become an unreliable partner in the future defence of Europe. This puts the onus on Europe to beef up its own military credentials. "When a US president like Trump uses the continent's security as leverage, dependence on this partner remains a risk," the Tages-Anzeiger writes. "Therefore, more serious European cooperation, greater military capabilities, more strategic independence, and faster closing of critical gaps are no longer optional, but essential." 
The impending stock market listing of Elon Musk's company SpaceX could inspire a new golden age of firms offering shares to the public, hopes the Neue Zürcher Zeitung (NZZ). Market observers speculate that SpaceX could be valued at up to $1.8 trillion (CHF1.4 trillion) at its initial public offering (IPO), more than double the previous record IPO by Saudi Aramco in 2019. The NZZ bemoans a recent trend of start-ups preferring venture capital funding to expand, only to be taken over by a larger rival. This means that only private equity firms and company founders get to benefit from the profits. Listing on a stock exchange spreads wealth more evenly, the business-friendly newspaper argues. "Small investors would also benefit, as nothing contributes to the democratisation of company ownership and wealth accumulation as much as publicly traded shares," writes the NZZ. But the newspaper also strikes a note of caution. The euphoria surrounding the listing plans of SpaceX, and other companies like OpenAI and Anthropic, also comes with risks. "Some of these lofty dreams could burst. The business models of the two AI specialists, in particular, are still largely untested," the NZZ says. The hype surrounding such huge company listings in the United States could also come back to haunt small investors, it says. The markets are relying on Elon Musk and other company founders playing fair rather than dumping their existing stakes for a quick profit. "Do they [IPOs] offer new, publicly traded shareholders the prospect of capital gains, or do they merely serve as an opportunity for existing owners to cash in on their shares?" asks the NZZ. The Silicon Valley firm Anthropic has created yet another stir with its new model called Mythos that can easily detect weaknesses in IT systems. Anthropic appears alarmed at the power of its creation, holding back release for fear of it falling into the hands of malicious hackers. 
But several Swiss media sense a familiar marketing ruse from the AI industry playbook. Companies first spread alarm at the threat posed by the mysterious new technology and then assure people of their ability to control it for the public good. "With these announcements, Anthropic is achieving a double victory: the company is demonstrating the power of its upcoming models while simultaneously projecting an image of a responsible player in artificial intelligence," says Le Temps. The Geneva newspaper points out that Anthropic is planning to raise fresh funds by issuing shares in an anticipated initial public offering (IPO). The company is also embroiled in a row with the White House over allowing the US Department of Defense to use its technology. Mythos could be the key both to raising public awareness of Anthropic and to persuading US President Donald Trump to adopt a more positive stance towards the company. The NZZ also asks whether Anthropic is scaremongering about the capabilities of Mythos. It points out that another tech company has already spotted many of the same IT vulnerabilities using older AI models, and that alarms about outdated IT systems have been sounded for years - long before AI came along. Perhaps the Mythos scare could finally prompt faster and more efficient action to patch up leaky databases, it says. "IT security professionals could use this opportunity to secure necessary funding. If they succeed, the considerable buzz surrounding Mythos would certainly have a positive effect," the newspaper concludes. The next edition of 'Swiss views of US news' will be published on Wednesday, April 22. See you then! If you have any comments or feedback, email [email protected]

Novartis CEO Vas Narasimhan joins Anthropic's oversight body
The appointment was made in accordance with Anthropic's specific governance structure, the Long-Term Benefit Trust, as announced by the US company on Tuesday night. This is an independent body overseeing Anthropic's long-term direction, designed to protect the company's mission from short-term commercial pressures. In February, the trust announced the appointment of former Microsoft CFO Chris Liddell, who also served under Donald Trump's first presidency as White House deputy chief of staff for policy coordination and director of strategic initiatives. Until now, Narasimhan has not held any other directorship. According to an article in the Wall Street Journal, Narasimhan's appointment is tied to Anthropic's potential IPO, which could take place as early as this year, and to the planned expansion of its healthcare business. According to the article, Anthropic recently acquired biotech start-up Coefficient Bio for $400 million (CHF312 million). Anthropic has been operating one of its three European offices in Zurich since autumn 2024. It is looking to strengthen its Swiss team, and the company is prepared to spend significant sums to do so. According to a job advertisement from the company, the proposed annual salary for an AI specialist is between CHF280,000 and CHF680,000.

Anthropic is moving closer to a possible public listing as private-market valuation talk rises to as much as $800 billion. At the same time, OpenAI remains the other company most closely watched in the same race. Investors are now looking at revenue growth, enterprise demand, IPO readiness, and the cost of running large AI systems. Neither company has confirmed a final IPO date. However, market attention has moved beyond model launches and product demos. The focus is now on which company can show scale, stronger business demand, and a more structured path to the public markets. A listing by either company would mark a major step for the AI sector. For now, Anthropic has drawn extra attention because of its recent valuation jump and its rising revenue. OpenAI, however, remains close in size and still holds a large position in the market.

Anthropic has made major changes to how it charges enterprise customers using its Claude AI service. The company is shifting from older fixed-price plans to a system where businesses pay a smaller fee for each user seat and then pay separately for expected monthly usage. Earlier discounts on API access have been removed, which could make some companies spend more overall; however, the basic price per user is now lower than before, making it easier for small teams to get started. The shift signals that AI tools are increasingly being priced on actual use, like a utility, rather than on fixed subscription fees, as more businesses around the world adopt them. The company says the new pricing model aims to make access to Claude more flexible for business customers while linking cost more closely to how much the service is used each month. Instead of paying only a fixed monthly amount, firms now pay a lower per-user fee and commit to an expected usage level in advance, which the company says lets it match costs more fairly to how much each team uses AI tools for tasks such as coding, writing, and customer support. Some customers who expected low monthly use must now agree to higher spending estimates even if they do not consume that much, raising worries that overall costs could rise, especially for organisations whose usage is inconsistent. On the other hand, the starting price per user has been cut compared with older plans - in some plans from $30 to $15 - which makes it cheaper for small teams and new users to begin. 
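As a rough illustration of the trade-off described above: the $30 and $15 per-seat figures come from the report, but the per-unit usage rate and the usage volumes below are purely hypothetical, chosen only to show why small teams can come out ahead while heavy users may pay more.

```python
# Toy comparison of a flat per-seat plan vs. a seat-plus-usage plan.
# Seat prices ($30 old, $15 new) are from the article; the unit price
# and usage volumes are invented for illustration.

def flat_plan(seats, price_per_seat=30):
    """Old model: one fixed monthly fee per user."""
    return seats * price_per_seat

def seat_plus_usage(seats, expected_units, price_per_seat=15, unit_price=0.002):
    """New model: lower seat fee plus a committed monthly usage estimate."""
    return seats * price_per_seat + expected_units * unit_price

small_team = (5, 10_000)        # 5 seats, light usage
large_org = (500, 40_000_000)   # 500 seats, heavy committed usage

for seats, units in (small_team, large_org):
    print(f"{seats} seats: flat ${flat_plan(seats):,.0f} "
          f"vs seat+usage ${seat_plus_usage(seats, units):,.0f}")
```

With these made-up volumes, the small team's monthly bill drops (from $150 to $95) while the large organisation's committed bill rises well above the old flat fee, which mirrors the concern the article attributes to big customers.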
The company is also increasing its computing power by working with major compute providers such as Amazon and Google to handle the growing demand from business customers. Recent reports say the company's revenue has grown strongly because more industries are using AI tools. The company also said that rapid usage patterns from a small number of customers can lead to faster consumption of computing resources, which is why it is adjusting pricing rules to balance demand and supply more carefully. In future updates, the company plans to improve efficiency so customers can get more stable performance at a lower cost.


Crypto exchange Kraken is moving forward with a U.S. initial public offering after a brief pause, with co-CEO Arjun Sethi confirming the plan at the Semafor World Economy conference in Washington, D.C., according to CNBC. Separately, Deutsche Börse announced it would acquire a 1.5% fully diluted stake in Kraken's parent company, Payward, Inc., through a $200 million secondary transaction involving existing shares. The deal implies a valuation of $13.3 billion for Kraken, according to CoinDesk. The transaction is expected to close in the second quarter, subject to regulatory approval. The implied valuation marks a significant drop from the $20 billion figure attached to Kraken's $800 million fundraising round in November 2025, according to CNBC. Kraken said in November it had confidentially submitted a draft registration statement on Form S-1 with the U.S. Securities and Exchange Commission. At the time, the company said the number of shares and price range had not yet been determined, and that the offering would occur after the SEC completes its review, subject to market and other conditions. According to CNBC, the listing effort stalled after bitcoin slid to roughly 40% beneath the peak it set in October, souring the conditions needed to move forward. The cryptocurrency has rebounded since then, touching $76,000 for the first time since February and posting gains of roughly 9% over the course of April. Deutsche Börse and Kraken had previously announced a strategic partnership in December 2025, aimed at bridging traditional financial markets and the digital asset economy. Deutsche Börse described the arrangement as covering a broad range of functions -- among them trading, custody, settlement, collateral management, and tokenized assets -- designed to connect the two ecosystems for institutional clients. 
According to CoinDesk, the exchange operator had already built out a crypto trading platform aimed at institutional clients by 2024, and separately joined forces with Societe Generale-FORGE to bring euro and dollar stablecoin support into its post-trade infrastructure. Founded in 2011, Kraken offers trading in more than 450 digital assets, U.S. futures, U.S.-listed stocks and ETFs, and fiat currencies, the company said.

- Anthropic's Claude Mythos Preview is a vulnerability-seeking AI that locates deep smart-contract, wallet and cross-chain bridge flaws at machine speed; security firms warn exploits could cause "hundreds of millions to billions" in irreversible DeFi/crypto losses.
- Defensive response: Anthropic launched Project Glasswing with AWS, Google, Microsoft and JPMorgan and committed up to $100M in usage credits; major CEXs (Coinbase, Binance) and DEX teams (Uniswap, >$3B TVL) are seeking early access while the Fed and Treasury convened emergency talks.
- Market impact: AI dramatically shortens time-to-exploit versus traditional audits, creating a systemic crypto security risk that markets may be underpricing today; quantum remains a longer-term cryptographic threat.
Anthropic's Mythos could expose the crypto industry to hundreds of millions, if not billions, of dollars in sudden, irreversible losses. That is the stark reality facing digital asset markets following Anthropic's quiet unveiling of Claude Mythos Preview, a vulnerability-seeking AI model the San Francisco startup admits is simply too dangerous to release to the public. Deddy David, chief executive of blockchain security firm Cyvers, told CryptoSlate about the catastrophic scale of the problem, noting that the financial exposure of AI-driven exploits in crypto ranges from hundreds of millions to billions of dollars. He said: "If AI can identify vulnerabilities at scale across core internet infrastructure, crypto will be one of the first markets to feel the impact." If those estimates are correct, the scope of potential damage is staggering. Moreover, the scale of this new threat isn't just about bad actors writing slightly better phishing emails or generating malicious code snippets. Instead, it is about an autonomous system capable of finding deep, emergent logic flaws across smart contracts, wallets, and cross-chain bridges before human auditors even know where to look. 
For years, crypto founders and security researchers have obsessed over "Q-Day," the theoretical future date when a quantum computer becomes powerful enough to shatter blockchain cryptography. But the recent launch of Mythos is forcing a pivot. Security experts have noted that the most immediate threat to digital assets is no longer a future attack on cryptography. It is an AI system that can already uncover exploitable flaws in the very software layer the industry depends on. Anthropic's Mythos model fundamentally rewrites the timeline of infrastructure risk. According to the company, the model has already successfully identified vulnerabilities across every major web browser and operating system. In one alarming instance, it unearthed a 27-year-old bug buried in a critical piece of security infrastructure, alongside multiple deep-seated flaws within the Linux kernel. This was also corroborated by the UK government's AI Security Institute (AISI), which noted: "Our evaluation of Mythos Preview shows that it - and potentially future models - could be directed to autonomously compromise small, weakly defended, and vulnerable systems if given network access." The primary danger from these revelations is not simply that artificial intelligence makes cyber risk possible. Hackers have always existed. It is that AI radically compresses the time between bug discovery and exploit development. This means that vulnerability research that historically required months of painstaking human labor can now be executed at machine speed. For the traditional financial system, this represents a severe escalation in the cyber arms race. For the crypto industry, where transactions are instantaneous, irreversible, and governed entirely by autonomous code, it represents an immediate, systemic vulnerability. The architecture of the crypto ecosystem makes it uniquely vulnerable to machine-speed auditing. 
While traditional banks rely on siloed, proprietary networks with centralized fail-safes and circuit breakers, the digital asset sector runs almost entirely on public code. The industry is built on open-source dependencies, browser-based wallets, remote procedure call infrastructure, and smart contracts that are completely transparent to anyone or any AI model wishing to inspect them. This transparency creates a massive, publicly available attack surface. Compounding the risk is a severe structural mismatch between the value secured on-chain and the security budgets of the organizations that maintain it. Lean protocol teams frequently manage aging codebases that hold hundreds of millions of dollars in total value locked. Alex Svanevik, the chief executive of the agentic trading platform Nansen, told CryptoSlate: "Mythos is a different kind of threat: it's already finding vulnerabilities in the infrastructure crypto runs on that humans and every automated tool missed for decades." When AI-accelerated vulnerability discovery meets instant value transfer, the results can be devastating. Thus, the industry can no longer rely on traditional audits or post-incident detection. David explained: "When you combine AI-accelerated vulnerability discovery with instant, irreversible transactions, you dramatically shorten the path from bug to breach to loss. This is not just an increase in attack surface, it's an acceleration of time-to-exploit in a system where seconds matter." So what exactly is an AI model looking for? According to security experts, the most exposed layers are highly complex smart contracts and cross-chain bridges. These protocols are susceptible to emergent vulnerabilities, such as subtle state inconsistencies between upgradeable contracts or edge-case interactions across different modules. These are not simple syntax errors that a standard audit catches. Instead, they are complex interaction paths that large-scale AI simulations can easily surface. 
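To make the "complex interaction path" idea concrete, here is a deliberately simplified toy in plain Python (not real smart-contract code; every name is invented for illustration). It sketches the classic reentrancy pattern: a ledger that pays out before updating the caller's balance, so a malicious callback can be paid repeatedly against a single deposit.

```python
# Deliberately simplified toy (plain Python, not real contract code).
# The vulnerable ledger performs the external payout *before* zeroing
# the caller's balance, so a malicious callback can re-enter withdraw()
# and be paid repeatedly against one deposit.

class VulnerableLedger:
    def __init__(self):
        self.balances = {}
        self.vault = 100  # funds belonging to other users

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.vault += amount

    def withdraw(self, who, callback):
        amount = self.balances.get(who, 0)
        if amount > 0 and self.vault >= amount:
            self.vault -= amount    # payout happens first...
            callback(self, who)     # ...and the callback can re-enter
            self.balances[who] = 0  # ...before the balance is updated

def drain(ledger, who, _depth=[0]):
    # Attacker's callback: re-enter withdraw() a few more times while
    # the balance still reads 10 (mutable default used as a counter).
    if _depth[0] < 3:
        _depth[0] += 1
        ledger.withdraw(who, drain)

ledger = VulnerableLedger()
ledger.deposit("attacker", 10)
ledger.withdraw("attacker", drain)
print(ledger.vault)  # 70: an honest withdrawal would have left 100
```

No single line here is wrong in isolation; the flaw only emerges from the ordering of the payout, the callback, and the balance update, which is exactly the kind of interaction-path bug the article says large-scale AI analysis surfaces faster than human audits.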
While artificial intelligence poses an immediate threat to the software layer, quantum computing remains the ultimate, looming threat to the cryptographic foundation of digital assets. Google Research has warned that future quantum computers may be able to break the elliptic-curve cryptography used in crypto systems with fewer resources than previously estimated. A sufficiently powerful cryptanalytically relevant quantum computer (CRQC) could derive private keys from public keys in minutes. With Bitcoin hovering around $70,000, the digital asset ecosystem presents a multi-trillion-dollar bounty. Current estimates suggest that up to 37% of circulating Bitcoin could be vulnerable to such a quantum hijacking, in which an attacker derives the key and redirects funds before the network confirms a transaction. However, Google's public messaging remains focused on preparation and migration. The tech giant recently announced a 2029 target for a full industry transition to post-quantum cryptography. That contrast highlights the core of the industry's current dilemma. Anthropic's model represents software exploits happening right now. Quantum computing could pose a cryptographic threat later, assuming the industry fails to migrate its security standards in time. Chris Smith, chief executive of the cryptography firm Quantus, emphasized this exact distinction in his statement to CryptoSlate. He noted that while AI models are highly effective at finding software bugs, quantum computing threatens the very foundations of the mathematics on which the crypto industry is built. If the underlying algorithms are broken, even flawless software becomes entirely insecure. Recognizing the sheer immediacy of the AI threat, the defensive race has officially begun. Through a new initiative called Project Glasswing, Anthropic has partnered with major tech firms and financial institutions, including Amazon Web Services, Google, Microsoft, and JPMorgan Chase, to use Mythos Preview to proactively find and fix flaws in critical systems. 
The company is committing up to $100 million in usage credits to help secure infrastructure before malicious actors can develop similar offensive capabilities. The threat has reached the highest levels of government. Last week, Federal Reserve Chairman Jerome Powell and Treasury Secretary Scott Bessent convened a surprise meeting with major US bank chief executives to discuss the specific systemic risks posed by models like Mythos. Meanwhile, the crypto industry is scrambling to join this defensive perimeter. Major exchanges, including Coinbase and Binance, are reportedly in close communication with Anthropic to secure early access to the Mythos model. Decentralized platforms are also echoing the urgency, with Uniswap founder Hayden Adams publicly requesting access to test the model against the platform. Uniswap is the largest decentralized exchange protocol, with more than $3 billion in assets locked. Nansen's Svanevik argues that the crypto industry could use the model in ways that would make it "the best security auditing tool ever built." According to him: "Smart contracts have historically been audited by humans -- slow, expensive, incomplete. An AI that can find a 27-year-old bug in OpenBSD can also find the reentrancy vulnerability that hasn't been caught yet in a major DeFi protocol. The question is whether defenders get access before attackers do -- and whether the crypto industry moves fast enough to use it proactively rather than reactively." Simultaneously, OpenAI has expanded access to a more cyber-permissive model, GPT-5.4-Cyber, through its Trusted Access for Cyber program, allowing vetted security vendors to stress-test their own systems. Despite the severe implications of machine-speed vulnerability discovery, crypto markets have shown remarkably little reaction to the advent of frontier cyber-offensive AI. Financial markets have spent years developing a vocabulary for quantum risk. 
Investors broadly understand that a quantum computer could break current encryption standards and the catastrophic impact that would have on digital ownership. However, the market appears far less prepared to price a systemic threat that operates not through a dramatic break in mathematics, but through quiet audit failures, compromised wallet dependencies, and complex exploit chains. As artificial intelligence fundamentally reshapes the speed and scale of cyber warfare, the digital asset market may significantly underestimate the fragility of the very infrastructure on which it is built.

Deutsche Börse's $200 million investment and Kraken's new Fedwire access add traditional-market backing and stronger payment infrastructure to its public-market push. Kraken is aiming for a Wall Street debut in the third quarter after confidentially filing for an IPO late last year, a step that pushes one of crypto's biggest exchanges closer to the public markets. The momentum behind the filing is not just about timing, but about how Kraken is repositioning itself as a broader financial platform. In April, a transaction involving existing shares implied a valuation of about $13.3 billion, below the company's late-2025 peak of $20 billion but still large enough to keep it among the sector's heavyweight names. The exchange's pitch is increasingly tied to access. At the Semafor World Economy event in Washington, co-CEO Arjun Sethi said Kraken wants retail users to have access to the same types of trading tools that major institutional firms already use. That ambition gives the IPO story a clearer commercial logic: Kraken is trying to turn institutional-grade market access into a mainstream product. Sethi framed that mission around helping users do more with their own capital, signaling that the company's product strategy is expanding beyond basic crypto trading services for everyday investors. Kraken's capital story also picked up a notable endorsement from traditional market infrastructure. Deutsche Börse agreed to invest $200 million in Payward Inc., Kraken's parent company, by purchasing existing shares for a 1.5% fully diluted stake, with the deal expected to close in the second quarter pending regulatory approval. That investment matters because it links Kraken's IPO track with growing support from established financial operators, not just crypto-native backers. The transaction followed a partnership announced in December and added fresh weight to the exchange's effort to enter public markets this year. 
Another piece of the picture arrived in March, when Kraken received a limited purpose account from the Federal Reserve Bank of Kansas City. The approval made it the first digital asset bank with direct access to the US central bank's payment infrastructure and allows settlement on Fedwire without an intermediary bank. That operational shift strengthens Kraken's case by showing it can build closer to core financial rails. Kraken Financial plans to roll out the access in phases, beginning with client activity under its Wyoming SPDI structure.

CAPE CANAVERAL, Fla., April 15, 2026 (GLOBE NEWSWIRE) -- As SpaceX prepares for what could become the largest IPO in history with its confidential S-1 filing targeting a $1.75 trillion valuation and June roadshow, space infrastructure companies are positioning for sector-wide growth. Starfighters Space, Inc. (NYSE American: FJET), owner and operator of the world's largest commercial supersonic aircraft fleet, has announced a strategic partnership with Blackstar Orbital to advance flight testing of revolutionary reusable hypersonic space systems. The March 26 Technical Interchange Agreement (TIA), unveiled at the Satellite 2026 conference, establishes a framework for integrating Blackstar's innovative "SpaceDrone" technology with Starfighters' proven F-104 aircraft platform. The collaboration will enable progressive flight testing from supersonic captive carries beginning in Q4 FY26 through high-altitude, supersonic release operations in the Eastern Range off Florida's Atlantic Coast. Blackstar Orbital is pioneering a new class of spacecraft with its lifting-body SpaceDrone design -- reusable, hypersonic satellites that launch as conventional payloads but return to Earth like spaceplanes. This technology addresses growing demand for responsive space operations and rapid mission turnaround capabilities, priorities that have gained urgency amid increasing commercial and national security space requirements. "This partnership highlights the role Starfighters plays in bridging the gap between concept and flight for next-generation aerospace systems," said Tim Franta, CEO of Starfighters Space. 
"Blackstar is developing a highly differentiated approach to reusable space platforms, and our F-104 fleet provides a proven, high-performance environment to test and validate those systems in real-world conditions."
SpaceX IPO Transforms Space Investment Thesis
SpaceX's April 1, 2026 confidential IPO filing has fundamentally altered the space investment landscape. With reports of seeking up to $75 billion in capital at valuations approaching $1.75 trillion, the offering would eclipse all previous public debuts. Morgan Stanley, Bank of America, Citigroup, JP Morgan, and Goldman Sachs are leading a 21-bank syndicate for the June 8 roadshow, with public trading anticipated as early as July 2026. The extraordinary valuation reflects Starlink's evolution into a $16 billion annual revenue generator and the strategic integration of xAI's artificial intelligence capabilities through a $250 billion February acquisition. This convergence of space infrastructure, satellite connectivity, and AI positions SpaceX at the center of multiple high-growth technology sectors. For investors seeking diversified exposure to the space economy, publicly traded companies with established operations and expanding capabilities represent compelling alternatives to direct SpaceX investment. The sector includes established aerospace primes, emerging space technology providers, and specialized infrastructure companies like Starfighters that enable critical testing and operational capabilities.
Hypersonic Flight Testing Platform
The Starfighters-Blackstar partnership leverages unique infrastructure at NASA Kennedy Space Center, where Starfighters operates its fleet of modified F-104 supersonic aircraft capable of sustained MACH 2+ operations. Under the TIA, Starfighters has developed a specialized BL75 pylon that serves as the critical structural interface between the F-104 platform and Blackstar's SpaceDrone. 
The phased testing approach begins with captive carry operations to validate aerodynamic modeling and performance characteristics. Successful completion will enable progression to high-speed release testing over designated ocean ranges, with potential expansion to overland operations as system maturity is demonstrated. This methodology provides real-world validation while maintaining safety protocols essential for experimental aerospace operations. "Access to Starfighters' flight test platform allows us to accelerate development of our SpaceDrone and move into flight validation with confidence," said Christopher Jannette, CEO of Blackstar Orbital. "This collaboration is a critical step in demonstrating a new class of reusable, hypersonic satellite systems."
Space Infrastructure Investment Opportunities
The approaching SpaceX IPO has intensified investor focus on space infrastructure companies with proven capabilities and growth prospects. Recent developments across the sector highlight sustained momentum: AST SpaceMobile (NASDAQ: ASTS) secured a strategic partnership with TELUS on April 2, 2026, for expanding cellular broadband infrastructure in Canada by 2026, driving shares higher. The company reported $70.9 million in 2025 revenue -- its first year as a revenue-generating business -- with 2026 guidance of $150-200 million supported by over $1.2 billion in contracted revenue commitments. With plans to deploy 45-60 satellites by year-end 2026, ASTS is scaling its space-based cellular network that works directly with unmodified smartphones. GE Aerospace (NYSE: GE) continues to demonstrate strong fundamentals with upcoming Q1 2026 earnings on April 21 expected to build on Q4's solid performance of $1.57 EPS on $11.9 billion revenue. 
The company benefits from a record $190 billion backlog and recent guidance for low double-digit revenue growth in 2026, supported by strong commercial aviation aftermarket demand and defense contract momentum including a recent $1.4 billion defense award and $1 billion manufacturing investment commitment. RTX Corporation (NYSE: RTX) reported strong Q4 defense bookings of $10.3 billion, culminating in a record backlog of $268 billion with $107 billion specifically in defense programs. Major recent awards include a $1.7 billion contract for four Patriot air and missile defense systems to Spain and a $1.2 billion Tamir missile production agreement. The company's diversified defense and commercial aerospace portfolio provides stability through Pratt & Whitney engines and Collins Aerospace systems. TransDigm Group (NYSE: TDG) has demonstrated resilience through recent acquisitions including SEI Industries, Raptor Scientific, and the components business of Communications & Power Industries. Morgan Stanley analysts view the stock's 2026 underperformance as a buying opportunity given the company's attractive valuation, balance sheet strength, and position as the leading commercial airline aftermarket investment with a $1,660 price target representing significant upside from current levels. Space Sector Growth Catalysts The space economy is experiencing fundamental transformation driven by converging trends including increased government budgets exceeding $100 billion annually, expanding commercial applications, and emerging technologies like hypersonics and reusable systems. The global space economy is projected to reach $1.8 trillion by 2035, supported by applications ranging from satellite communications to space-based manufacturing. Starfighters' partnership with Blackstar Orbital positions the company at the intersection of these growth trends. 
As the world's only commercial operator capable of sustained Mach 2+ operations with space launch capability, Starfighters provides critical infrastructure for testing and validating next-generation space technologies that will define the industry's future. The timing of SpaceX's public debut creates a unique inflection point for space sector investment. While SpaceX's massive scale and valuation may limit individual investor access, the broader ecosystem offers diversified exposure through companies with established capabilities, strategic partnerships, and expanding market opportunities. Companies that combine operational expertise with innovative partnerships are positioned to benefit from sustained sector growth as space becomes increasingly central to both commercial and national security priorities.
Claude Mythos is a meaningful moment. But the real danger isn't the explosion of CVEs. It's what attackers find when they exploit them. I've been watching the security industry react to Anthropic's Project Glasswing announcement, and what I'm seeing falls into two camps. One says the sky is falling. AI can now autonomously find and exploit vulnerabilities, and defenders can't keep up. The other says to calm down because context still favors the defender, and the threat is overblown. The conversation will continue with OpenAI's latest model release. Both camps are arguing about the door. Let's talk about what's behind it. Anthropic has built a model that can autonomously discover zero-day vulnerabilities in major operating systems and browsers. Vulnerabilities that survived decades of human review and millions of automated tests. That's a real capability jump, and it's only a matter of time before other models can do the same. Critics are right that AI attackers start context-poor. They're probing from the outside. They don't know your architecture. They can't read your data or your proprietary source code. But attackers don't stay context-poor. The switch from "outside the perimeter" to full situational awareness can flip in an instant. The security industry's response to Glasswing has been focused on CVEs. Patch faster. Reduce attack surface. Build AI into your AppSec program. This is solid advice. What's missing is what happens after a vulnerability is exploited. When a Mythos-class model finds a zero-day in the Linux kernel and chains it to privilege escalation, the exploit isn't the target; it's the foothold. The blast radius -- what data an attacker can access, exfiltrate, or poison from that position -- is what determines the damage. The average attacker already dwells inside an environment for weeks before detection, and most data that an identity can access is overprivileged. 
When AI compresses the time from exploit to breach from days to hours, both of those problems become critical. You can't patch your way out of them. There are two ways to make a breach survivable. One is to prevent attackers from getting in -- the door lock. The other is to make sure that getting in doesn't mean getting everything. In an AI-accelerated threat environment, the second capability isn't optional. It's the one that determines whether a breach becomes a headline. Here's what we've learned from building Varonis: the fundamentals of data security don't change when the threat landscape shifts. What changes is the cost of getting them wrong. Data oversharing has always been dangerous. Excessive permissions have always expanded the blast radius. Unmonitored access has always been how attackers move laterally undetected. AI doesn't invent these problems -- it removes the friction that used to slow attackers down in exploiting them. Today, Mythos focuses on identifying vulnerabilities in code. But the same pattern-recognition capability applied to identity graphs, permission models, and sensitive data classifications will eventually surface the toxic combinations that turn a minor foothold into a catastrophic breach. The organizations that haven't addressed their data exposure won't need an attacker to find it for them; the model will do it faster than any human red team ever could. This is why we've invested so heavily in AI security. Unless you're starving AI of the data it needs to be useful, the non-deterministic systems inside your organization are creating new attack paths to data you may not even know exists. Every AI agent you deploy has permissions. Every model you connect to training data or a RAG pipeline has a blast radius. First, know what data is exposed. In most organizations, this number is shocking. Sensitive data accessible to everyone in the company. Cloud storage with no expiration on access grants. 
AI service accounts with admin rights to production databases. Map it now, before an attacker does it for you. Second, reduce the blast radius before the breach, not after. If an attacker authenticated as a random employee, what could they reach? That gap is your risk. Continuous least-privilege enforcement is the holy grail. Third, instrument for speed. As AI compresses the time from foothold to exfiltration, your detection must compress too. Behavioral baselines, anomaly detection, automated response operating at AI speed. Code is where the Glasswing story begins. Data is where the story ends. And your ending is determined long before the CVE is published -- by the decisions you make today about access, exposure, and visibility. One thing the Glasswing conversation hasn't surfaced enough: the AI systems inside your organization are themselves a new attack surface that Mythos-class models will learn to exploit. Your agents are making decisions about data access. Your RAG pipelines are retrieving documents. Your coding assistants are reading source code. Each one has a permission model designed for speed, not security. Prompt injection, data exfiltration through model outputs, and agent impersonation. These aren't theoretical. They're the frontier a Mythos-class attacker will probe once the infrastructure vulnerabilities are patched. The AI attack surface isn't just about software vulnerabilities. It's the data those systems can reach, and the paths an attacker can walk through them. That's the map. Make sure you've seen it before they have. The door is harder to defend. Make sure you know what's behind it.
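The blast-radius idea above can be sketched as a small graph problem: given which identities can reach which resources (directly or through group membership), compute everything a single compromised identity can touch. The sketch below is illustrative only; the identities, groups, and resources are hypothetical and this is not any vendor's API.

```python
from collections import deque

# Hypothetical permission graph: an edge means "can access" or "is a member of".
# All names are made up for illustration.
GRAPH = {
    "svc-ai-agent":    ["rag-index", "grp-engineering"],
    "grp-engineering": ["source-repo", "prod-db"],
    "rag-index":       ["hr-folder"],   # the RAG pipeline indexed an overshared folder
    "alice":           ["grp-engineering"],
}

SENSITIVE = {"prod-db", "hr-folder", "source-repo"}

def blast_radius(identity):
    """Return every node reachable from a compromised identity (simple BFS)."""
    seen, queue = set(), deque([identity])
    while queue:
        node = queue.popleft()
        for nxt in GRAPH.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Which sensitive data does a single compromised service account expose?
exposed = blast_radius("svc-ai-agent") & SENSITIVE
```

Run against this toy graph, the compromised AI service account reaches all three sensitive resources, including the HR folder it only touches indirectly through the RAG index -- exactly the kind of toxic combination that turns a foothold into a breach.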

I've tested a lot of AI chatbots, but Perplexity and Gemini are different beasts -- one is built to find the truth, the other to build on it. Both tools are capable. Both are widely used. But they're built on fundamentally different assumptions about what you actually need from an AI. I know because I've spent weeks running Perplexity vs. Gemini through the same research, writing, and analysis tasks I do every day, and the results weren't what I expected. Perplexity assumes you need to find and verify something. It's the AI answer engine you reach for when accuracy isn't optional. Gemini assumes you need to create or execute something. It's the AI you want living inside your workspace. To back my testing, I factored in hundreds of G2 reviews where real users have rated both tools across research depth, conversational ability, writing quality, and integrations. What I found: Gemini has the edge on creative output, deep reasoning, and working inside Google's ecosystem, while Perplexity wins on source transparency, model flexibility, and research-first workflows where you can't afford to hallucinate a citation. This comparison covers every dimension that matters if you're already evaluating both in terms of features, pricing, AI models, integrations, browser capabilities, and agentic AI, so you can make the call based on your actual use case, not a feature checklist. Note: Both Gemini and Perplexity frequently roll out new updates to these AI chatbots. The details below reflect the most current capabilities as of April 2026, but may change over time. Perplexity vs. Gemini: What's different and what's not? While I set about testing two robust question-and-answer engines, I noticed one stark difference. While Gemini integrates with the larger Google ecosystem and is available on apps like Google Docs and Google Sheets, Perplexity is more of a web browsing engine that offers automated contextual follow-up questions to make your search more immersive. 
The key difference, honestly, lies in their DNA: Gemini is a multimodal creative and productivity powerhouse built into the Google ecosystem, while Perplexity is a specialized "answer engine" designed for cited, real-time research. This interested me enough to research deeper nuances between the two -- where they converge and where they pull apart. Perplexity vs. Gemini: Key differences Based on my experience, these are the main differentiators between Perplexity and Gemini to keep in mind before working with them: * Multimodal capabilities: Gemini is natively multimodal. It processes and generates text, images, audio, and video in a single workflow, with Veo 3.1 for video, Nano Banana 2 for image generation and editing, and Lyria 3 for music. Perplexity has expanded into image and video generation (also powered by Veo 3.1 and models including GPT Image 1 and Seedream 4.5), but the experience is oriented around research workflows rather than creative production. If multimodal creation is central to your work, Gemini is the stronger platform. * Model flexibility: Perplexity is model agnostic. It lets you choose between the best AI models available to answer your prompt -- GPT-5.2, Claude Sonnet 4.6, Gemini 3.1 Pro, and more. Max subscribers get Model Council, which runs the same prompt through three frontier models simultaneously and synthesizes the outputs. This makes it a great "all-in-one" platform if you want to test how different AIs handle the same question. Gemini stays within Google's own model family, like Gemini 3.1 Pro or Gemini 3 Flash. You are essentially buying into the Google "brain." * Ecosystem and workspace integration: Gemini lives inside Gmail, Docs, Sheets, Drive, Meet, and Chrome. I did not need any additional setup. Perplexity integrates with tools like Notion, Linear, GitHub, and both Google and Microsoft Workspace via connectors, but each requires configuration. * AI browser: Perplexity built Comet, a standalone AI-native browser. 
Google embedded Gemini as a persistent sidebar inside Chrome, with deep hooks into Gmail, Calendar, and Maps. Different philosophies: one asks you to switch browsers, the other meets you where you already are. * Agentic AI: Perplexity Computer orchestrates workflows across 19 AI models in parallel. It is model-agnostic by design. Gemini's agentic tools (Deep Research, Project Mariner, Jules) are powerful but ecosystem-native, working best inside Google's own stack. * Coding and debugging: Perplexity handles code reasoning well via GPT-5.2 and Claude Sonnet 4.6 -- strong for debugging, explanation, and quick generation. It has no official native IDE integration, though third-party plugins exist for VS Code and JetBrains if you're willing to set them up. Gemini Code Assist is built into VS Code, JetBrains, and Android Studio out of the box. Perplexity vs. Gemini: Key similarities While Perplexity and Gemini differ in research mechanics, content style, and conversational tone, they share many of the same use cases. Built on a common transformer architecture, the two AI chatbots also have more in common than you might think. * Conversational AI: Both sustain strong multi-turn conversations with solid in-session context. You can refine, follow up, and build on previous answers in either tool. * Deep research: Both have dedicated Deep Research modes that autonomously plan, search, and return structured cited reports. Perplexity is faster; Gemini goes deeper on technical material. * Content generation: Both generate, summarize, and restructure content across formats competently. Gemini edges ahead on creative output; Perplexity keeps outputs closer to the sourced material. * Image generation: Both tools offer built-in image generation. Gemini's Nano Banana 2 is a first-party model with 4K output. 
Perplexity offers model choice -- GPT Image 1, Nano Banana, Seedream 4.5 -- with access scaling by plan. * Multilingual support: Gemini covers 70+ languages; Perplexity supports 30+. Not a meaningful differentiator for most users since major languages are covered. * Security: Both offer enterprise-grade security. Gemini runs on Google Cloud infrastructure with ISO, SOC 2, and GDPR certifications. Perplexity adheres to SOC 2 Type II standards, with data encrypted at rest and in transit. It's also GDPR and PCI compliant. Disclaimer: AI responses may vary based on phrasing, session history, and system updates for the same prompts. These results reflect the models' capabilities at the time of testing. Perplexity vs. Gemini: How they actually performed in my tests Along with comparing both tools, it was also crucial to give a fair assessment of how each performed on a specific task. For each task, I structured my verdict in the following way. * What stood out? I'll highlight the strengths, weaknesses, or any surprises (good or bad) I noticed from both AI chatbots. * Who did it better? I'll inform you about which AI chatbot came out on top based on accuracy, efficiency, creativity, and how easy it was to use the output. * Final verdict: I'll share my honest take on which chatbot is a better choice for a particular task. Ready? Here we go! 1. Summarization For my summarization test, I asked both Perplexity and Gemini to summarize a G2 listicle (about the top construction estimating software for 2025) into a crisp TL;DR -- within 100 words -- highlighting the key shortlisting criteria. The article discussed a first-hand analysis of the seven best construction estimating software for 2025 for buyers to refine their decision-making processes. 
Prompt: Could you summarize the context in this G2 listicle in the form of a TLDR callout, which contains the major shortlisting parameters of software in the construction estimating software category, keeping your response under 100 words. Perplexity's response to the summarization prompt Perplexity's response to the prompt really perplexed me (in a good way). Beyond stating the obvious shortlisting parameters, it surfaced citations to both the original URL and the software category URL. It also added missing context around decision-making parameters that help a decision maker quickly scan the non-negotiables without reading the full article. Gemini, on the other hand, provided a neat and layered output, explaining what non-negotiable parameters to keep in mind when you begin your research process for the best construction estimating software. It laid out metrics like user satisfaction, market presence, ease of administration, and implementation, which were considered while ranking the products in the G2 listicle and are key influencing factors when investing in a worthy product. While the TLDR looks pretty decent and combines all the key parameters, it missed a major angle that gave the original listicle its depth: G2 reviews. Winner: Perplexity 2. Content creation Both Perplexity and Gemini have earned a reputation for producing high-quality, engaging, and audience-centric content that performs well across content distribution channels and improves lead generation. For this task, I thought of putting both these tools to the test for a startup idea and instructed them to brainstorm content strategies, social media captions, scripts, ad copies, and so on. The goal was to create content marketing resources for a new product campaign. 
I asked both products to generate marketing materials for a fictional product, "Mindgear", which is a smartwatch that monitors your pulse, heart rate, SpO2 levels, and blood pressure. It also comes with a built-in AI to detect your mood and align it with therapeutic voice instructions to calm you down. Marketing materials should ideally include product descriptions, taglines, social media posts, email subject lines, and scripts, essentially everything a brand would need for a full-on marketing campaign. Prompt: Generate marketing materials for a fictional product "Mindgear", which is a smartwatch that monitors your pulse, heart rate, SpO2 levels and blood pressure and comes with a built-in AI to detect your mood (happy, sad, angry or emotional) and align it with therapeutic voice instructions to calm you down. These should include product descriptions, taglines, social media posts, email subject lines, and scripts, essentially everything a brand would need for a full-on marketing campaign. Perplexity's response to the content creation prompt I really loved Perplexity's response. The content was pretty on point and hit the trigger points very well. However, I felt that it mostly reiterated what I already mentioned in the prompt and didn't have much originality. Gemini highlighted the product's USPs well, such as on-site therapeutic guidance and wearable wellness, explaining their strengths and benefits. It also created video frames within the scripts, which, in my view, was a winner for launch videos. Winner: Gemini 3. Creative writing I asked both Perplexity and Gemini to craft a short dialogue (approx 200 to 300 words) between two characters who cannot directly state their feelings or the core issue between them. Both AI models delved into the poetic essence of the topic and crafted engaging dialogues that hooked me throughout. However, they differed in their execution style and content structure. Prompt: Craft a short dialogue (approx. 
200-300 words) between two characters who cannot directly state their true feelings or the core issue between them. Their entire conversation must rely on subtext, metaphor, and indirect allusions. Ensure the reader can perceive the underlying emotional tension and unspoken truths, despite the characters never articulating them explicitly. Perplexity's response to the creative writing prompt. While Perplexity didn't add scene visuals or poetic nuances, it did succeed in creating an abstract dialogue between two friends who talk about their strained relationship in the form of a garden. While it was absolutely heartfelt and engaging, in this task, Gemini showed a bit more poetic feel and creative flair than Perplexity. Gemini's response, namely "The Wilting Garden", had me almost in tears. It was refreshing to read and draw parallels between this short dialogue and our real-life stories, which provides an interesting angle for the readers. The dialogue was sweet, easy to read, engaging, and poetic in its appearance. Winner: Gemini 4. Coding The coding test is the ultimate litmus test for AI chatbots, mostly because many early coders directly copy and paste the output code without running it through a manual compiling process. For this task, I thought a simple and responsive navigation bar for the frontend UI would be the best. I instructed the AI tool to focus on code usability, responsiveness, and UI friendliness while automatically debugging the code at runtime to eliminate errors or leaks. Prompt: Can you write HTML, CSS, and JavaScript code snippets to create a user-friendly and responsive navigation bar for my website? Perplexity's response to the coding prompt for web nav bar I love how Perplexity generated three different scripts for HTML, CSS and JavaScript files and added a disclaimer on the code being just a "sample" for the user. Not just that, it also gave an integrated code editor environment to debug, execute, compile and run code successfully. 
Gemini's response to coding a web nav bar For Gemini, I used Google AI Studio, which offers a live integrated preview of your HTML and CSS code. To view the live preview of the navigation bar, I simply had to copy and paste the code as an HTML file and run it in my browser. While both Gemini and Perplexity generated factually accurate, responsive, and user-friendly code snippets, Gemini also analyzed the utility of classes and functions. Both Gemini and Perplexity excelled in generating complete, functional code snippets. What's more, they offered a clear and practical starting point for your web development projects. Winner: Split; Perplexity for ease of code and code continuation, Gemini for elaborating on function and class declarations. 5. Aggregating multi-source information Both Perplexity and Gemini offer exceptional web browsing capabilities that help with aggregating multi-source information for user queries. Aggregating multiple sources isn't just a form of information retrieval; it requires a special degree of synthesis, critical evaluation, and nuanced understanding drawn from disparate or conflicting sources. I asked both Perplexity and Gemini to trace the evolution of public and academic discussions around the four-day work week over the last 10 years (2015-2025), identify key arguments for and against it as they emerged, note any significant real-world trials and their reported outcomes, summarize the current prevailing sentiment or points of debate with specific examples from different regions or industries, and present the findings as a chronological overview with distinct arguments and their counterpoints. Prompt: Trace the evolution of the public and academic discussion around the four-day work week over the last 10 years (2015-2025). 
Identify key arguments for and against it as they emerged, noting any significant real-world trials or studies and their reported outcomes. Conclude by summarizing the current prevailing sentiment or points of debate, citing specific examples or data points from different regions or industries where possible. Present your findings as a chronological overview with distinct arguments and their counterpoints. Perplexity's response to the multi-source information aggregation prompt. What I loved about Perplexity's response was that it pulled the arguments from news pieces, articles, and research papers, and carefully crafted the for and against arguments in a year-wise format. It was easily interpretable and gave more structure to the debate. Also, Perplexity cited 8 sources overall and pulled insightful metrics that align with user perception of a 4-day work week, which in my case was a winner! Gemini's response to the multi-source information aggregation prompt Here is what I noticed: Gemini stood out for its deeper narrative exploration of the evolving arguments and more comprehensive discussion of regional and industry nuances and specific trial outcomes over time. However, Perplexity's inclusion of recent statistics and legislative information offers a valuable snapshot of current adoption and policy discussions, complementing Gemini's narrative focus. Both win in their own ways. Winner: Split; Perplexity (for its stat-based approach) and Gemini (for its narrative focus) 6. Deep research As part of the recent upgrade to the models, AI chatbots now claim to handle complex research queries, meaning that they can go through tons of web resources for you. I aimed to put this to the test with an advanced research prompt that you can find in the PDF attached at the end of this task. Perplexity's response to the deep research prompt. 
Right off the bat, I noticed how cleanly and analytically Perplexity generated the introduction and followed it into the research objectives of that proposal. While my research question didn't explicitly mention the presence of an independent and shared variable, it is evident that Perplexity browsed high-quality and accurate case studies and derived the correlation between variables, particularly in the objectives section. It helped make my task extremely easy and convenient. However, it fell short on research design; it didn't explore research methodologies, risks, and other good stuff. Gemini's response to the deep research prompt. Where Gemini stood out was in the foreword. It started by searching for literature reviews, meta-analyses, and comprehensive reports discussing lawsuits against AI companies. That, according to me, is an early indication that your research proposal is headed in the right direction. Another standout factor is that Gemini crafted an entire research proposal (which can be used with minor tweaks, AP edits, and content refinements) as legitimate research to pitch to a startup investor. I was so impressed by Gemini's response that I ended up working on the research proposal as an independent project for my next side hustle gig. Winner: Gemini If you're interested in knowing more about the research proposals both these chatbots created as an outcome of a deep case study analysis, click here. 7. Analyzing academic papers Be it crafting a research proposal, extracting key insights from existing academic papers, or referencing accurate citations, both Gemini and Perplexity stood out to me and crunched qualitative or quantitative data within seconds. I also want to call out the "research" and "deep research" features of both of these AI tools. These features focus on AI-powered search engines that scour the web for information in real time and synthesize findings into concise answers with cited sources. 
I gave both Gemini and Perplexity a research paper, "Attention Is All You Need", and asked them to compare "attention mechanism" and "self-attention" to check how they differ, and to put the comparison in a table. Prompt: Analyze the research paper as follows: Attention is all you need. Now that you've analyzed it, based on this research paper, try to compare the attention mechanism and self-attention, and put your findings in a table. Perplexity's response was extremely succinct and to the point. It extracted key details from the research paper pretty fast and offered a structured view of the comparison I wanted. It also segregated the pointers based on multiple aspects (something I hadn't prompted it to do). The comparison pointers were well labeled and made it easy to understand the stark difference between two closely related machine learning concepts. While Gemini banked on explaining the technical parameters, I found it a little difficult to interpret. Although it extracted relevant information and dissected the intent quite well, it might be a little difficult to comprehend for a beginner-level analyst who wants to learn more about these technical concepts. Winner: Perplexity. 8. Multi-chat coherence Both Gemini and Perplexity maintain chat coherence primarily by utilizing a context window, which stores a limited history of the ongoing conversation. No matter how far back you go in the chat, they still retain the context and sentiment from earlier messages. To check the multi-chat coherence of Perplexity and Gemini, I tried setting up a game with Gemini known as the quirky gadget combo challenge. Gemini's response to multi-chat coherence After storing the value of the first innovation and locking it in, I went for the second innovation, so that Gemini would have a choice later in the game when I framed a particular scenario. 
Gemini's response to multi-chat coherence Finally, I created a fun situation that included applications of both these innovations and asked it to make sense of what was happening. We can see that Gemini retained the applications of both the innovations that I had created earlier in the chat, and was able to retrieve the exact function and the "why" behind those functions. This suggests that Gemini could easily retain the context of two specific entities throughout the chat, also known as multi-chat coherence. Similar to how Gemini reacted, Perplexity could also retain the context of both the innovations and explain the exact scenario in a detailed and structured format, while offering a strong multi-chat coherence quotient and contextual understanding of technical scenarios. Winner: Split; both Perplexity and Gemini retained context equally well. 9. Image generation AI image generation has moved from a novelty to a genuinely useful workflow feature, and both Perplexity and Gemini now offer it natively. I used the same Mindgear product brief from the content creation task to test both: generate a product visual for Mindgear showing the smartwatch on a wrist with a calm, wellness-focused aesthetic. Gemini's output was striking at first glance. The image showed a woman in a soft green shirt relaxing by a sunlit window, the round-dial Mindgear watch on her wrist, a steaming cup of tea, and a Mindfulness book in the background. It read like a lifestyle shoot -- warm, editorial, and aesthetically on-brand. But when I looked closer at the watch itself, the UI was unclear. For a product visual where the watch UI is the whole point, I felt that was a meaningful gap. The surrounding scene nailed the brief; the product needed some work. I did like the fact that I could add text or highlight something before downloading the image from Gemini. Perplexity took a different direction. 
The image placed a square-faced watch showing a meditation timer against a misty forest backdrop, resting on a mossy rock. The mood was cinematic and meditative -- and crucially, the watch UI was actually legible. You could read the screen, make out the interface elements, and understand what the product does. That said, the brand name on screen read "MINDFILHEGS" -- a text hallucination that's a known limitation and would need fixing before any real use. Both tools interpreted the wellness brief well and produced images that would have taken real effort to source or shoot manually. But they failed in opposite directions: Gemini nailed the scene and lost the product; Perplexity nailed the product and fumbled the text. So the honest verdict is Split. Neither nailed it first try, both are one regeneration away from a usable output, and they're strong in genuinely different ways -- Gemini for scene and atmosphere, Perplexity for product UI clarity. Winner: Split Here's a quick recap of which chatbot won each task: * Summarization: Perplexity * Content creation: Gemini * Creative writing: Gemini * Coding: Split * Aggregating multi-source information: Split * Deep research: Gemini * Analyzing academic papers: Perplexity * Multi-chat coherence: Split * Image generation: Split. If you want a research companion that feels like a faster, web-connected assistant, Perplexity is hard to beat. But if you need advanced, integrated workflows and richer reasoning across formats, Gemini might just earn the top spot in your toolkit. Key insights on Perplexity vs. Gemini based on G2 Data I looked at review data on G2 to find strengths and adoption patterns for Perplexity and Gemini. Here's what stood out: Satisfaction ratings * Perplexity excels in ease of use (94%), ease of doing business with (94%), and ease of setup (96%). * Gemini excels in ease of use (94%), ease of doing business with (93%), and ease of setup (97%). Industries * Perplexity dominates computer software, information technology and services, and marketing and advertising. * Gemini dominates information technology and services, computer software, and marketing and advertising. 
Highest-rated features * Perplexity excels in no-code conversation design, multi-step planning, natural language understanding, and intent inference. * Gemini excels in context maintenance within sessions, controlled LLM response generation, natural language understanding, and intent inference. Lowest-rated features * Perplexity struggles with fallback responses for unknown queries, web widget and SDK embedding, and API flexibility. * Gemini struggles with fallback responses for unknown queries, error learning, and customizability. Learn more about Gemini in our detailed Google Gemini review, including real-world use cases and G2 review data. And if you're curious how Perplexity holds up as a research-first AI, read our full Perplexity AI review for a detailed analysis. Perplexity vs. Gemini: Frequently asked questions (FAQs) Have more questions? Here are the answers. Q1. What is the difference between Perplexity AI and Google Gemini? Perplexity and Gemini are built on fundamentally different assumptions about what you need from an AI. Perplexity is a research-first answer engine -- it searches the web in real time, cites every claim with a clickable source, and lets you switch between frontier models like GPT-5.2, Claude Sonnet 4.6, and Gemini 3.1 Pro depending on the task. Gemini is a creative and productivity assistant deeply embedded in Google's ecosystem -- it generates text, images, audio, and video natively, and works directly inside Gmail, Docs, Sheets, Drive, and Chrome with no setup required. The short version: reach for Perplexity when accuracy and source transparency are non-negotiable. Reach for Gemini when you need to create something or get things done inside tools you already use. Q2. How does Perplexity compare to Gemini for getting accurate, cited answers? Perplexity has a structural advantage here. Every response is grounded in real-time web search with sentence-level citations you can click and verify. 
It's built specifically for the use case of accurate, traceable answers. Gemini integrates Google Search in supported modes and provides solid responses, but source attribution is less granular, which makes it harder to audit where a claim is coming from. If you're doing research where you need to verify every data point -- academic work, competitive analysis, fact-checking -- Perplexity is the more reliable tool. If accuracy within a broader creative or productivity workflow is what you need, Gemini holds up well. Q3. Is Perplexity or Gemini better for online research? It depends on what kind of research. For fast, source-backed answers where you need to verify claims in real time, Perplexity is the stronger choice. Its entire architecture is built around retrieval and citation transparency. For deep, long-form research that involves synthesizing large documents, analyzing data across files, or working within a collaborative Google Workspace environment, Gemini's long context window and Deep Research mode give it an edge. Q4. Which platform offers more accurate and up-to-date responses: Perplexity or Gemini? Perplexity stands out for real-time web search integration and transparent source citations, making it ideal for users who value up-to-the-minute accuracy. Gemini, powered by Google's ecosystem, also offers high-quality responses but may rely more on model knowledge than live web updates, depending on the context. Q5. Which tool is better suited for business or professional use cases: Perplexity vs. Gemini? It comes down to where your work actually happens. Perplexity is the stronger fit for researchers, analysts, and knowledge workers who need deep, source-backed answers with minimal hallucination risk, and its enterprise connectors for Notion, Linear, GitHub, and Google and Microsoft Workspace make it a capable research layer for most professional stacks. 
Gemini is the better fit if your team lives in Google Workspace. It works directly inside Gmail, Docs, Sheets, Drive, and Meet with no additional setup, making AI assistance part of the workflow rather than a separate tool. Q6. What are the pricing differences, and which one gives better value for money: Perplexity vs. Gemini? Perplexity runs Free, Pro at $20/month, Max at $200/month, Enterprise Pro at $40/user/month, and Enterprise Max at $325/user/month. Gemini runs Free, AI Pro at $19.99/month, and AI Ultra at $249.99/month -- with AI Pro bundling 2TB of Google One storage and full Workspace integration alongside model access. At the $20 tier, both are competitive. Perplexity Pro gives you multi-model flexibility, unlimited Pro searches, and Deep Research. Gemini AI Pro gives you Gemini 3 access, Veo 3.1 for video, and deep Workspace integration. The value question is really about what you'd actually use. If web research is your primary use case, Perplexity wins at this price. If you're already paying for Google One storage, Gemini AI Pro is hard to beat on bundled value. Q7. How do the customization and integration options compare in Perplexity and Gemini? Perplexity's model-agnostic approach is its biggest customization advantage -- you can switch between GPT-5.2, Claude Sonnet 4.6, Gemini 3.1 Pro, and Sonar depending on the task, and Max subscribers can run the same prompt through multiple models simultaneously via Model Council. Enterprise users can connect to external tools and data sources via MCP with 400+ prebuilt connectors. Gemini's customization strength is ecosystem depth -- it integrates natively across all Google products and, at the enterprise tier, connects to Salesforce, SAP, BigQuery, and Microsoft 365 through Gemini Enterprise. For cross-provider flexibility, Perplexity wins. For depth within Google's stack, Gemini wins. Q8. Which AI platform scales better for cross-team adoption? 
Gemini fits organizations already invested in Google Workspace, since employees can start using it within familiar apps, lowering training costs. Perplexity, while intuitive, requires a shift in workflow because it functions as a standalone research-first tool, making adoption slightly steeper in multi-department rollouts. Q9. Do Perplexity and Gemini differ in handling proprietary or internal knowledge? Perplexity excels at surfacing public web insights but offers limited options for connecting to internal data sources. Gemini, through Google Cloud and Vertex AI, allows enterprises to bring proprietary datasets into workflows, making it better suited for companies prioritizing internal knowledge integration. Q10. Which option provides more predictable performance under heavy enterprise use? Perplexity's performance is tied to real-time retrieval, which can vary in depth depending on query complexity. Gemini benefits from Google's large-scale infrastructure, offering more consistent response times and uptime guarantees that enterprises expect when deploying AI at scale. The end verdict: Which AI chatbot would you chat with? Looking over the outcomes of all the tasks, I see that Perplexity has its own set of strengths, and so does Gemini. The success of an AI chatbot depends on the goal you want to achieve. For an academic or a student, Gemini might offer better explanations of scholarly concepts, while for a content writer, Perplexity might be more concise. Although both tools have their pluses, Gemini stood out in three tasks, covering marketing flair, nuanced creative phrasing, and argument accuracy. Perplexity, on the other hand, won two tasks, both aligned with content marketing and academic writing. So, given the subjectivity of content and how users adapt to a particular chatbot, the decision of Gemini vs. 
Perplexity depends on your purpose, project bandwidth, and eye for detail. What I've inferred about both these tools also aligns with what G2 reviews say about them, and if you want to get started on your own, maybe this comparison can help. Check out my peer's analysis on DeepSeek vs ChatGPT to learn how the two models performed against each other in a series of testing scenarios.

AI model's ability to uncover critical vulnerabilities prompts restricted rollout under Project Glasswing Anthropic has unveiled a powerful new artificial intelligence model capable of identifying and exploiting software vulnerabilities at an unprecedented scale. But the AI giant is withholding it from the public. The model, called Claude Mythos Preview, is the latest in Anthropic's Claude family and has been released only to a limited group of technology firms. The AI leader said that the system can autonomously detect, analyse and even exploit weaknesses in software systems, in some cases outperforming human experts. The Cyber Factor During internal testing, Mythos reportedly uncovered thousands of high- and critical-severity vulnerabilities across major operating systems and web browsers. Some of these flaws are believed to have gone undetected for decades. According to experts, Claude Mythos marks a significant leap, compressing vulnerability discovery and exploit development timelines from months or weeks to hours. This capability, however, has raised alarm. Cybersecurity specialists have warned that, if widely accessible, such tools could be misused by threat actors to rapidly generate exploit chains, phishing campaigns or major cyberattacks. It should be noted that Anthropic itself described the technology as a potential "watershed moment", noting that even individuals without deep technical expertise could leverage it for harmful purposes. Controlled Rollout Strategy To manage these risks, Anthropic has launched a controlled-access initiative called Project Glasswing. Under the programme, more than 50 organisations, including Microsoft, Nvidia and Cisco, will receive access to Mythos Preview. The goal is to strengthen cyber defences by enabling trusted partners to identify and patch vulnerabilities before they can be exploited. 
Controlled rollout aside, questions remain about the scope and nature of the vulnerabilities identified. The company said it will reveal details of the hidden vulnerabilities within 135 days of informing the companies responsible for the affected software. The model has also drawn attention from policymakers. Anthropic has briefed US federal agencies on its capabilities, even as it faces tensions with the administration of Donald Trump. Earlier this year, Defense Secretary Pete Hegseth labelled the company a potential "supply chain risk" to national security. Anthropic's cautious approach echoes a similar decision by OpenAI, which in 2019 limited access to its GPT-2 model over misuse concerns. For now, what the world needs from these tech giants is a balance between innovation and security -- one that is increasingly under scrutiny.

Scaling a business is often seen as a sign of success. However, growth brings its own set of challenges, and without the right strategies in place, it can lead to inefficiencies, operational issues, and even failure. The ability to scale effectively -- without losing control -- is becoming a critical capability for modern organisations. Growth is not just about increasing revenue or expanding operations. It is about building systems, processes, and structures that can support expansion while maintaining efficiency and quality. One of the biggest challenges businesses face during growth is operational complexity. As organisations expand, processes that once worked well may become inefficient or unsustainable. Without proper planning, growth can lead to bottlenecks, delays, and increased costs. According to the World Bank, efficient business operations and strong institutional frameworks are essential for sustainable growth and competitiveness (source: https://www.worldbank.org/en/topic/competitiveness). To address these challenges, businesses must focus on building scalable systems. This includes investing in technology, standardising processes, and creating clear workflows. Automation can play a key role in reducing manual effort and improving efficiency. Another critical factor is leadership and organisational structure. As businesses grow, decision-making becomes more complex. Leaders must ensure that responsibilities are clearly defined and that teams are aligned with the organisation's goals. Decentralisation can be an effective strategy for managing growth. 
By empowering teams to make decisions, organisations can improve responsiveness and reduce bottlenecks. However, this must be balanced with strong governance to ensure consistency. Workforce management is also essential. Scaling requires the right talent, and organisations must invest in recruitment, training, and development. Building a strong organisational culture is equally important, as it helps maintain alignment and engagement. According to McKinsey, companies that scale successfully often focus on building strong organisational capabilities and maintaining discipline in execution (source: https://www.mckinsey.com/capabilities/strategy-and-corporate-finance/our-insights/growth-and-scaling). Financial management is another key consideration. Growth requires investment, and businesses must ensure that they have the financial resources to support expansion. This includes managing cash flow, controlling costs, and securing funding. At the same time, organisations must avoid over-expansion. Rapid growth without proper planning can lead to financial strain and operational challenges. Businesses must balance ambition with discipline, ensuring that growth is sustainable. Technology is a major enabler of scalable growth. Cloud computing, data analytics, and digital platforms allow businesses to expand without significantly increasing costs. These technologies provide flexibility and scalability, enabling organisations to respond to changing demands. The OECD highlights that innovation and digital adoption are key drivers of business growth and competitiveness (source: https://www.oecd.org/going-digital/). Customer experience must also remain a priority during growth. As organisations scale, maintaining quality and consistency can be challenging. Businesses must ensure that their customer experience remains strong, as this is critical for retention and long-term success. Looking ahead, the ability to scale effectively will become increasingly important. 
As markets evolve and competition intensifies, organisations must find ways to grow without compromising efficiency or quality. In conclusion, growth without chaos is not just about expansion -- it is about control. By building scalable systems, investing in people, and maintaining discipline, businesses can achieve sustainable growth and long-term success.

In an era defined by rapid technological advancement, shifting consumer expectations, and global economic uncertainty, business agility has emerged as one of the most critical determinants of long-term success. Organisations that can adapt quickly to change are not only better equipped to survive disruptions -- they are also more likely to seize new opportunities and outperform competitors. Business agility refers to an organisation's ability to respond swiftly and effectively to internal and external changes. This includes adjusting strategies, reallocating resources, and embracing innovation without being constrained by rigid structures or outdated processes. The importance of agility has become increasingly evident in recent years. From global supply chain disruptions to evolving digital landscapes, businesses have faced unprecedented challenges that require rapid adaptation. According to the World Economic Forum, organisations that prioritise agility and resilience are better positioned to navigate economic uncertainty and maintain competitiveness (source: https://www.weforum.org/agenda/2023/01/business-resilience-strategies/). One of the key drivers of business agility is technology adoption. Digital tools such as cloud computing, data analytics, and artificial intelligence enable organisations to operate more flexibly and make faster decisions. These technologies provide real-time insights into performance, allowing businesses to respond proactively to changing conditions. For example, cloud-based platforms allow organisations to scale operations up or down based on demand, reducing costs and improving efficiency. 
Similarly, data analytics tools enable businesses to identify trends and make informed decisions quickly. However, technology alone is not sufficient to achieve agility. Organisations must also foster a culture that encourages flexibility, innovation, and continuous improvement. This requires strong leadership and a willingness to challenge traditional ways of working. Organisational structure plays a crucial role in enabling agility. Traditional hierarchical models can slow decision-making and limit responsiveness. In contrast, more flexible structures -- such as cross-functional teams and decentralised decision-making -- allow organisations to act more quickly and effectively. According to McKinsey, companies that adopt agile operating models are more likely to achieve higher performance and faster growth (source: https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/the-five-trademarks-of-agile-organizations). Another important aspect of business agility is customer-centricity. In today's competitive environment, understanding and responding to customer needs is essential. Agile organisations continuously gather feedback and adjust their products and services accordingly. This approach not only improves customer satisfaction but also drives innovation. By staying closely aligned with customer expectations, businesses can identify new opportunities and develop solutions that meet evolving demands. Workforce agility is also a critical component. Employees must be equipped with the skills and mindset needed to adapt to change. This includes continuous learning, collaboration, and the ability to work effectively in dynamic environments. The Organisation for Economic Co-operation and Development (OECD) highlights the importance of adaptability and skills development in supporting business resilience and long-term growth (source: https://www.oecd.org/employment/future-of-work/). 
Despite its benefits, achieving business agility can be challenging. Organisations must overcome resistance to change, invest in new technologies, and develop new capabilities. This often requires significant cultural and organisational transformation. In addition, businesses must balance agility with stability. While flexibility is important, organisations must also maintain a clear strategic direction and ensure that changes are aligned with long-term goals. Looking ahead, the importance of business agility is expected to grow. As markets become more dynamic and competition intensifies, organisations that can adapt quickly will have a significant advantage. In conclusion, business agility is becoming a defining factor of long-term success. By embracing flexibility, investing in technology, and fostering a culture of innovation, organisations can navigate uncertainty and position themselves for sustained growth.

Greetings, it's Bruce Einhorn in Princeton. Amazon's buying satellite operator Globalstar in a deal to narrow the gap with Starlink. But first... Three things you need to know today: * SpaceX deal for EchoStar will save CEO's heirs $3 billion. * NASA's lunar economy is underway (video). * Telesat's rebuild gets boost from Canada's defense war chest. A new D2D player The $11.6 billion acquisition of Globalstar is upending the satellite industry, as Jeff Bezos & Co. push to make Amazon Leo the main alternative to SpaceX's Starlink ahead of its blockbuster IPO. Bloomberg reported back in October that Elon Musk's company had held early talks about purchasing Globalstar. A deal would've added the satellite operator's wireless spectrum to what SpaceX agreed in September to buy from EchoStar for about $17 billion. As the industry laggard, though, Amazon needs a deal more. The company recently announced it has launched more than 200 satellites "and has another 200+ stacked and ready for launch." Unfortunately, that leaves Amazon well behind the target set by the Federal Communications Commission of about 1,600 satellites by July. Also, as Desjardins Capital Markets analysts pointed out earlier this month on a possible Amazon/Globalstar deal, the e-commerce giant's purchase would likely face fewer antitrust and spectrum-concentration concerns than a SpaceX-Globalstar deal. Amazon's had a tricky relationship with the regulator lately. The company got a dressing down from FCC Chairman Brendan Carr last month after it objected to SpaceX plans to deploy more satellites. However, Carr on Tuesday spoke positively about the acquisition in an interview with CNBC. With the Globalstar purchase, Amazon said it plans on entering the direct-to-device market, a business just getting started that promises to use satellites instead of cell towers for voice, text and data, freeing users from the tyranny of dead zones. 
Amazon won't be a standalone carrier with Globalstar's spectrum, "but it's adequate for delivering basic services such as voice and texting, particularly in less densely populated areas," Bloomberg Intelligence senior industry analyst John Butler wrote in a report published on Wednesday. Amazon would likely look to build a more powerful network of satellites, he added. D2D satellites "are among the most technologically demanding spacecraft you can build," according to Caleb Henry, director of research at Quilty Space, since operators need a lot of power to provide a service similar to that of terrestrial-based towers. That's something Globalstar couldn't hope to do on its own, he added. "Having the backing of Amazon turbocharges their ability to provide direct-to-cell service in a way that might have been too expensive otherwise," he said. Amazon's news therefore spooked investors in AST SpaceMobile, the Texas-based company that until now has benefited from being the main alternative to Starlink in that nascent business. AST's shares sank nearly 11% on Tuesday. Amazon said it won't begin D2D service till 2028, so AST still has time before facing that additional competition. But AST has faced challenges getting its satellites into orbit. A big test is coming soon, with reports that AST will launch its next satellite - the first to go to orbit on a New Glenn rocket from Bezos' Blue Origin - later this week. -- Bruce Einhorn with Loren Grush Of lunar flybys and landings While the Artemis II mission's successful journey is great publicity for NASA, China may still win the space race. "Unless something changes, it is highly unlikely the United States will beat China's projected timeline to the Moon's surface," Jim Bridenstine, NASA administrator during the first Trump administration, said at a Senate hearing in September. Some things have changed since then: Jared Isaacman is now NASA administrator and work on the Gateway lunar orbiter was paused last month. 
But NASA's complicated architecture for a moon landing remains the same. At a conference in Washington in March, Bridenstine told me he hadn't changed his mind. His office last week responded to an interview request with a referral to his September comments. NASA's counting on SpaceX and Blue Origin to develop lunar landers, but both companies must overcome significant engineering hurdles. The agency's Office of Inspector General has said that more delays are likely for SpaceX's Starship, the rocket that the company is also developing into a lunar lander. Jonathan McDowell - a former Harvard-Smithsonian astrophysicist who's now affiliated with Durham University's Space Research Centre - recently told Bloomberg that while China is behind in the human lunar program race, its plans are "less likely to be derailed by changes in political support." On the same day as the Artemis II crew's return, Chinese state media published news that the China Manned Space Agency had safely transported the nation's next lunar probe, the Chang'e-7, to its launch site in southern Hainan province, with takeoff slated for later this year. The robotic mission, after Chang'e-6's trip to the moon's far side in 2024, is part of a program designed to send Chinese astronauts to the moon for the first time by 2030. Meanwhile, the biggest success in years for the US didn't get much of a response in China. A spokesperson for the Ministry of Foreign Affairs told the Bloomberg Beijing bureau, "China has always advocated that countries strengthen international cooperation, peacefully explore and utilize outer space for the benefit of humanity." The ministry threw in a dig at President Donald Trump's executive order in December titled "Ensuring American Space Superiority." "We have no intention" of participating in a space race with any country or "seeking so-called 'space superiority,'" the Chinese spokesperson said. 
-- Bruce Einhorn Lockheed looks at the Artemis heat shield NASA and its partners are sifting through data after the Artemis II crew splashed down on Earth, including discoloration of the heat shield on the Lockheed Martin-built Orion crew capsule. After the four crew members made it home safely, Kirk Shireman, head of human space flight at Lockheed Martin, joined Caroline Hyde and Ed Ludlow on "Bloomberg Tech" and said the heat shield performed exceptionally and most of the more than 12 million parts performed as intended. What we're reading Ukraine launched rockets into space during the war with Russia, MP says: Ukrainska Pravda. Jerry Moran, key Senate appropriator, rejects proposed NASA cuts: SpaceNews. ESA spent €82 million to launch Sentinel-1D on Ariane 6: European Spaceflight. In our orbit April 15-16: Final days of Space Symposium in Colorado Springs. April 15-22: Meeting in Vienna of the legal subcommittee of the UN's Committee on the Peaceful Uses of Outer Space. April 21: NASA to unveil the Nancy Grace Roman Space Telescope. Talk to us Please send us ideas, tips and questions to [email protected]. As always, you can reach Bloomberg's global business of space editor, Eric Johnson, at [email protected] (or via Signal). If you don't receive this newsletter, you should sign up here. More from Bloomberg * Tech In Depth for analysis and scoops about the business of technology * Management & Work for analyzing trends in leadership, company culture and the art of career building * Business of Sports for the context you need on the collision of power, money and sports You have exclusive access to other subscriber-only newsletters. Explore all newsletters here to get the most out of your Bloomberg subscription.

Security leaders warn AI-driven exploits could jump from "bug" to irreversible on-chain loss before humans even spot the trail. Anthropic's Mythos poses a threat to the crypto industry that could trigger hundreds of millions, if not billions, of dollars in sudden, irreversible losses. That is the stark reality facing digital asset markets following Anthropic's quiet unveiling of Claude Mythos Preview, a vulnerability-seeking AI model the San Francisco startup admits is simply too dangerous to release to the public. Deddy David, chief executive of blockchain security firm Cyvers, told CryptoSlate about the catastrophic scale of the problem, noting that the financial exposure of AI-driven exploits in crypto ranges from hundreds of millions to billions of dollars. He said: "If AI can identify vulnerabilities at scale across core internet infrastructure, crypto will be one of the first markets to feel the impact." If those estimates are correct, the scope of potential damage is staggering. Moreover, the scale of this new threat isn't just about bad actors writing slightly better phishing emails or generating malicious code snippets. Instead, it is about an autonomous system capable of finding deep, emergent logic flaws across smart contracts, wallets, and cross-chain bridges before human auditors even know where to look. For years, crypto founders and security researchers have obsessed over "Q-Day," the theoretical future date when a quantum computer becomes powerful enough to shatter blockchain cryptography. But Mythos's recent launch is forcing a pivot. Security experts have noted that the most immediate threat to digital assets is no longer a future attack on cryptography. It is an AI system that can already uncover exploitable flaws in the very software layer the industry depends on. Anthropic's Mythos model fundamentally rewrites the timeline of infrastructure risk. 
According to the company, the model has already successfully identified vulnerabilities across every major web browser and operating system. In one alarming instance, it unearthed a 27-year-old bug buried in a critical piece of security infrastructure, alongside multiple deep-seated flaws within the Linux kernel. This was also corroborated by the UK government's AI Security Institute (AISI), which noted: "Our evaluation of Mythos Preview shows that it - and potentially future models - could be directed to autonomously compromise small, weakly defended, and vulnerable systems if given network access." The primary danger from these revelations is not simply that artificial intelligence makes cyber risk possible. Hackers have always existed. It is that AI radically compresses the time between bug discovery and exploit development. This means that vulnerability research that historically required months of painstaking human labor can now be executed at machine speed. For the traditional financial system, this represents a severe escalation in the cyber arms race. For the crypto industry, where transactions are instantaneous, irreversible, and governed entirely by autonomous code, it represents an immediate, systemic vulnerability. The architecture of the crypto ecosystem makes it uniquely vulnerable to machine-speed auditing. While traditional banks rely on siloed, proprietary networks with centralized fail-safes and circuit breakers, the digital asset sector runs almost entirely on public code. The industry is built on open-source dependencies, browser-based wallets, remote procedure call infrastructure, and smart contracts that are completely transparent to anyone or any AI model wishing to inspect them. This transparency creates a massive, publicly available attack surface. Compounding the risk is a severe structural mismatch between the value secured on-chain and the security budgets of the organizations that maintain it. 
Lean protocol teams frequently manage aging codebases that hold hundreds of millions of dollars in total value locked. Alex Svanevik, the chief executive of the agentic trading platform Nansen, told CryptoSlate: "Mythos is a different kind of threat: it's already finding vulnerabilities in the infrastructure crypto runs on that humans and every automated tool missed for decades." When AI-accelerated vulnerability discovery meets instant value transfer, the results can be devastating. Thus, the industry can no longer rely on traditional audits or post-incident detection. David explained: "When you combine AI-accelerated vulnerability discovery with instant, irreversible transactions, you dramatically shorten the path from bug to breach to loss. This is not just an increase in attack surface, it's an acceleration of time-to-exploit in a system where seconds matter." So what exactly is an AI model looking for? According to security experts, the most exposed layers are highly complex smart contracts and cross-chain bridges. These protocols are susceptible to emergent vulnerabilities, such as subtle state inconsistencies between upgradeable contracts or edge-case interactions across different modules. These are not simple syntax errors that a standard audit catches. Instead, they are complex interaction paths that large-scale AI simulations can easily surface. While artificial intelligence poses an immediate threat to the software layer, quantum computing remains the ultimate, looming threat to the cryptographic foundation of digital assets. Google Research has warned that future quantum computers may be able to break the elliptic-curve cryptography used in crypto systems with fewer resources than previously estimated. A sufficiently powerful cryptanalytically relevant quantum computer (CRQC) could derive private keys from public keys in minutes. With Bitcoin hovering around $70,000, the digital asset ecosystem presents a multi-trillion-dollar bounty. 
Current estimates suggest that up to 37% of circulating Bitcoin could be vulnerable to such quantum hijacking, with private keys cracked in the window before the network confirms a transaction. However, Google's public messaging remains focused on preparation and migration. The tech giant recently announced a 2029 target for a full industry transition to post-quantum cryptography.

That contrast highlights the core of the industry's current dilemma. Anthropic's model represents software exploits happening right now. Quantum computing could pose a cryptographic threat later, assuming the industry fails to migrate its security standards in time.

Chris Smith, chief executive of the cryptography firm Quantus, emphasized this exact distinction in his statement to CryptoSlate. He noted that while AI models are highly effective at finding and locating software bugs, quantum computing threatens the very foundations of the mathematics on which the crypto industry is built. If the underlying algorithms are broken, even flawless software becomes entirely insecure.

Recognizing the sheer immediacy of the AI threat, the defensive race has officially begun. Through a new initiative called Project Glasswing, Anthropic has partnered with major tech firms and financial institutions, including Amazon Web Services, Google, Microsoft, and JPMorgan Chase, to use Mythos Preview to proactively find and fix flaws in critical systems. The company is committing up to $100 million in usage credits to help secure infrastructure before malicious actors can develop similar offensive capabilities.

The threat has reached the highest levels of government. Last week, Federal Reserve Chairman Jerome Powell and Treasury Secretary Scott Bessent convened a surprise meeting with major US bank chief executives to discuss the specific systemic risks posed by models like Mythos. Meanwhile, the crypto industry is scrambling to join this defensive perimeter.
Major exchanges, including Coinbase and Binance, are reportedly in close communication with Anthropic to secure early access to the Mythos model. Decentralized platforms are also echoing the urgency, with Uniswap founder Hayden Adams publicly requesting access to test the model against the platform. Uniswap is the largest decentralized exchange protocol, with more than $3 billion in assets locked.

Nansen's Svanevik argues that the crypto industry could use such a model in ways that would make it "the best security auditing tool ever built." According to him: "Smart contracts have historically been audited by humans -- slow, expensive, incomplete. An AI that can find a 27-year-old bug in OpenBSD can also find the reentrancy vulnerability that hasn't been caught yet in a major DeFi protocol. The question is whether defenders get access before attackers do -- and whether the crypto industry moves fast enough to use it proactively rather than reactively."

Simultaneously, OpenAI has expanded access to a more cyber-permissive model, GPT-5.4-Cyber, through its Trusted Access for Cyber program, allowing vetted security vendors to stress-test their own systems.

Despite the severe implications of machine-speed vulnerability discovery, crypto markets have shown remarkably little reaction to the advent of frontier cyber-offensive AI. Financial markets have spent years developing a vocabulary for quantum risk. Investors broadly understand that a quantum computer could break current encryption standards and the catastrophic impact that would have on digital ownership. However, the market appears far less prepared to price a systemic threat that operates not through a dramatic break in mathematics, but through quiet audit failures, compromised wallet dependencies, and complex exploit chains.
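The reentrancy class Svanevik mentions is simple to model. Below is a minimal Python simulation of the classic pattern; real exploits target Solidity contracts, and every name here is hypothetical. The vault makes an external call to the withdrawer before updating its books, so a malicious counterparty can re-enter `withdraw` and be paid repeatedly against a single balance.

```python
# Toy simulation of the classic smart-contract reentrancy bug.
# All class and method names are hypothetical, for illustration only.

class VulnerableVault:
    """Pays out BEFORE updating state -- the bug."""
    def __init__(self):
        self.balances = {}
        self.total = 0

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount
        self.total += amount

    def withdraw(self, user):
        amount = self.balances.get(user, 0)
        if amount > 0 and self.total >= amount:
            user.receive(self, amount)   # external call first...
            self.total -= amount
            self.balances[user] = 0      # ...state update last

class Attacker:
    """Re-enters withdraw() from inside the payout callback."""
    def __init__(self):
        self.stolen = 0
        self.reentries = 2               # re-enter twice before unwinding

    def receive(self, vault, amount):
        self.stolen += amount
        if self.reentries > 0:
            self.reentries -= 1
            vault.withdraw(self)         # balance not yet zeroed

class HonestUser:
    def receive(self, vault, amount):
        pass

vault = VulnerableVault()
honest = HonestUser()
vault.deposit(honest, 100)               # victim liquidity in the pool
attacker = Attacker()
vault.deposit(attacker, 10)
vault.withdraw(attacker)
assert attacker.stolen == 30             # paid 3x against a 10-unit balance
```

The standard fix is the checks-effects-interactions pattern: zero the balance before making the external call, so any re-entrant call sees nothing left to withdraw.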
As artificial intelligence fundamentally reshapes the speed and scale of cyber warfare, the digital asset market may significantly underestimate the fragility of the very infrastructure on which it is built.
