Pacific Moisture Drenches the U.S. Northwest

A map shows atmospheric water vapor over the Pacific Ocean, with a dense green plume of moisture stretching from the tropical Pacific in the lower left toward the U.S. Pacific Northwest in the middle right.
December 10, 2025

Waves of heavy rainfall in early December 2025 spurred landslides and flooding in parts of the Pacific Northwest. The deluge was the result of a potent atmospheric river that took aim at the region starting around December 7.

Atmospheric rivers are long, narrow bands of moisture that move like rivers in the sky, transporting water vapor from the tropics toward the poles. They occur around the planet, most often in autumn and winter, with the U.S. West Coast typically affected by moist air that originates near Hawaii. In this event, however, some of the moisture arrived from even farther away, originating roughly 7,000 miles (11,000 kilometers) across the Pacific from near the Philippines.

This map shows the total precipitable water vapor in the atmosphere at 11:30 p.m. Pacific Time on December 10. It is derived from NASA’s GEOS (Goddard Earth Observing System) and uses satellite data and models of physical processes to approximate what is happening in the atmosphere.

Precipitable water vapor represents the amount of water contained in a column of air, assuming all the water vapor condensed into liquid. The map’s green areas indicate the highest amounts of moisture. Note that not all precipitable water vapor falls as rain; at least some remains in the atmosphere. Nor is it a cap on how much rain can fall, since rainfall can increase as more moisture flows into a column of air. Still, it serves as a useful indicator of areas where excessive rainfall is likely.
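To make the definition concrete, precipitable water can be computed by integrating specific humidity over pressure and expressing the result as a depth of liquid water. Here is a minimal Python sketch of that calculation; the humidity profile below is an illustrative placeholder, not GEOS output.

```python
# Precipitable water vapor: PWV = (1 / (rho_w * g)) * integral(q dp),
# i.e., the depth of liquid water if all vapor in the column condensed.
# The profile values are illustrative, not real GEOS data.

G = 9.81        # gravitational acceleration, m/s^2
RHO_W = 1000.0  # density of liquid water, kg/m^3

def precipitable_water_mm(pressure_pa, specific_humidity):
    """Trapezoidal integration of q over pressure levels (surface first)."""
    total = 0.0
    for i in range(len(pressure_pa) - 1):
        dp = pressure_pa[i] - pressure_pa[i + 1]  # Pa, positive going up
        q_mean = 0.5 * (specific_humidity[i] + specific_humidity[i + 1])
        total += q_mean * dp
    return total / (RHO_W * G) * 1000.0  # meters of water -> millimeters

# Illustrative moist-column profile from 1000 hPa up to 300 hPa
p = [100000.0, 85000.0, 70000.0, 50000.0, 30000.0]  # Pa
q = [0.016, 0.012, 0.008, 0.003, 0.0005]            # kg water vapor / kg air
print(round(precipitable_water_mm(p, q), 1))  # → 51.5 (mm)
```

A column holding several tens of millimeters of precipitable water, as in this sketch, is characteristic of the moisture-laden air within an atmospheric river.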

According to the National Weather Service, preliminary ground-based measurements showed that several locations in western Washington received more than 10 inches (250 millimeters) of rain over a 72-hour period ending on the morning of December 11. Seattle-Tacoma International Airport set a daily rainfall record on December 10, with 1.6 inches (40 millimeters). 

River flooding was ongoing on December 11, with the Skagit River and Snohomish River seeing record or near-record flood levels that day. Floodwater and mudslides have closed numerous roadways, including the eastbound lanes of I-90 out of western Washington.

NASA’s Disasters Response Coordination System has been activated to support the ongoing response efforts by the Washington State Emergency Operations Center. The team will be posting maps and data products on its open-access mapping portal as new information becomes available.

NASA Earth Observatory images by Lauren Dauphin, using GEOS data from the Global Modeling and Assimilation Office at NASA GSFC. Story by Kathryn Hansen.

World launches its ‘super app,’ including crypto pay and encrypted chat features

World, the biometric ID verification project co-founded by Sam Altman, released the newest version of its app today, debuting several new features, including an encrypted chat integration and an expanded, Venmo-like capability for sending and requesting crypto. 

World was created by the startup Tools for Humanity in 2019, and originally launched its app in 2023. The company says that, in a world roiled by AI-generated digital fakery, it hopes to create digital “proof of human” tools that can help separate the humans from the bots.

During a small gathering at World’s headquarters in San Francisco on Thursday, Altman and World’s co-founder and CEO, Alex Blania, briefly introduced the new version of the app (which developers have termed a “super app”) before the product team took over to explain the new features. During his remarks, Altman said that the concept for World grew out of conversations he and Blania had had about the need to create a new kind of economic model. That model, based around web3 principles, is what World has been trying to accomplish through its verification network. “It’s really hard to both identify unique people and do that in a privacy-preserving way,” said Altman.

World Chat, the app’s new messenger, seems designed to do just that. It uses end-to-end encryption to keep users’ conversations safe (this encryption is described as being equivalent to Signal, the privacy-focused messenger), and also leverages color-coded speech bubbles to alert users to whether the person they’re talking to has been verified by World’s system or not, the company said. The idea is to incentivize verification, giving people the power to know whether the person they’re talking to is who they say they are. Chat was originally launched in beta in March.

The other big feature reveal on Thursday was an expanded digital payment system that allows app users to send and receive cryptocurrency. World App has functioned as a digital wallet for some time, but the newest version includes broader capabilities. Using virtual bank accounts, users can also receive paychecks directly into World App and make deposits from their bank accounts, both of which can then be converted into crypto. You don’t have to be verified by World’s authentication system to use these features.

Tiago Sada, World’s chief product officer, told TechCrunch that part of the reason chat was added was to create a more interactive experience for users. “What we kept hearing from people is that they wanted a more social World app,” Sada said. World Chat is designed to fill that need, creating what Sada says is a secure way to communicate. “It took a lot of work to make this feature-rich messenger that is similar to a WhatsApp or a Telegram, but with encryption and security of something that is a lot closer to Signal,” Sada said.

World (which was originally called Worldcoin) deploys a unique authentication process: interested humans get their eyes scanned at one of the company’s offices, where the Orb—a large verification device—converts the person’s iris into a unique and encrypted digital code. That code, the verified World ID, can then be used by the person to interact with World’s ecosystem of services, which are available through its app.
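As an analogy only, the core idea of turning a biometric template into a stable, non-reversible identifier can be sketched with a salted hash. World’s actual iris-code pipeline is proprietary and far more involved (real scans are not bit-identical, so production systems rely on error-tolerant feature matching rather than plain hashing); every name below is hypothetical.

```python
# Simplified analogy, NOT World's real scheme: derive a stable ID by
# hashing a biometric template. Real systems must tolerate scan-to-scan
# noise, so plain hashing like this would not work in practice.
import hashlib

def derive_id(template_bytes: bytes, salt: bytes = b"registry-v1") -> str:
    """Return a non-reversible hex identifier for a template."""
    return hashlib.sha256(salt + template_bytes).hexdigest()

scan_a = b"iris-template-of-alice"  # stands in for extracted iris features
scan_b = b"iris-template-of-bob"

assert derive_id(scan_a) == derive_id(scan_a)  # same template -> same ID
assert derive_id(scan_a) != derive_id(scan_b)  # different templates differ
```

The hash is one-way: the registry can check that an ID is unique without being able to reconstruct the underlying biometric data from it.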

TechCrunch event

Join the Disrupt 2026 Waitlist

Add yourself to the Disrupt 2026 waitlist to be first in line when Early Bird tickets drop. Past Disrupts have brought Google Cloud, Netflix, Microsoft, Box, Phia, a16z, ElevenLabs, Wayve, Hugging Face, Elad Gil, and Vinod Khosla to the stages — part of 250+ industry leaders driving 200+ sessions built to fuel your growth and sharpen your edge. Plus, meet the hundreds of startups innovating across every sector.

San Francisco | October 13-15, 2026

The addition of more social-friendly features is clearly meant to drive broader adoption of the app, which makes sense, since scaling verification is the company’s main challenge. Altman has said that he would like the project to scan a billion people’s eyes, but Tools for Humanity says it has scanned fewer than 20 million people so far.

Since standing in long lines at a corporate office to have your eyeballs scanned by a giant metallic ball may seem slightly less than enticing to some users, the company has already sought to make its verification process less cumbersome. In April, Tools for Humanity announced its Orb Minis—hand-held, phone-like devices—that allow users to scan their own eyes from the comfort of their homes. Blania previously told TechCrunch that, eventually, the company would like to turn the Orb Minis into a mobile point-of-sale device or sell its ID sensor tech to device manufacturers. If the company takes such steps, it would drop the barrier to verification significantly, potentially inspiring much more widespread adoption.

Stanford’s star reporter takes on Silicon Valley’s ‘money-soaked’ startup culture

Theo Baker is truly an outlier.

While journalism as a major has seen shrinking enrollment for years and is even being dropped by some schools entirely, Baker, a senior at Stanford University, has doubled down on old-school investigative reporting, and it is paying off spectacularly.

Baker first made headlines as a college freshman when his reporting for The Stanford Daily led to the resignation of Stanford president Marc Tessier-Lavigne. After uncovering allegations of research misconduct spanning two decades, Baker — just one month into college — found himself “receiving anonymous letters, conducting stakeouts, and tracking down confidential sources,” according to his publisher. Meanwhile, high-powered lawyers tried to discredit his work. By year’s end, Tessier-Lavigne had resigned, and Baker became the youngest-ever recipient of the George Polk Award, one of journalism’s most prestigious honors.

Shortly after, Warner Bros. and famed producer Amy Pascal won a competitive auction for the film rights to his story.

But if that scandal put Baker on the map, his upcoming book may cement his reputation as the rare young journalist willing to challenge Silicon Valley’s startup machine.

“How to Rule the World,” out May 19 — three weeks before he graduates — promises an explosive look at how venture capitalists treat Stanford students as “a commodity,” wooing favored undergrads with slush funds, shell companies, yacht parties, and funding offers before they even have business ideas in their hunt for the next trillion-dollar founder.

“I watched in real time as my peers were taught to cut corners and plied with enormous wealth by people who wanted to exploit their talent,” Baker, who turns 21 next month, tells Axios. Drawing on more than 250 interviews with students, CEOs, VCs, Nobel laureates, and three Stanford presidents, the book aims to expose what Baker describes to Axios as a “weird, money-soaked subculture that has so much influence over the rest of the world.”

It’s perhaps an unsurprising move from someone who grew up around top journalists. His father is New York Times chief White House correspondent Peter Baker, and his mother is The New Yorker’s Susan Glasser. While his peers chase venture capital funding and six-figure startup salaries, Baker spent his sophomore year reporting and took his junior year off to write, including two months at the Yaddo writers’ retreat.

That choice becomes even more striking against the backdrop of journalism’s current struggles. While traditional journalism programs fail to fill classes and media outfits face seemingly relentless layoffs, Baker represents something both exciting and increasingly uncommon: a star student betting his career on accountability journalism. Whether he’s a harbinger of renewed interest in investigative reporting remains to be seen, but we’d guess his book will capture the attention of plenty of college students — and it will almost certainly make waves in Silicon Valley while doing it.

Google launched its deepest AI research agent yet — on the same day OpenAI dropped GPT-5.2

On Thursday, Google released a “reimagined” version of its research agent, Gemini Deep Research, built on its much-ballyhooed state-of-the-art foundation model, Gemini 3 Pro.

This new agent isn’t just designed to produce research reports, although it can still do that. It now allows developers to embed Google’s state-of-the-art research capabilities into their own apps. That capability is made possible through Google’s new Interactions API, which is designed to give devs more control in the coming agentic AI era.

The new Gemini Deep Research tool is an agent equipped to synthesize mountains of information and handle a large context dump in the prompt. Google says it’s used by customers for tasks ranging from due diligence to drug toxicity safety research. 

Google also says it will soon be integrating this new deep research agent into services, including Google Search, Google Finance, its Gemini App, and its popular NotebookLM. This is another step toward preparing for a world where humans don’t Google anything anymore — their AI agents do. 

The tech giant says that Deep Research benefits from Gemini 3 Pro, which it calls its “most factual” model, trained to minimize hallucinations during complex tasks.

AI hallucinations — where the LLM just makes stuff up — are an especially crucial issue for long-running, deep reasoning agentic tasks, in which many autonomous decisions are made over minutes, hours, or longer. The more choices an LLM has to make, the greater the chance that even one hallucinated choice will invalidate the entire output. 
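The compounding effect can be seen with simple probability: if each autonomous step succeeds independently with probability p, a run of n steps succeeds end to end with probability p^n. A tiny Python sketch, where the per-step accuracy is an illustrative assumption rather than a measured figure for any model:

```python
# Back-of-envelope: per-step accuracy p compounds over n autonomous steps.
# p = 0.99 is an illustrative assumption, not a measured model accuracy.

def run_success_rate(p_step: float, n_steps: int) -> float:
    """Probability an n-step run has zero hallucinated decisions,
    assuming steps are independent."""
    return p_step ** n_steps

for n in (1, 10, 50, 100):
    print(n, round(run_success_rate(0.99, n), 3))
# → 1 0.99
# → 10 0.904
# → 50 0.605
# → 100 0.366
```

Even a 99%-reliable step leaves a 100-step agent run succeeding barely a third of the time, which is why hallucination reduction matters most for long-horizon agents.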

To prove its progress claims, Google has also created yet another benchmark (as if the AI world needs another one). The new benchmark is unimaginatively named DeepSearchQA and is intended to test agents on complex, multi-step information-seeking tasks. Google has open sourced this benchmark.  

Google also tested Deep Research on Humanity’s Last Exam, a much more interestingly named independent benchmark of general knowledge filled with impossibly niche tasks, and on BrowserComp, a benchmark for browser-based agentic tasks.

As you might expect, Google’s new agent bested the competition on its own benchmark and on Humanity’s Last Exam. However, OpenAI’s ChatGPT 5 Pro was a surprisingly close second across the board and slightly bested Google on BrowserComp.

But those benchmark comparisons were obsolete almost the moment Google published them. Because on the same day, OpenAI launched its highly anticipated GPT 5.2 — codenamed Garlic. OpenAI says its newest model bests its rivals — especially Google — on a suite of the typical benchmarks, including OpenAI’s homegrown one. 

Perhaps one of the most interesting parts of this announcement was the timing. Knowing that the world was awaiting the release of Garlic, Google dropped some AI news of its own.

1X struck a deal to send its ‘home’ humanoids to factories and warehouses

Robotics company 1X found some big potential buyers for its humanoid robots designed for consumers — the portfolio companies of one of its investors.

The company announced on Thursday a strategic partnership to make thousands of its humanoid robots available to EQT’s portfolio companies. EQT is a large Swedish multi-asset investor, and its venture fund, EQT Ventures, is one of 1X’s backers.

This deal involves shipping up to 10,000 of 1X’s Neo humanoid robots between 2026 and 2030 to EQT’s more than 300 portfolio companies, with a concentration on manufacturing, warehousing, logistics, and other industrial use cases.

1X will sign individual deals with each of EQT’s interested portfolio companies, 1X confirmed to TechCrunch.

This partnership is particularly interesting because 1X’s Neo has been marketed as a humanoid for personal use and tagged as the “first consumer-ready humanoid robot designed to transform life at home.” Unlike some of 1X’s peers, like Figure, it has not been marketed as a bot for commercial purposes.

1X does have a robot designed for industrial purposes, Eve Industrial, but this deal specifically involves the Neo humanoid.

When the company opened preorders for the $20,000 robot in October, the announcement focused on how the robot would operate in someone’s home, describing the chores it can perform and how it interacts with people.

This deal is quite a different use case.

That’s likely because humanoids for the home will remain a hard sell for quite some time, while industrial use cases are an easier one. The $20,000 price tag also limits the potential pool of consumer customers.

The Neo also comes with a privacy element that would be hard for many people to swallow: human operators from 1X are able to look through the robot’s eyes into someone’s home.

Humanoids also come with potential safety issues around pets and small children due to their size and instability. Multiple VCs and scientists in the robotics field told TechCrunch this summer that widespread humanoid adoption was multiple years, if not a decade, away.

The company declined to share how many preorders it received for its Neo bot, but a spokesperson said preorders “far exceeded” the company’s goal.

Founded in 2014, 1X has raised more than $130 million in venture capital from firms including EQT Ventures, Tiger Global, and the OpenAI Startup Fund.

NASA Selects Two Heliophysics Missions for Continued Development

NASA has selected one Small Explorer mission concept to advance toward flight design and another for an extended period of concept development.

NASA’s Science Mission Directorate Science Management Council selected CINEMA (Cross-scale Investigation of Earth’s Magnetotail and Aurora) to enter Phase B of development, which includes planning and design for flight and mission operations. The principal investigator for the CINEMA mission concept is Robyn Millan from Dartmouth College in Hanover, New Hampshire.

The proposed CINEMA mission aims to advance our understanding of how plasma energy flows into the Earth’s magnetosphere. This highly dynamic convective flow is unpredictable — sometimes steady and sometimes explosive — driving phenomena like fast plasma jets, global electrical current systems, and spectacular auroral displays.

“The CINEMA mission will help us to research magnetic convection in Earth’s magnetosphere — a critical piece of the puzzle in understanding why some space weather events are so influential, such as causing magnificent aurora displays and impacts to ground- and space-based infrastructure, and others seem to fizzle out,” said Joe Westlake, director of the Heliophysics Division at NASA Headquarters in Washington. “Using multiple, multi-point measurements to improve predictions of these impacts on humans and technology across the solar system is a key strategy for the future of heliophysics research.”

The CINEMA mission’s constellation of nine small satellites will investigate the convective mystery using a combination of instruments — an energetic particle detector, an auroral imager, and a magnetometer — on each spacecraft in a polar low Earth orbit. By relating the energetic particles observed in this orbit to simultaneous auroral images and local magnetic field measurements, CINEMA aims to connect energetic activity in Earth’s large-scale magnetic structure to the visible signatures like aurora that we see in the ionosphere. The mission has been awarded approximately $28 million to enter Phase B. The total cost of the mission, not including launch, will not exceed $182.8 million. Phase B will last 10 months, and if selected, the mission would launch no earlier than 2030.

NASA also selected the proposed CMEx (Chromospheric Magnetism Explorer) mission for an extended Phase A study, during which the mission team will assess and refine its design for potential future consideration. The principal investigator for the CMEx mission concept study is Holly Gilbert from the National Center for Atmospheric Research in Boulder, Colorado. The cost of the extended Phase A, which will last 12 months, is $2 million.

The CMEx concept is a proposed single-spacecraft mission that would use proven UV spectropolarimetric instrumentation that has been demonstrated during NASA’s CLASP (Chromospheric Layer Spectropolarimeter) sub-orbital sounding rocket flight. Using this heritage hardware, CMEx would be able to diagnose lower layers of the Sun’s chromosphere to understand the origin of solar eruptions and determine the magnetic sources of the solar wind.

The proposed missions completed a one-year early concept study in response to the 2022 Heliophysics Explorers Program Small-class Explorer (SMEX) Announcement of Opportunity.

“Space is becoming increasingly more important and plays a role in just about everything we do,” said Asal Naseri, acting associate flight director for heliophysics at NASA Headquarters. “These mission concepts, if advanced to flight, will improve our ability to predict solar events that could harm satellites that we rely on every day and mitigate danger to astronauts near Earth, at the Moon, or Mars.”

To learn more about NASA heliophysics missions, visit:

https://science.nasa.gov/heliophysics

-end-

Abbey Interrante / Karen Fox
Headquarters, Washington
301-201-0124 / 202-358-1600
abbey.a.interrante@nasa.gov / karen.c.fox@nasa.gov

The market has ‘switched’ and founders have the power now, VCs say

The way venture capitalists think about fundraising can be a black box. But investors must think about their go-to-market strategy for raising their own funds just as much as they think about how their portfolio companies find their market fit.

All season on Build Mode, we’ve explored how founders should approach marketing, but this week we’re exploring how VCs sell themselves to founders as trustworthy partners and to LPs as worthwhile investments.  

Isabelle Johannessen spoke with Graham & Walker’s Leslie Feinzaig and XYZ Venture’s Ross Fubini about raising their first funds and how that experience has given them empathy for the founder fundraising experience.  

Feinzaig came into venture capital with very few industry connections. “It was hundreds of pitches. It was raised almost entirely from individuals. We ended up with 105 LPs,” she said. “If you don’t have a track record, then what they’re investing in is you. Like it is basically, like, raising a gigantic angel round with no lead.” 

With that outsider perspective, she’s been able to position herself as the call founders make before they meet with their board to practice and discuss strategy. 

Similarly, Fubini encourages the leadership teams he works with to carefully consider who they are entering into partnership with. His rubric follows three core tenets: person, firm, terms.

“You work with this person for forever. So it’s everything from like, are they fun? Do you trust them? Do they have the juice to get the deal done? It’s everything around this human,” he said. 

Both VCs noted the shift from the most recent bear market of 2022-23, when VCs held all the cards, to the current eager dealmaking atmosphere, in which founders have a bit more power. That makes choosing the right VCs all the more important, they say.

Fubini called this shift “thrilling” because, even though both sides still need to do their diligence and ensure they are a good fit together, “you can move so quickly” compared to cautious bear markets. “I think that’s fun and joyful,” he said.

Both Feinzaig and Fubini are full of tactical advice, both for VCs seeking creative ways to capture founder attention and for founders seeking the smartest choices for their cap tables.

The pitch deck and cold email may not have the power they once did, but building authentic relationships and proving execution remain the best strategies for attracting the kind of people you want to work with, from both perspectives.

New episodes of Build Mode drop every Thursday. Subscribe to the podcast or watch on YouTube. Isabelle Johannessen is our host. Build Mode is produced and edited by Maggie Nye. Audience Development is led by Morgan Little. And a special thanks to the Foundry and Cheddar video teams. 

Epic Games’ Fortnite is back in US Google Play Store, as court partially reverses restrictions it won on iOS

Epic Games’ popular battle royale, Fortnite, has returned to the U.S. Google Play Store following a court order.

The game maker had recently settled its five-year legal battle with the tech giant, which stemmed from a dispute around the percentage of in-app purchase sales that app developers had to share with the platforms. However, the company lost a little ground on its related lawsuit against Apple, which was also over in-app purchase restrictions and commission structure.

After Epic Games launched a version of its Fortnite game that routed around the existing in-app payment systems on iOS and Android devices in 2020, Apple and Google removed the game from their respective app stores. Epic Games used that move to then file antitrust lawsuits against both companies.

In Apple’s case, the court ruled the iPhone maker was not a monopolist but said Apple needed to allow developers to point to other payment mechanisms if they chose. Apple has been fighting the specific terms of that agreement, which were today partially overturned by an appeals court that called some of the restrictions “overbroad.”

Of note, the new filing states Apple can tell developers not to make their links to payments bigger or more prominent than Apple’s own. It also says Apple is allowed to charge a fee on purchases made outside its App Store. The latter is a significant blow to developers, who had finally been able to skirt Apple’s commission.

Meanwhile, Epic Games has reason to celebrate with its return to the Google Play Store, after Google lost its court battle with the game developer and was found to have engaged in anticompetitive behavior. Under the new agreement, Google allows app developers to point to alternative payment mechanisms and caps the fees Google can charge.

Epic Games CEO Tim Sweeney called it a “comprehensive solution” that doubled down on Android as an open platform.

The Apple ruling is below:

Epic v Apple – 9th Circuit Order – 20251211 by TechCrunch

NASA Works with Boeing, Other Collaborators Toward More Efficient Global Flights 

The 2025 Boeing ecoDemonstrator Explorer, a United Airlines 737-8, sits outside a United hangar in Houston.
Boeing / Paul Weatherman

Picture this: You’re just about done with a transoceanic flight, and the tracker in your seat-back screen shows you approaching your destination airport. And then … you notice your plane is moving away. Pretty far away. You approach again and again, only to realize you’re on a long, circling loop that can last an hour or more before you land. 

If this sounds familiar, there’s a good chance the delay was caused by issues with trajectory prediction. Your plane changed its course, perhaps altering its altitude or path to avoid weather or turbulence, and as a result its predicted arrival time was thrown off.  

“Often, if there’s a change in your trajectory – you’re arriving slightly early, you’re arriving slightly late – you can get stuck in this really long, rotational holding pattern,” said Shivanjli Sharma, NASA’s Air Traffic Management–eXploration (ATM-X) project manager at the agency’s Ames Research Center in California’s Silicon Valley.

This inconvenience to travelers is also an economic and efficiency challenge for the aviation sector, which is why NASA has studied the issue for years and recently teamed with Boeing to conduct real-time tests of an advanced system that shares trajectory data between an aircraft and its support systems.

Boeing spent about two weeks in October flying a United Airlines 737 to test a data communication system designed to improve information flow between the flight deck, air traffic control, and airline operations centers. The work involved several domestic flights based in Houston, as well as a flight over the Atlantic to Edinburgh, Scotland. 

This partnership has allowed NASA to further its commitment to transformational aviation research.

Shivanjli Sharma

NASA’s Air Traffic Management–eXploration project manager

The testing was part of Boeing’s most recent ecoDemonstrator Explorer program, through which the company works with public and private partners to accelerate aviation innovations. This year’s ecoDemonstrator flight partners included NASA, the Federal Aviation Administration, United Airlines, and several aerospace companies, as well as academic and government researchers. 

NASA’s work in the testing involved the development of an oceanic trajectory prediction service – a system for sharing and updating trajectory information, even over a long, transoceanic flight that involves crossing over from U.S. air traffic systems into those of another country. The collaboration allowed NASA to get a more accurate look at what’s required to reduce gaps in data sharing. 

“At what rate do you need these updates in an oceanic environment?” Sharma said. “What information do you need from the aircraft? Having the most accurate trajectory information will allow aircraft to move more efficiently around the globe.” 

Boeing and the ecoDemonstrator collaborators plan to use the flight data to move the data communication system toward operational service. The effort has allowed NASA to continue improving trajectory prediction and, through its connections with partners, to put its research into practical use as quickly as possible. 

“This partnership has allowed NASA to further its commitment to transformational aviation research,” Sharma said. “Bringing our expertise in trajectory prediction together with the contributions of so many innovative partners contributes to global aviation efficiency that will yield real benefits for travelers and industry.” 

NASA ATM-X’s part in the collaboration falls under the agency’s Airspace Operations and Safety Program, which works to enable safe, efficient aviation transportation operations that benefit the flying public and industry. The work is supported through NASA’s Aeronautics Research Mission Directorate.  

Ref link: NASA Works with Boeing, Other Collaborators Toward More Efficient Global Flights 

Posted on Leave a comment

Disney hits Google with cease-and-desist claiming ‘massive’ copyright infringement

Disney sent a cease-and-desist letter to Google on Wednesday, alleging that the tech giant has infringed on its copyrights, Variety reports.

Disney is accusing the tech giant of copyright infringement on a “massive scale,” claiming it has used AI models and services to commercially distribute unauthorized images and videos, according to the letter seen by Variety.

“Google operates as a virtual vending machine, capable of reproducing, rendering, and distributing copies of Disney’s valuable library of copyrighted characters and other works on a mass scale,” the letter reads. “And compounding Google’s blatant infringement, many of the infringing images generated by Google’s AI Services are branded with Google’s Gemini logo, falsely implying that Google’s exploitation of Disney’s intellectual property is authorized and endorsed by Disney.”

The letter alleges that Google’s AI systems infringe characters from “Frozen,” “The Lion King,” “Moana,” “The Little Mermaid,” “Deadpool,” and more.

Google didn’t confirm or deny Disney’s allegations but did say it will “engage” with the company. “We have a longstanding and mutually beneficial relationship with Disney, and will continue to engage with them. More generally, we use public data from the open web to build our AI and have built additional innovative copyright controls like Google-extended and Content ID for YouTube, which give sites and copyright holders control over their content,” a spokesperson said.

Disney’s move comes the same day that it signed a $1 billion, three-year deal with OpenAI that will bring its iconic characters to the company’s Sora AI video generator.


Ref link: Disney hits Google with cease-and-desist claiming ‘massive’ copyright infringement

Posted on Leave a comment

NASA’s Chandra Finds Small Galaxies May Buck the Black Hole Trend

NGC 6278 and PGC 039620 are two galaxies from a sample of 1,600 that were searched for the presence of supermassive black holes. These images represent the results of a study suggesting that smaller galaxies do not contain supermassive black holes nearly as often as larger galaxies do. The study analyzed over 1,600 galaxies observed with Chandra over two decades. Certain X-ray signatures indicate the presence of supermassive black holes. The study indicates that most smaller galaxies like PGC 039620, shown here in both X-rays from Chandra and optical light from the Sloan Digital Sky Survey, likely do not have supermassive black holes in their centers. In contrast, NGC 6278, which is roughly the same size as the Milky Way, and most other large galaxies in the sample show evidence for giant black holes within their cores.
X-ray: NASA/CXC/SAO/F. Zou et al.; Optical: SDSS; Image Processing: NASA/CXC/SAO/N. Wolk

Most smaller galaxies may not have supermassive black holes in their centers, according to a recent study using NASA’s Chandra X-ray Observatory. This contrasts with the common idea that nearly every galaxy hosts one of these giant black holes within its core.

A team of astronomers used data from over 1,600 galaxies collected during more than two decades of the Chandra mission. The researchers looked at galaxies ranging in heft from more than ten times the mass of the Milky Way down to dwarf galaxies, which have stellar masses of less than a few percent of our home galaxy’s. A paper describing these results has been published in The Astrophysical Journal and is available at https://arxiv.org/abs/2510.05252.

The team has reported that only about 30% of dwarf galaxies likely contain supermassive black holes.

“It’s important to get an accurate black hole head count in these smaller galaxies,” said Fan Zou of the University of Michigan in Ann Arbor, who led the study. “It’s more than just bookkeeping. Our study gives clues about how supermassive black holes are born. It also provides crucial hints about how often black hole signatures in dwarf galaxies can be found with new or future telescopes.”

As material falls onto black holes, it is heated by friction and produces X-rays. Many of the massive galaxies in the study contain bright X-ray sources in their centers, a clear signature of supermassive black holes. The team concluded that more than 90% of massive galaxies – including those with the mass of the Milky Way – contain supermassive black holes.

However, smaller galaxies in the study usually did not have these unambiguous black hole signals. Galaxies with masses less than three billion Suns – about the mass of the Large Magellanic Cloud, a close neighbor to the Milky Way – usually do not contain bright X-ray sources in their centers.

The researchers considered two possible explanations for this lack of X-ray sources. The first is that the fraction of galaxies containing massive black holes is much lower for these less massive galaxies. The second is that the amount of X-rays produced by matter falling onto these black holes is so faint that Chandra cannot detect it.

“We think, based on our analysis of the Chandra data, that there really are fewer black holes in these smaller galaxies than in their larger counterparts,” said Elena Gallo, a co-author also from the University of Michigan.

To reach their conclusion, Zou and his colleagues considered both possibilities for the lack of X-ray sources in small galaxies in their large Chandra sample. The amount of gas falling onto a black hole determines how bright or faint it is in X-rays. Because smaller black holes are expected to pull in less gas than larger ones, they should be fainter in X-rays and often undetectable. The researchers confirmed this expectation. 

However, they found that an additional deficit of X-ray sources is seen in less massive galaxies beyond the expected decline from decreases in the amount of gas falling inwards. This additional deficit can be accounted for if many of the low-mass galaxies simply don’t have any black holes at their centers. The team’s conclusion was that the drop in X-ray detections in lower mass galaxies reflects a true decrease in the number of black holes located in these galaxies.

This result could have important implications for understanding how supermassive black holes form. There are two main ideas: In the first, a giant gas cloud directly collapses into a black hole, which contains thousands of times the Sun’s mass from the start. The other idea is that supermassive black holes instead come from much smaller black holes, created when massive stars collapse.

“The formation of big black holes is expected to be rarer, in the sense that it occurs preferentially in the most massive galaxies being formed, so that would explain why we don’t find black holes in all the smaller galaxies,” said co-author Anil Seth of the University of Utah.

This study supports the theory in which giant black holes are born already weighing several thousand times the Sun’s mass. If the other idea were true, the researchers said, they would have expected smaller galaxies to have roughly the same fraction of black holes as larger ones.

This result also could have important implications for the rates of black hole mergers from the collisions of dwarf galaxies. A much lower number of black holes would result in fewer sources of gravitational waves to be detected in the future by the Laser Interferometer Space Antenna. The number of black holes tearing stars apart in dwarf galaxies will also be smaller.

NASA’s Marshall Space Flight Center in Huntsville, Alabama, manages the Chandra program. The Smithsonian Astrophysical Observatory’s Chandra X-ray Center controls science operations from Cambridge, Massachusetts, and flight operations from Burlington, Massachusetts.

To learn more about Chandra, visit:

https://science.nasa.gov/chandra


Read more from NASA’s Chandra X-ray Observatory

Learn more about the Chandra X-ray Observatory and its mission here:

https://www.nasa.gov/chandra

https://chandra.si.edu

News Media Contact

Megan Watzke
Chandra X-ray Center
Cambridge, Mass.
617-496-7998
mwatzke@cfa.harvard.edu

Corinne Beckinger
Marshall Space Flight Center, Huntsville, Alabama
256-544-0034
corinne.m.beckinger@nasa.gov

Ref link: NASA’s Chandra Finds Small Galaxies May Buck the Black Hole Trend

Posted on Leave a comment

Google’s AI try-on feature for clothes now works with just a selfie

Google is updating its AI try-on feature to let you virtually try on clothes using just a selfie, the company announced on Thursday. In the past, users had to upload a full-body picture of themselves to virtually try on a piece of clothing. Now they can use a selfie and Nano Banana, Google’s Gemini 2.5 Flash Image model, to generate a full-body digital version of themselves for virtual try-ons.

Users can select their usual clothing size, and the feature will then generate several images. From there, users can choose one to make it their default try-on photo.

If desired, users still have the option to use a full-body photo or select from a range of models with diverse body types.

The new capability is launching in the United States today.

Image Credits:Google

Google first launched the try-on feature in July, allowing users to try on apparel items from its Shopping Graph across Search, Google Shopping, and Google Images. To use the feature, users need to tap on a product listing or apparel product result and select the “try it on” icon.

The move comes as Google has been investing in the virtual AI try-on space, as the company has a separate app dedicated specifically to that purpose. The app, called Doppl, is designed to help visualize how different outfits might look on you using AI.

Earlier this week, the tech giant updated it with a shoppable discovery feed that displays recommendations so users can discover and virtually try on new items. Nearly everything in the feed is shoppable, with direct links to merchants.

The discovery feed features AI-generated videos of real products and suggests outfits based on your personalized style. While some may not be fond of an AI-generated feed, Google likely views it as a way to showcase products in a format that people are already familiar with, thanks to platforms like TikTok and Instagram.

Ref link: Google’s AI try-on feature for clothes now works with just a selfie

Posted on Leave a comment

OpenAI fires back at Google with GPT-5.2 after ‘code red’ memo

OpenAI launched its latest frontier model, GPT-5.2, on Thursday amid increasing competition from Google, pitching it as its most advanced model yet and one designed for developers and everyday professional use. 

OpenAI’s GPT-5.2 is coming to ChatGPT paid users and developers via the API in three flavors: Instant, a speed-optimized model for routine queries like information-seeking, writing, and translation; Thinking, which excels at complex structured work like coding, analyzing long documents, math, and planning; and Pro, the top-end model aimed at delivering maximum accuracy and reliability for difficult problems. 

“We designed 5.2 to unlock even more economic value for people,” Fidji Simo, OpenAI’s chief product officer, said Thursday during a briefing with journalists. “It’s better at creating spreadsheets, building presentations, writing code, perceiving images, understanding long context, using tools and then linking complex, multi-step projects.”

GPT-5.2 lands in the middle of an arms race with Google’s Gemini 3, which is topping LMArena’s leaderboard across most benchmarks (apart from coding — which Anthropic’s Claude Opus 4.5 still has on lock).

Early this month, The Information reported that CEO Sam Altman issued an internal “code red” memo to staff amid declining ChatGPT traffic and concerns that it is losing consumer market share to Google. The code red called for a shift in priorities, including pausing commitments like introducing ads and instead focusing on creating a better ChatGPT experience. 

GPT-5.2 is OpenAI’s push to reclaim leadership, even as some employees reportedly asked for the model release to be pushed back so the company could have more time to improve it. And despite indications that OpenAI would focus its attention on consumer use cases by adding more personalization and customization to ChatGPT, the launch of GPT-5.2 looks to beef up its enterprise opportunities. 

The company is specifically targeting developers and the tooling ecosystem, aiming to become the default foundation for building AI-powered applications. Earlier this week, OpenAI released new data showing enterprise usage of its AI tools has surged dramatically over the past year. 


This comes as Gemini 3 has become tightly integrated into Google’s product and cloud ecosystem for multimodal and agentic workflows. Google this week launched managed MCP servers that make its Google and Cloud services like Maps and BigQuery easier for agents to plug into. (MCPs are the connectors between AI systems and data and tools.)
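To make the MCP parenthetical concrete: the Model Context Protocol frames every agent-to-server exchange as a JSON-RPC 2.0 message, with a `tools/call` method for invoking a tool a server exposes. Below is a minimal sketch of what such a request envelope looks like; the tool name `query_places` and its arguments are hypothetical, for illustration only, not an actual Google Maps MCP interface.

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request envelope for an MCP tools/call.

    MCP messages follow JSON-RPC 2.0; "tools/call" is the method an
    agent sends to run a tool the server advertises via "tools/list".
    """
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)

# Hypothetical example: asking a places tool for nearby coffee shops.
payload = make_tool_call(1, "query_places", {"query": "coffee near SFO"})
print(payload)
```

A managed MCP server, in these terms, is simply an endpoint Google hosts that accepts messages of this shape on behalf of a service like Maps or BigQuery, so agent builders don't have to run the connector themselves.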

OpenAI says GPT-5.2 sets new benchmark scores in coding, math, science, vision, long-context reasoning, and tool use, which the company claims could lead to “more reliable agentic workflows, production-grade code, and complex systems that operate across large contexts and real-world data.”

Those capabilities put it in direct competition with Gemini 3’s Deep Think mode, which has been touted as a major reasoning advancement targeting math, logic, and science. On OpenAI’s own benchmark chart, GPT-5.2 Thinking edges out Gemini 3 and Anthropic’s Claude Opus 4.5 in nearly every listed reasoning test, from real-world software engineering tasks (SWE-Bench Pro) and doctoral-level science knowledge (GPQA Diamond) to abstract reasoning and pattern discovery (ARC-AGI suites). 

Research lead Aidan Clark said that stronger math scores aren’t just about solving equations. Mathematical reasoning, he explained, is a proxy for whether a model can follow multi-step logic, keep numbers consistent, and avoid subtle errors that compound over time. 

“These are all properties that really matter across a wide range of different workloads,” Clark said. “Things like financial modeling, forecasting, doing an analysis of data.”

During the briefing, OpenAI product lead Max Schwarzer said GPT-5.2 “makes substantial improvements to code generation and debugging” and can walk through complex math and logic step by step. Coding startups like Windsurf and CharlieCode, he added, report “state-of-the-art agent coding performance” and measurable gains on complex multi-step workflows.

Beyond coding, Schwarzer said that GPT-5.2 Thinking responses contain 38% fewer errors than its predecessor, making the model more dependable for day-to-day decision-making, research, and writing. 

GPT-5.2 appears to be less a reinvention and more of a consolidation of OpenAI’s last two upgrades. GPT-5, which dropped in August, was a reset that laid the groundwork for a unified system with a router to toggle the model between a fast default model and a deeper “Thinking” mode. November’s GPT-5.1 focused on making that system warmer, more conversational, and better suited to agentic and coding tasks. The latest model, GPT-5.2, seems to turn up the dial on all of those advancements, making it a more reliable foundation for production use. 

For OpenAI, the stakes have never been higher. The company has made commitments to the tune of $1.4 trillion for AI infrastructure buildouts over the next few years to support its growth — commitments it made when it still had the first-mover advantage among AI companies. But now that Google, which lagged behind at the start, is pushing ahead, that bet might be what’s driving Altman’s “code red.” 

OpenAI’s renewed focus on reasoning models is also a risky flex. The systems behind its Thinking and Deep Research modes are more expensive to run than standard chatbots because they chew through more compute. By doubling down on that kind of model with GPT-5.2, OpenAI may be setting up a vicious cycle: spend more on compute to win the leaderboard, then spend even more to keep those high-cost models running at scale.

OpenAI is already reportedly spending more on compute than it had previously let on. As TechCrunch reported recently, most of OpenAI’s inference spend — the money it spends on compute to run a trained AI model — is being paid in cash rather than through cloud credits, suggesting the company’s compute costs have grown beyond what partnerships and credits can subsidize.

During the call, Simo suggested that as OpenAI scales, it is able to offer more products and services to generate more revenue to pay for additional compute.

“But I think it’s important to place that in the grand arc of efficiency,” Simo said. “You are getting, today, a lot more intelligence for the same amount of compute and the same amount of dollars as you were a year ago.”

For all its focus on reasoning, one thing that’s absent from today’s launch is a new image generator. Altman reportedly said in his code red memo that image generation would be a key priority moving forward, particularly after Google’s Nano Banana (the nickname for Google’s Gemini 2.5 Flash Image model) had a viral moment following its August release.

Last month, Google launched Nano Banana Pro (aka Gemini 3 Pro Image), an upgraded version with even better text rendering, world knowledge, and an eerie, real-life, unedited vibe to its photos. It also integrates better across Google’s products, as demonstrated over the past week as it pops up in tools and workflows like Google Labs Mixboard for automated presentation generation.

OpenAI reportedly plans to release another new model in January with better images, improved speed, and better personality, though the company didn’t confirm these plans Thursday.

OpenAI also said Thursday it’s rolling out new safety measures around mental health use and age verification for teens, but didn’t spend much of the launch pitching those changes.

This article has been updated with more information about OpenAI’s compute efficiency status.

Got a sensitive tip or confidential documents? We’re reporting on the inner workings of the AI industry — from the companies shaping its future to the people impacted by their decisions. Reach out to Rebecca Bellan at rebecca.bellan@techcrunch.com or Russell Brandom at russell.brandom@techcrunch.com. For secure communication, you can contact them via Signal at @rebeccabellan.491 and russellbrandom.49.

Ref link: OpenAI fires back at Google with GPT-5.2 after ‘code red’ memo

Posted on Leave a comment

Google debuts ‘Disco,’ a Gemini-powered tool for making web apps from browser tabs

Google on Thursday introduced a new AI experiment for the web browser: Disco, a Gemini-powered product that helps turn your open tabs into custom applications. With Disco, you can create what Google calls “GenTabs”: the feature proactively suggests interactive web apps to help you complete tasks related to what you’re browsing and lets you build your own apps via written prompts.

For instance, if you’re studying a particular subject, GenTabs might suggest building a web app to visualize the information, which could help you better understand the core principles.

Image Credits:Google

Or, in a less academic scenario, you could use GenTabs to help you create a meal plan from a series of online recipes or help you plan a trip when you’re researching travel.

These are things you can already do today with some AI-powered chatbots, but GenTabs builds these custom experiences on the fly using Gemini 3, drawing on the information in your browser and in your Gemini chat history. After the app is built, you can continue to refine it using natural language commands.

The resulting generative elements in the GenTabs experience will link back to the original sources, Google notes.

Image Credits:Google

Like others in the AI market, Google has been experimenting with bringing AI deeper into the web-browsing experience. Instead of building its own stand-alone AI browser, like Perplexity’s Comet or ChatGPT Atlas, Google integrated its AI assistant Gemini into the Chrome browser, where it can optionally be used to ask questions about the web page you’re on.

With GenTabs, the focus is not only on what you’re currently viewing, but also on your overall browsing, spanning multiple tabs — whether that’s research, learning, or something else.


However, the feature will initially be available only to a small number of testers through Google Labs, who will offer feedback about the experience. The company says interesting ideas developed through Disco may one day find their way into other, larger Google products.

It also suggests that GenTabs will be one of many Disco features to come over time, noting that GenTabs is the “first feature” being tested.

To access Disco, users will need to join a waitlist to download the app, starting on macOS.

Ref link: Google debuts ‘Disco,’ a Gemini-powered tool for making web apps from browser tabs

Posted on Leave a comment

Rivian’s AI assistant is coming to its EVs in early 2026 

Rivian’s two-year effort to build its own AI assistant will launch in early 2026. And when it does, the AI assistant will roll out to every existing EV in its lineup, not just the next-generation versions of its R1T truck and R1S SUV. 

Drivers and passengers will be able to use the AI assistant to operate climate controls and handle other tasks contained within the vehicle’s infotainment system. It will also connect vehicle systems with third-party apps using an agentic framework built by Rivian engineers. Google Calendar will be the first third-party app to launch within the AI assistant, Rivian said Thursday.

“The beauty here is we can integrate third-party agents, and this is completely redefining how apps in the future will integrate in our cars,” software development chief Wassym Bensaid said Thursday during the company’s AI & Autonomy event in Palo Alto, California.

The AI assistant will be augmented by frontier large language models — for instance, Google Vertex AI and Gemini — for grounded data, natural conversation, and reasoning, according to Rivian.

Image Credits:Rivian

The AI assistant program, which TechCrunch first reported this week, reflects Rivian CEO RJ Scaringe’s push to become more vertically integrated. That commitment was on full display at the event. Beyond the AI assistant, the company detailed how it has developed new software and hardware, including a custom 5nm processor built in collaboration with Arm and TSMC, that will expand its hands-free driving assistance system and eventually let drivers take their eyes off the road.

This vertical integration work has been underway for years. In 2024, the EV maker completely reworked the guts of its flagship R1T truck and R1S SUV, changing everything from the battery pack and suspension system to the electrical architecture, sensor stack, and software user interface.

The company’s software team, led by Bensaid, has continued to build out the software stack. A smaller group — the size of which Rivian won’t disclose — focused on the AI assistant, which is designed to be model- and platform-agnostic, according to Bensaid.


To power this AI assistant, Rivian developed what it has described as a model- and platform-agnostic architecture that uses custom large language models and is branded as Rivian Unified Intelligence, or RUI. This hybrid software stack includes its own custom models and the “orchestration layer,” the conductor that makes sure the various AI models work together. Rivian said it has used other companies for specific agentic AI functions.

“Rivian Unified Intelligence is the connective tissue that runs through the very heart of Rivian’s digital ecosystem,” Bensaid said at the event. “This platform enables targeted agent solutions that drive value across our entire operation and our entire vehicle life cycle.”

For instance, RUI will be used for more than just providing an AI assistant, according to the company. It will also be used to improve vehicle diagnostics, which Rivian describes as “an expert assistant for technicians, scanning telemetry and history to pinpoint complex issues.”

The article was updated to clarify that the AI assistant will be augmented by frontier large language models.

Ref link: Rivian’s AI assistant is coming to its EVs in early 2026 

Rivian goes big on autonomy, with custom silicon, lidar, and a hint at robotaxis

Rivian detailed Thursday how it plans to make its electric vehicles increasingly autonomous — an ambitious effort that includes new hardware, including lidar and custom silicon, and eventually, a potential entry into the self-driving ride-hail market, according to CEO RJ Scaringe.

The announcements at the company’s first “Autonomy & AI Day” event in Palo Alto, California, shed fresh light on Rivian’s technology development, much of which has been kept under wraps as it pushes to begin production of its more affordable R2 SUV in the first half of 2026. The event is also a very public signal to shareholders that Rivian is keeping pace with, or even exceeding, the automated-driving capabilities of industry rivals like Tesla, Ford, and General Motors, as well as automakers from Europe and China.

Rivian said it will extend the hands-free version of its driver-assistance software to “over 3.5 million miles of roads across the USA and Canada” and will eventually move beyond highways to surface streets (with clearly painted road lines). The expanded capability, which Rivian calls “Universal Hands-Free,” will be available on the company’s second-generation R1 trucks and SUVs when it launches in early 2026, for a one-time fee of $2,500 or $49.99 per month.
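For context, a quick back-of-the-envelope comparison of the two quoted prices (a sketch of the arithmetic, not anything Rivian published):

```python
# Compare Rivian's quoted pricing options: a $2,500 one-time fee
# versus a $49.99 monthly subscription.
ONE_TIME_FEE = 2500.00
MONTHLY_FEE = 49.99

# Number of months at which cumulative subscription cost catches up
# to the one-time fee.
break_even_months = ONE_TIME_FEE / MONTHLY_FEE
print(f"Break-even after about {break_even_months:.0f} months")  # ~50 months
```

In other words, the subscription only overtakes the one-time fee after roughly four years of continuous use.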

“What that means is you can get into the vehicle at your house, plug in the address to where you’re going, and the vehicle will completely drive you there,” Scaringe said Thursday, describing a point-to-point navigation feature.

After that, Rivian plans to allow drivers to take their eyes off the road. “This gives you your time back. You can be on your phone, or reading a book, no longer needing to be actively involved in the operation of the vehicle.”

Rivian’s driver-assistance software won’t stop there; the EV maker laid out plans on Thursday to advance its capabilities all the way up to what it’s calling “personal L4,” a nod to the Society of Automotive Engineers level at which a car can operate within a defined area with no human intervention.

After that, Scaringe hinted that Rivian will be looking at competing with the likes of Waymo. “While our initial focus will be on personally owned vehicles, which today represent a vast majority of the miles driven in the United States, this also enables us to pursue opportunities in the ride-share space,” he said.

To help accomplish these lofty goals, Rivian has been building a “large driving model” (think: an LLM, but for real-world driving), part of a move away from rules-based frameworks for developing autonomous vehicles, a shift that Tesla has led. The company also showed off its own custom 5nm processor, which it says will be built in collaboration with both Arm and TSMC.

That custom chip powers what Rivian is referring to as its third-generation “autonomy computer,” or ACM3. The new computer can process 5 billion pixels per second and will start showing up on Rivian’s upcoming mass-market R2 SUV in late 2026.
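To put the quoted 5 billion pixels per second in perspective, the sketch below converts it into an equivalent number of camera streams. The 8-megapixel resolution and 30 fps frame rate are illustrative assumptions, not disclosed Rivian specs:

```python
# Convert ACM3's quoted throughput into an equivalent number of camera feeds.
PIXELS_PER_SECOND = 5_000_000_000   # figure quoted by Rivian
PIXELS_PER_FRAME = 8_000_000        # assumed 8-megapixel camera (illustrative)
FRAMES_PER_SECOND = 30              # assumed frame rate (illustrative)

pixels_per_camera = PIXELS_PER_FRAME * FRAMES_PER_SECOND  # 240M px/s per camera
equivalent_cameras = PIXELS_PER_SECOND / pixels_per_camera
print(f"Roughly {equivalent_cameras:.0f} such camera streams")  # ~21
```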

Rivian will couple the ACM3 with a lidar sensor at the top of the windshield (from an undisclosed supplier) to provide “three-dimensional spatial data and redundant sensing,” which it says will help with “real-time detection for the edge cases of driving.”

“We expect that at launch in late 2026 this will be the most powerful combination of sensors and inference compute in consumer vehicles in North America,” senior vice president of electrical hardware Vidya Rajagopalan said at the event.

The R2 is set to start shipping in the first half of 2026, meaning launch versions of the SUV will not have the ACM3 or the lidar sensor and will most likely plateau at hands-free driving. Anyone hoping for eyes-off or, later, unsupervised driving in a Rivian will need a vehicle with the lidar sensor.

“Adding lidar creates the ultimate sensing combination. It gives the most comprehensive 3D model of the space the vehicle is traveling through,” vice president of autonomy and AI James Philbin said Thursday. “The goal for our onboard sensing stack isn’t just human level, it’s superhuman level.”

This story has been updated to reflect that Rivian will not offer eyes-off driving in vehicles without lidar sensors.

Ref link: Rivian goes big on autonomy, with custom silicon, lidar, and a hint at robotaxis

Runway releases its first world model, adds native audio to latest video model

The race to release world models is on as AI image and video generation company Runway joins an increasing number of startups and Big Tech companies by launching its first one. Dubbed GWM-1, the model works through frame-by-frame prediction, creating a simulation with an understanding of physics and how the world actually behaves over time, the company said.

A world model is an AI system that learns an internal simulation of how the world works so it can reason, plan, and act without needing to be trained on every scenario possible in real life.
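Frame-by-frame prediction of this kind is typically autoregressive: each new frame is generated conditioned on the frames (and actions) that came before, which is how physics and dynamics carry forward through the simulation. The sketch below shows only that control flow; `predict_next_frame` is a hypothetical placeholder for a learned model, not Runway's API, and the tiny frame size just keeps the example light:

```python
import numpy as np

def predict_next_frame(history: list, action: str) -> np.ndarray:
    """Hypothetical stand-in for a learned world model. A real model would
    condition on the past frames and the agent's action; this placeholder
    simply copies the most recent frame."""
    return history[-1].copy()

def rollout(first_frame: np.ndarray, actions: list) -> list:
    """Autoregressive rollout: each predicted frame joins the history and
    conditions the next prediction."""
    history = [first_frame]
    for action in actions:
        history.append(predict_next_frame(history, action))
    return history

# One second of simulated video at 24 fps, from a blank seed frame.
seed = np.zeros((72, 128, 3))  # downscaled stand-in for a 720p frame
frames = rollout(seed, actions=["forward"] * 24)
print(len(frames))  # 25: the seed frame plus one prediction per action
```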

Runway, which earlier this month launched its Gen 4.5 video model that surpassed both Google and OpenAI on the Video Arena leaderboard, said its GWM-1 world model is more “general” than Google’s Genie-3 and other competitors. The firm is pitching it as a model that can create simulations to train agents in different domains like robotics and life sciences.

“To build a world model, we first needed to build a really great video model. We believe that teaching models to predict pixels directly is the best way to achieve general-purpose simulation. At sufficient scale and with the right data, you can build a model that has sufficient understanding of how the world works,” the company’s CTO, Anastasis Germanidis, said during the livestream.

Runway released three variants of the new world model: GWM-Worlds, GWM-Robotics, and GWM-Avatars.

GWM-Worlds is an app built on the model that lets users create interactive projects. Users can set a scene through a prompt or an image reference, and as they explore the space, the model generates the world with an understanding of geometry, physics, and lighting. The company said the simulation runs at 24 fps and 720p resolution. Runway said that while Worlds could be useful for gaming, it is also well positioned to teach agents how to navigate and behave in the physical world.

With GWM-Robotics, the company aims to use synthetic data enriched with new parameters like changing weather conditions or obstacles. Runway says this method could also reveal when and how robots might violate policies and instructions in different scenarios.
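Varying synthetic scenes in this way is commonly called domain randomization. The sketch below illustrates the general idea; the parameter names are assumptions for illustration and do not come from Runway's SDK:

```python
import random

def sample_scenario(rng: random.Random) -> dict:
    """Randomly vary scene parameters so an agent is trained and stress-tested
    across many conditions instead of a single fixed scene."""
    return {
        "weather": rng.choice(["clear", "rain", "fog", "snow"]),
        "time_of_day_hours": rng.uniform(0.0, 24.0),
        "num_obstacles": rng.randint(0, 10),  # inclusive bounds
    }

rng = random.Random(0)  # seeded for reproducibility
scenarios = [sample_scenario(rng) for _ in range(100)]
print(len(scenarios), "randomized training scenarios")
```

Runs that fail only under certain sampled conditions (heavy fog plus many obstacles, say) are exactly the kind of policy-violation cases Runway describes surfacing.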

Runway is also building realistic avatars under GWM-Avatars to simulate human behavior. Companies like D-ID, Synthesia, Soul Machines, and even Google have worked on creating human avatars that look real and work in areas like communication and training.

The company noted that Worlds, Robotics, and Avatars are technically separate models, but it eventually plans to merge them into one.

Besides releasing a new world model, the company is also updating its foundational Gen 4.5 model, released earlier in the month. The update brings native audio and long-form, multi-shot generation capabilities: users can generate one-minute videos with character consistency, native dialogue, background audio, and complex shots from various angles. Users can also edit existing audio, add dialogue, and edit multi-shot videos of any length.

The Gen 4.5 update nudges Runway closer to competitor Kling’s all-in-one video suite, which also launched earlier this month, particularly around native audio and multi-shot storytelling. It also signals that video generation models are moving from prototype to production-ready tools. Runway’s updated Gen 4.5 model is available to all paid plan users.

The company said that it will make GWM-Robotics available through an SDK. It added that it is in active conversation with several robotics firms and enterprises for the use of GWM-Robotics and GWM-Avatars.

Ref link: Runway releases its first world model, adds native audio to latest video model

Ford and SK On are ending their US battery joint venture

Four years ago, Ford and South Korean battery maker SK On struck a deal to form a joint venture and spend $11.4 billion to build factories in Tennessee and Kentucky that would produce batteries for the next generation of electric F-Series trucks.

The factories live on; the joint venture will not.

SK On, a subsidiary of SK Innovation, said Thursday it reached an agreement with Ford to end the joint venture. The two companies will divide the assets: Ford will take ownership and operation of the twin battery plants in Kentucky, while SK On will operate the factory at the massive BlueOval SK campus in Tennessee.

SK On said it will maintain a strategic partnership with Ford centered on the Tennessee plant, according to Bloomberg.

When reached for comment, a Ford spokesperson told TechCrunch the company was aware of SK’s disclosure and had nothing further to share at this time.

The joint venture was created when the industry was investing billions of dollars to ramp up electric vehicle production. While EV sales have risen over the past several years, demand has not kept up with the industry’s lofty projections. The end of the federal EV tax credit has also dampened the pace of sales.

Ref link: Ford and SK On are ending their US battery joint venture

NASA’s Parker Solar Probe Spies Solar Wind ‘U-Turn’

Images captured by NASA’s Parker Solar Probe as the spacecraft made its record-breaking closest approach to the Sun in December 2024 have now revealed new details about how solar magnetic fields responsible for space weather escape from the Sun — and how sometimes they don’t.

Like a toddler, our Sun occasionally has disruptive outbursts. But instead of throwing a fit, the Sun spews magnetized material and hazardous high-energy particles that drive space weather as they travel across the solar system. These outbursts can impact our daily lives, from disrupting technologies like GPS to triggering power outages, and they can also imperil voyaging astronauts and spacecraft. Understanding how these solar outbursts, called coronal mass ejections (CMEs), occur and where they are headed is essential to predicting and preparing for their impacts at Earth, the Moon, and Mars.

Images taken by Parker Solar Probe in December 2024, and published Thursday in the Astrophysical Journal Letters, have revealed that not all magnetic material in a CME escapes the Sun — some makes it back, changing the shape of the solar atmosphere in subtle, but significant, ways that can set the course of the next CME exploding from the Sun. These findings have far-reaching implications for understanding how the CME-driven release of magnetic fields affects not only the planets, but the Sun itself.

These images from the Wide-Field Imager for Solar Probe on NASA’s Parker Solar Probe show a phenomenon that occurs in the Sun’s upper atmosphere called an inflow. Inflows are the result of stretched magnetic field lines reconfiguring and causing material trapped along the lines to rain back toward the solar surface.
Credit: NASA

“These breathtaking images are some of the closest ever taken to the Sun and they’re expanding what we know about our closest star,” said Joe Westlake, heliophysics division director at NASA Headquarters in Washington. “The insights we gain from these images are an important part of understanding and predicting how space weather moves through the solar system, especially for mission planning that ensures the safety of our Artemis astronauts traveling beyond the protective shield of our atmosphere.”

Parker Solar Probe reveals solar recycling in action

As Parker Solar Probe swept through the Sun’s atmosphere on Dec. 24, 2024, just 3.8 million miles from the solar surface, its Wide-Field Imager for Solar Probe, or WISPR, observed a CME erupt from the Sun. In the CME’s wake, elongated blobs of solar material were seen falling back toward the Sun.
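For a sense of scale, that 3.8-million-mile approach can be expressed in solar radii. The solar radius value used below is the commonly cited approximation, and the conversion is a rough illustration, not a figure from the paper:

```python
# Express Parker Solar Probe's closest approach in solar radii.
SOLAR_RADIUS_MILES = 432_000      # commonly cited approximate solar radius
approach_miles = 3_800_000        # distance from the solar surface

radii_above_surface = approach_miles / SOLAR_RADIUS_MILES
print(f"About {radii_above_surface:.1f} solar radii above the surface")  # ~8.8
```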

This type of feature, called an “inflow,” has previously been seen from a distance by other NASA missions, including SOHO (the Solar and Heliospheric Observatory, a joint mission with ESA, the European Space Agency) and STEREO (the Solar Terrestrial Relations Observatory). But Parker Solar Probe’s extreme close-up view from within the solar atmosphere reveals the material falling back toward the Sun in detail and on scales never seen before.

“We’ve previously seen hints that material can fall back into the Sun this way, but to see it with this clarity is amazing,” said Nour Rawafi, the project scientist for Parker Solar Probe at the Johns Hopkins Applied Physics Laboratory, which designed, built, and operates the spacecraft in Laurel, Maryland. “This is a really fascinating, eye-opening glimpse into how the Sun continuously recycles its coronal magnetic fields and material.”

Insights on inflows

For the first time, the high-resolution images from Parker Solar Probe allowed scientists to make precise measurements about the inflow process, such as the speed and size of the blobs of material pulled back into the Sun. These previously hidden details provide scientists with new insights into the physical mechanisms that reconfigure the solar atmosphere.

Ref link: NASA’s Parker Solar Probe Spies Solar Wind ‘U-Turn’

Disney signs deal with OpenAI to allow Sora to generate AI videos featuring its characters

The Walt Disney Company announced on Thursday that it has signed a three-year partnership with OpenAI that will bring its iconic characters to the company’s Sora AI video generator. Disney is also making a $1 billion equity investment in OpenAI.

Launched in September, Sora allows users to create short videos using simple prompts. With this new agreement, users will be able to draw on more than 200 animated, masked, and creature characters from Disney, Marvel, Pixar, and Star Wars, including costumes, props, vehicles, and more.

These characters include iconic faces like Mickey Mouse, Ariel, Belle, Cinderella, Baymax, and Simba, as well as characters from Encanto, Frozen, Inside Out, Moana, Monsters, Inc., Toy Story, Up, and Zootopia. Users will also be able to draw on animated or illustrated versions of Marvel and Lucasfilm characters like Black Panther, Captain America, Deadpool, Groot, Iron Man, Darth Vader, Han Solo, Stormtroopers, and more.

Users will also be able to draw on these characters while using ChatGPT Images, the feature in ChatGPT that allows users to create visuals using text prompts.

The agreement does not include any talent likenesses or voices, Disney says.

“The rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling through generative AI, while respecting and protecting creators and their works,” said Disney CEO Bob Iger in a statement.

Disney says that alongside the agreement, it will “become a major customer of OpenAI,” as it will use its APIs to build new products, tools, and experiences, including for Disney+.

“Disney is the global gold standard for storytelling, and we’re excited to partner to allow Sora and ChatGPT Images to expand the way people create and experience great content,” said Sam Altman, co-founder and CEO of OpenAI, in a statement. “This agreement shows how AI companies and creative leaders can work together responsibly to promote innovation that benefits society, respect the importance of creativity, and help works reach vast new audiences.”

It’s worth noting that Disney has sued the generative AI platform Midjourney for ignoring requests to stop violating its intellectual property rights. Disney also sent a cease-and-desist letter to Character.AI, urging the chatbot company to remove Disney characters from among the millions of AI companions on its platform.

Disney’s agreement with OpenAI indicates the company isn’t fully closing the door on AI platforms.

Ref link: Disney signs deal with OpenAI to allow Sora to generate AI videos featuring its characters