Grok got crucial facts wrong about Bondi Beach shooting

Grok, the chatbot built by Elon Musk’s xAI and popularized on his social media platform X, appears to have repeatedly spread misinformation about today’s mass shooting at Bondi Beach in Australia.

Gizmodo pointed to a number of posts where Grok misidentified the bystander — 43-year-old Ahmed al Ahmed — who disarmed one of the gunmen, and where it questioned the authenticity of videos and photos capturing al Ahmed’s actions.

In one post, the chatbot misidentified the man in a photo as an Israeli hostage; in another, it brought up irrelevant information about the Israeli army’s treatment of Palestinians. In a third, it claimed a “43-year-old IT professional and senior solutions architect” named Edward Crabtree was the one who actually disarmed a gunman.

Grok does appear to be fixing some of its mistakes. At least one post that reportedly claimed a video of the shooting actually showed Cyclone Alfred has been corrected “upon reevaluation.”

And the chatbot subsequently acknowledged al Ahmed’s identity, writing that the “misunderstanding arises from viral posts that mistakenly identified him as Edward Crabtree, possibly due to a reporting error or a joke referencing a fictional character.” (The article in question appeared on a largely non-functional news site that may be AI-generated.)

Ref link: Grok got crucial facts wrong about Bondi Beach shooting

AI data center boom could be bad news for other infrastructure projects

Improvements to roads, bridges, and other infrastructure could take a hit as data center construction accelerates, according to Bloomberg.

In 2025, state and local governments reportedly sold a record amount of debt for the second year in a row, with strategists predicting another $600 billion in sales next year. Most of that money is expected to fund infrastructure projects. 

Meanwhile, Census Bureau data reportedly shows that private spending on data center construction was running at an annualized rate of more than $41 billion — roughly the same as state and local government spending on transportation construction.

All these projects are likely to compete for construction workers just as the industry faces labor shortages from retirements and President Donald Trump’s immigration crackdown.

Andrew Anagnost, CEO of architecture and design software maker Autodesk, told Bloomberg there’s “absolutely no doubt” that data center construction “sucks resources from other projects.”

“I guarantee you a lot of those [infrastructure] projects are not going to move as fast as people want,” he said.

Ref link: AI data center boom could be bad news for other infrastructure projects

Google launched its deepest AI research agent yet — on the same day OpenAI dropped GPT-5.2

Google released on Thursday a “reimagined” version of its research agent Gemini Deep Research based on its much-ballyhooed state-of-the-art foundation model, Gemini 3 Pro.  

This new agent isn’t just designed to produce research reports — although it can still do that. It now allows developers to embed Google’s state-of-the-art research capabilities into their own apps. That capability is made possible through Google’s new Interactions API, which is designed to give devs more control in the coming agentic AI era.

The new Gemini Deep Research tool is an agent equipped to synthesize mountains of information and handle a large context dump in the prompt. Google says it’s used by customers for tasks ranging from due diligence to drug toxicity safety research. 

Google also says it will soon be integrating this new deep research agent into services, including Google Search, Google Finance, its Gemini App, and its popular NotebookLM. This is another step toward preparing for a world where humans don’t Google anything anymore — their AI agents do. 

The tech giant says that Deep Research benefits from Gemini 3 Pro’s status as its “most factual” model, trained to minimize hallucinations during complex tasks.

AI hallucinations — where the LLM just makes stuff up — are an especially crucial issue for long-running, deep reasoning agentic tasks, in which many autonomous decisions are made over minutes, hours, or longer. The more choices an LLM has to make, the greater the chance that even one hallucinated choice will invalidate the entire output. 
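That compounding effect is easy to see with back-of-the-envelope math: if each autonomous step has some chance of going wrong, the probability that an n-step task finishes with zero bad steps decays exponentially. A minimal sketch, where the 2% per-step error rate is an illustrative assumption rather than a measured figure for any real model:

```python
# Back-of-the-envelope model of compounding agent errors: if each
# autonomous step independently "hallucinates" with probability p,
# the chance an n-step task completes with zero bad steps is (1-p)^n.
# The 2% per-step rate below is an illustrative assumption only.

def clean_run_probability(p_error_per_step: float, n_steps: int) -> float:
    """Probability that every one of n_steps decisions is correct."""
    return (1 - p_error_per_step) ** n_steps

for n in (10, 100, 1000):
    print(f"{n:>4} steps: {clean_run_probability(0.02, n):.3f}")
# Even a 2% per-step error rate leaves long tasks unlikely to finish
# cleanly: roughly 0.82 at 10 steps, 0.13 at 100, near zero at 1000.
```

This is why a model that hallucinates only slightly less per decision can be dramatically more reliable over an hours-long agentic run.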

To prove its progress claims, Google has also created yet another benchmark (as if the AI world needs another one). The new benchmark is unimaginatively named DeepSearchQA and is intended to test agents on complex, multi-step information-seeking tasks. Google has open sourced this benchmark.  

Techcrunch event

Join the Disrupt 2026 Waitlist

Add yourself to the Disrupt 2026 waitlist to be first in line when Early Bird tickets drop. Past Disrupts have brought Google Cloud, Netflix, Microsoft, Box, Phia, a16z, ElevenLabs, Wayve, Hugging Face, Elad Gil, and Vinod Khosla to the stages — part of 250+ industry leaders driving 200+ sessions built to fuel your growth and sharpen your edge. Plus, meet the hundreds of startups innovating across every sector.

San Francisco | October 13-15, 2026

It also tested Deep Research on Humanity’s Last Exam, a much more interestingly named independent benchmark of general knowledge filled with impossibly niche tasks, and on BrowseComp, a benchmark for browser-based agentic tasks.

As you might expect, Google’s new agent bested the competition on its own benchmark and on Humanity’s Last Exam. However, OpenAI’s GPT-5 Pro was a surprisingly close second across the board and slightly bested Google on BrowseComp.

But those benchmark comparisons were obsolete almost the moment Google published them, because on the same day, OpenAI launched its highly anticipated GPT-5.2 — codenamed Garlic. OpenAI says its newest model bests its rivals — especially Google — on a suite of the typical benchmarks, including OpenAI’s homegrown one.

Perhaps one of the most interesting parts of this announcement was the timing. Knowing that the world was awaiting the release of Garlic, Google dropped some AI news of its own.

Ref link: Google launched its deepest AI research agent yet — on the same day OpenAI dropped GPT-5.2

1X struck a deal to send its ‘home’ humanoids to factories and warehouses

Robotics company 1X found some big potential buyers for its humanoid robots designed for consumers — the portfolio companies of one of its investors.

The company announced a strategic partnership on Thursday to make thousands of its humanoid robots available to EQT’s portfolio companies. EQT is a large Swedish multi-asset investor, and its venture fund, EQT Ventures, is one of 1X’s backers.

This deal involves shipping up to 10,000 1X Neo humanoid robots between 2026 and 2030 to EQT’s more than 300 portfolio companies with a concentration on manufacturing, warehousing, logistics, and other industrial use cases.

1X will sign individual deals with each of EQT’s interested portfolio companies, 1X confirmed to TechCrunch.

This partnership is particularly interesting because 1X’s Neo has been marketed as a humanoid for personal use and tagged as the “first consumer-ready humanoid robot designed to transform life at home.” Unlike some of 1X’s peers, like Figure, it has not been marketed as a bot for commercial purposes.

1X does have a robot designed for industrial purposes, Eve Industrial, but this deal specifically involves the Neo humanoid.

When the company opened preorders for the $20,000 robot in October, the announcement focused on how the robot would operate in someone’s home, describing the chores it can perform and how it interacts with people.

This deal is quite a different use case.

That’s likely because humanoids for the home will remain a hard sell for quite some time, while industrial use cases are easier to justify. The $20,000 price tag automatically limits the potential pool of consumer customers, too.

The Neo also comes with a privacy element that would be hard for many people to swallow: human operators from 1X are able to look through the robot’s eyes into someone’s home.

Humanoids also come with potential safety issues around pets and small children due to their size and instability. Multiple VCs and scientists in the robotics field told TechCrunch this summer that widespread humanoid adoption was still multiple years, if not a decade, away.

The company declined to share how many preorders it received for its Neo bot, but a spokesperson said preorders “far exceeded” the company’s goal.

Founded in 2014, 1X has since raised more than $130 million in venture capital from firms including EQT Ventures, Tiger Global, and the OpenAI Startup Fund.

Ref link: 1X struck a deal to send its ‘home’ humanoids to factories and warehouses

Disney hits Google with cease-and-desist claiming ‘massive’ copyright infringement

Disney sent a cease-and-desist letter to Google on Wednesday, alleging that the tech giant has infringed on its copyrights, Variety reports.

Disney is accusing Google of copyright infringement on a “massive scale,” claiming it has used AI models and services to commercially distribute unauthorized images and videos, according to the letter seen by Variety.

“Google operates as a virtual vending machine, capable of reproducing, rendering, and distributing copies of Disney’s valuable library of copyrighted characters and other works on a mass scale,” the letter reads. “And compounding Google’s blatant infringement, many of the infringing images generated by Google’s AI Services are branded with Google’s Gemini logo, falsely implying that Google’s exploitation of Disney’s intellectual property is authorized and endorsed by Disney.”

The letter alleges that Google’s AI systems infringe characters from “Frozen,” “The Lion King,” “Moana,” “The Little Mermaid,” “Deadpool,” and more.

Google didn’t confirm or deny Disney’s allegations but did say it will “engage” with the company. “We have a longstanding and mutually beneficial relationship with Disney, and will continue to engage with them. More generally, we use public data from the open web to build our AI and have built additional innovative copyright controls like Google-extended and Content ID for YouTube, which give sites and copyright holders control over their content,” a spokesperson said.

Disney’s move comes the same day that it signed a $1 billion, three-year deal with OpenAI that will bring its iconic characters to the company’s Sora AI video generator.

Ref link: Disney hits Google with cease-and-desist claiming ‘massive’ copyright infringement

Google’s AI try-on feature for clothes now works with just a selfie

Google is updating its AI try-on feature to let you virtually try on clothes using just a selfie, the company announced on Thursday. In the past, users had to upload a full-body picture of themselves to virtually try on a piece of clothing. Now they can use a selfie and Nano Banana, Google’s Gemini 2.5 Flash Image model, to generate a full-body digital version of themselves for virtual try-ons.

Users can select their usual clothing size, and the feature will then generate several images. From there, users can choose one to make it their default try-on photo.

If desired, users still have the option to use a full-body photo or select from a range of models with diverse body types.

The new capability is launching in the United States today.

Image Credits: Google

Google first launched the try-on feature in July, allowing users to try on apparel items from its Shopping Graph across Search, Google Shopping, and Google Images. To use the feature, users need to tap on a product listing or apparel product result and select the “try it on” icon.

The move comes as Google has been investing in the virtual AI try-on space; the company even has a separate app dedicated specifically to that purpose. The app, called Doppl, is designed to help visualize how different outfits might look on you using AI.

Earlier this week, the tech giant updated it with a shoppable discovery feed that displays recommendations so users can discover and virtually try on new items. Nearly everything in the feed is shoppable, with direct links to merchants.

The discovery feed features AI-generated videos of real products and suggests outfits based on your personalized style. While some may not be fond of an AI-generated feed, Google likely views it as a way to showcase products in a format that people are already familiar with, thanks to platforms like TikTok and Instagram.

Ref link: Google’s AI try-on feature for clothes now works with just a selfie

OpenAI fires back at Google with GPT-5.2 after ‘code red’ memo

OpenAI launched its latest frontier model, GPT-5.2, on Thursday amid increasing competition from Google, pitching it as its most advanced model yet and one designed for developers and everyday professional use. 

OpenAI’s GPT-5.2 is coming to ChatGPT paid users and developers via the API in three flavors: Instant, a speed-optimized model for routine queries like information-seeking, writing, and translation; Thinking, which excels at complex structured work like coding, analyzing long documents, math, and planning; and Pro, the top-end model aimed at delivering maximum accuracy and reliability for difficult problems. 

“We designed 5.2 to unlock even more economic value for people,” Fidji Simo, OpenAI’s chief product officer, said Thursday during a briefing with journalists. “It’s better at creating spreadsheets, building presentations, writing code, perceiving images, understanding long context, using tools and then linking complex, multi-step projects.”

GPT-5.2 lands in the middle of an arms race with Google’s Gemini 3, which is topping LMArena’s leaderboard across most benchmarks (apart from coding, which Anthropic’s Claude Opus 4.5 still has on lock).

Earlier this month, The Information reported that CEO Sam Altman sent an internal “code red” memo to staff amid declining ChatGPT traffic and concerns that the company is losing consumer market share to Google. The memo called for a shift in priorities, including pausing commitments like introducing ads and instead focusing on creating a better ChatGPT experience.

GPT-5.2 is OpenAI’s push to reclaim leadership, even as some employees reportedly asked for the model release to be pushed back so the company could have more time to improve it. And despite indications that OpenAI would focus its attention on consumer use cases by adding more personalization and customization to ChatGPT, the launch of GPT-5.2 looks to beef up its enterprise opportunities. 

The company is specifically targeting developers and the tooling ecosystem, aiming to become the default foundation for building AI-powered applications. Earlier this week, OpenAI released new data showing enterprise usage of its AI tools has surged dramatically over the past year. 

This comes as Gemini 3 has become tightly integrated into Google’s product and cloud ecosystem for multimodal and agentic workflows. Google this week launched managed MCP servers that make Google and Google Cloud services like Maps and BigQuery easier for agents to plug into. (MCP servers act as connectors between AI systems and external data and tools.)

OpenAI says GPT-5.2 sets new benchmark scores in coding, math, science, vision, long-context reasoning, and tool use, which the company claims could lead to “more reliable agentic workflows, production-grade code, and complex systems that operate across large contexts and real-world data.”

Those capabilities put it in direct competition with Gemini 3’s Deep Think mode, which has been touted as a major reasoning advancement targeting math, logic, and science. On OpenAI’s own benchmark chart, GPT-5.2 Thinking edges out Gemini 3 and Anthropic’s Claude Opus 4.5 in nearly every listed reasoning test, from real-world software engineering tasks (SWE-Bench Pro) and doctoral-level science knowledge (GPQA Diamond) to abstract reasoning and pattern discovery (ARC-AGI suites). 

Research lead Aidan Clark said that stronger math scores aren’t just about solving equations. Mathematical reasoning, he explained, is a proxy for whether a model can follow multi-step logic, keep numbers consistent, and avoid subtle errors that compound over time.

“These are all properties that really matter across a wide range of different workloads,” Clark said. “Things like financial modeling, forecasting, doing an analysis of data.”

During the briefing, OpenAI product lead Max Schwarzer said GPT-5.2 “makes substantial improvements to code generation and debugging” and can walk through complex math and logic step by step. Coding startups like Windsurf and CharlieCode, he added, report “state-of-the-art agent coding performance” and measurable gains on complex multi-step workflows.

Beyond coding, Schwarzer said that GPT-5.2 Thinking responses contain 38% fewer errors than its predecessor, making the model more dependable for day-to-day decision-making, research, and writing. 

GPT-5.2 appears to be less a reinvention and more of a consolidation of OpenAI’s last two upgrades. GPT-5, which dropped in August, was a reset that laid the groundwork for a unified system with a router to toggle the model between a fast default model and a deeper “Thinking” mode. November’s GPT-5.1 focused on making that system warmer, more conversational, and better suited to agentic and coding tasks. The latest model, GPT-5.2, seems to turn up the dial on all of those advancements, making it a more reliable foundation for production use. 

For OpenAI, the stakes have never been higher. The company has made commitments to the tune of $1.4 trillion for AI infrastructure buildouts over the next few years to support its growth — commitments it made when it still had the first-mover advantage among AI companies. But now that Google, which lagged behind at the start, is pushing ahead, that bet might be what’s driving Altman’s “code red.” 

OpenAI’s renewed focus on reasoning models is also a risky flex. The systems behind its Thinking and Deep Research modes are more expensive to run than standard chatbots because they chew through more compute. By doubling down on that kind of model with GPT-5.2, OpenAI may be setting up a vicious cycle: spend more on compute to win the leaderboard, then spend even more to keep those high-cost models running at scale.

OpenAI is already reportedly spending more on compute than it had previously let on. As TechCrunch reported recently, most of OpenAI’s inference spend — the money it spends on compute to run a trained AI model — is being paid in cash rather than through cloud credits, suggesting the company’s compute costs have grown beyond what partnerships and credits can subsidize.

During the call, Simo suggested that as OpenAI scales, it is able to offer more products and services to generate more revenue to pay for additional compute.

“But I think it’s important to place that in the grand arc of efficiency,” Simo said. “You are getting, today, a lot more intelligence for the same amount of compute and the same amount of dollars as you were a year ago.”

For all its focus on reasoning, one thing that’s absent from today’s launch is a new image generator. Altman reportedly said in his code red memo that image generation would be a key priority moving forward, particularly after Google’s Nano Banana (the nickname for Google’s Gemini 2.5 Flash Image model) had a viral moment following its August release.

Last month, Google launched Nano Banana Pro (aka Gemini 3 Pro Image), an upgraded version with even better text rendering, world knowledge, and an eerie, real-life, unedited vibe to its photos. It also integrates better across Google’s products, as demonstrated over the past week as it pops up in tools and workflows like Google Labs Mixboard for automated presentation generation.

OpenAI reportedly plans to release another new model in January with better images, improved speed, and better personality, though the company didn’t confirm these plans Thursday.

OpenAI also said Thursday it’s rolling out new safety measures around mental health use and age verification for teens, but didn’t spend much of the launch pitching those changes.

This article has been updated with more information about OpenAI’s compute efficiency status.

Got a sensitive tip or confidential documents? We’re reporting on the inner workings of the AI industry — from the companies shaping its future to the people impacted by their decisions. Reach out to Rebecca Bellan at rebecca.bellan@techcrunch.com or Russell Brandom at russell.brandom@techcrunch.com. For secure communication, you can contact them via Signal at @rebeccabellan.491 and russellbrandom.49.

Ref link: OpenAI fires back at Google with GPT-5.2 after ‘code red’ memo

Google debuts ‘Disco,’ a Gemini-powered tool for making web apps from browser tabs

Google on Thursday introduced a new AI experiment for the web browser: Disco, a Gemini-powered product that helps turn your open tabs into custom applications. With Disco, you can create what Google is calling “GenTabs”: the feature proactively suggests interactive web apps that can help you complete tasks related to what you’re browsing and also lets you build your own apps via written prompts.

For instance, if you’re studying a particular subject, GenTabs might suggest building a web app to visualize the information, which could help you better understand the core principles.

Image Credits: Google

Or, in a less academic scenario, you could use GenTabs to help you create a meal plan from a series of online recipes or help you plan a trip when you’re researching travel.

These are things that you can already do today with some AI-powered chatbots, but GenTabs builds these custom experiences on the fly with Gemini 3, drawing on the information in your browser and in your Gemini chat history. After the app is built, you can continue to refine it using natural language commands.

The resulting generative elements in the GenTabs experience will link back to the original sources, Google notes.

Image Credits: Google

Like others in the AI market, Google has been experimenting with bringing AI deeper into the web-browsing experience. Instead of building its own stand-alone AI browser, like Perplexity’s Comet or ChatGPT Atlas, Google integrated its AI assistant Gemini into the Chrome browser, where it can optionally be used to ask questions about the web page you’re on.

With GenTabs, the focus is not only on what you’re currently viewing, but also on your overall browsing, spanning multiple tabs — whether that’s research, learning, or something else.

However, the feature will initially be available only to a small number of testers through Google Labs, who will offer feedback about the experience. The company says that interesting ideas developed through Disco may one day find their way into other, larger Google products.

It also suggests that more Disco features will arrive over time, noting that GenTabs is just the “first feature” being tested.

To access Disco, users will need to join a waitlist to download the app, starting on macOS.

Ref link: Google debuts ‘Disco,’ a Gemini-powered tool for making web apps from browser tabs

Rivian’s AI assistant is coming to its EVs in early 2026 

Rivian’s two-year effort to build its own AI assistant will launch in early 2026. And when it does, the AI assistant will roll out to every existing EV in its lineup, not just the next-generation versions of its R1T truck and R1S SUV. 

Drivers and passengers will be able to use the AI assistant to operate climate controls and handle other tasks contained within the vehicle’s infotainment system. It will also connect vehicle systems with third-party apps using an agentic framework built by Rivian engineers. Google Calendar will be the first third-party app to launch within the AI assistant, Rivian said Thursday.

“The beauty here is we can integrate third-party agents, and this is completely redefining how apps in the future will integrate in our cars,” software development chief Wassym Bensaid said Thursday during the company’s AI & Autonomy event in Palo Alto, California.

The AI assistant will be augmented by frontier large language models — for instance, Google’s Vertex AI and Gemini — for grounded data, natural conversation, and reasoning, according to Rivian.

Image Credits: Rivian

The AI assistant program, which TechCrunch first reported this week, reflects Rivian CEO RJ Scaringe’s push to become more vertically integrated, and that commitment was on full display at the event. Beyond the AI assistant, the company detailed the new software and hardware it has developed, including a custom 5nm processor built in collaboration with Arm and TSMC, which will expand its hands-free driving assistance system and eventually let drivers take their eyes off the road.

This vertical integration work has been underway for years. In 2024, the EV maker completely reworked the guts of its flagship R1T truck and R1S SUV, changing everything from the battery pack and suspension system to the electrical architecture, sensor stack, and software user interface.

The company’s software team led by Bensaid has continued to work on building out the software stack. A smaller group — the size of which Rivian won’t disclose — focused on the AI assistant, which is designed to be model and platform agnostic, according to Bensaid.

To power this AI assistant, Rivian developed what it has described as a model- and platform-agnostic architecture that uses custom large language models and is branded as Rivian Unified Intelligence, or RUI. This hybrid software stack includes its own custom models and the “orchestration layer,” the conductor that makes sure the various AI models work together. Rivian said it has used other companies for specific agentic AI functions.

“The Rivian Unified Intelligence is the connective tissue that runs through the very heart of Rivian’s digital ecosystem,” Bensaid said at the event. “This platform enables targeted agent solutions that drive value across our entire operation and our entire vehicle life cycle.”

For instance, RUI will be used for more than just providing an AI assistant, according to the company. It will also be used to improve vehicle diagnostics, with Rivian describing it as “an expert assistant for technicians, scanning telemetry and history to pinpoint complex issues.”

The article was updated to clarify that the AI assistant will be augmented by frontier large language models.

Ref link: Rivian’s AI assistant is coming to its EVs in early 2026 

Runway releases its first world model, adds native audio to latest video model

The race to release world models is on as AI image and video generation company Runway joins an increasing number of startups and Big Tech companies by launching its first one. Dubbed GWM-1, the model works through frame-by-frame prediction, creating a simulation with an understanding of physics and how the world actually behaves over time, the company said.

A world model is an AI system that learns an internal simulation of how the world works so it can reason, plan, and act without needing to be trained on every possible real-world scenario.
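To make the idea concrete, here is a minimal illustrative sketch of the core loop behind frame-by-frame prediction. It is purely hypothetical and has nothing to do with Runway's actual architecture: a real world model is a large neural network trained on video, while here a toy physics function stands in for the predictor.

```python
def rollout(predict_next_frame, first_frame, steps):
    """Simulate forward by repeatedly feeding the last predicted
    frame back into the predictor (autoregressive rollout)."""
    frames = [first_frame]
    for _ in range(steps):
        frames.append(predict_next_frame(frames[-1]))
    return frames


def toy_predictor(frame):
    """Stub dynamics: a falling object. A frame is (position, velocity);
    each step gravity reduces the velocity by 9.8."""
    y, vy = frame
    return (y + vy, vy - 9.8)


# Roll the "world" forward three steps from a starting state.
trajectory = rollout(toy_predictor, (100.0, 0.0), steps=3)
```

The point of the sketch is the loop, not the physics: the model only ever learns to predict the next state, yet chaining those predictions yields a full simulation that an agent could plan against.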

Runway, which earlier this month launched its Gen 4.5 video model that surpassed both Google and OpenAI on the Video Arena leaderboard, said its GWM-1 world model is more “general” than Google’s Genie-3 and other competitors. The firm is pitching it as a model that can create simulations to train agents in different domains like robotics and life sciences.

“To build a world model, we first needed to build a really great video model. We believe that teaching models to predict pixels directly is the best way to achieve general-purpose simulation. At sufficient scale and with the right data, you can build a model that has sufficient understanding of how the world works,” the company’s CTO, Anastasis Germanidis, said during the livestream.

Runway released three specialized versions of the new world model: GWM-Worlds, GWM-Robotics, and GWM-Avatars.

Image Credits: Runway

GWM-Worlds is an app built on the model that lets users create interactive projects. Users can set a scene through a prompt or an image reference, and as they explore the space, the model generates the world with an understanding of geometry, physics, and lighting. The company said the simulation runs at 24 fps and 720p resolution. Runway said that while Worlds could be useful for gaming, it’s also well positioned to teach agents how to navigate and behave in the physical world.

With GWM-Robotics, the company aims to generate synthetic training data enriched with parameters like changing weather conditions or obstacles. Runway says this method could also reveal when and how robots might violate policies and instructions in different scenarios.

Runway is also building realistic avatars under GWM-Avatars to simulate human behavior. Companies like D-ID, Synthesia, Soul Machines, and even Google have worked on creating human avatars that look real and work in areas like communication and training.

The company noted that Worlds, Robotics, and Avatars are technically separate models, but it eventually plans to merge them into a single model.

Besides releasing a new world model, the company is also updating its foundational Gen 4.5 model, released earlier in the month. The update brings native audio and long-form, multi-shot generation capabilities: users can generate one-minute videos with character consistency, native dialogue, background audio, and complex shots from various angles. Users can also edit existing audio, add dialogue, and edit multi-shot videos of any length.

The Gen 4.5 update nudges Runway closer to competitor Kling’s all-in-one video suite, which also launched earlier this month, particularly around native audio and multi-shot storytelling. It also signals that video generation models are moving from prototype to production-ready tools. Runway’s updated Gen 4.5 model is available to all paid plan users.

Image Credits: Runway

The company said it will make GWM-Robotics available through an SDK, and that it is in active conversations with several robotics firms and enterprises about using GWM-Robotics and GWM-Avatars.

Ref link: Runway releases its first world model, adds native audio to latest video model

Disney signs deal with OpenAI to allow Sora to generate AI videos featuring its characters

The Walt Disney Company announced on Thursday that it has signed a three-year partnership with OpenAI that will bring its iconic characters to the company’s Sora AI video generator. Disney is also making a $1 billion equity investment in OpenAI.

Launched in September, Sora allows users to create short videos using simple prompts. With this new agreement, users will be able to draw on more than 200 animated, masked, and creature characters from Disney, Marvel, Pixar, and Star Wars, including costumes, props, vehicles, and more.

These characters include iconic faces like Mickey Mouse, Ariel, Belle, Cinderella, Baymax, and Simba, as well as characters from Encanto, Frozen, Inside Out, Moana, Monsters, Inc., Toy Story, Up, and Zootopia. Users will also be able to draw on animated or illustrated versions of Marvel and Lucasfilm characters like Black Panther, Captain America, Deadpool, Groot, Iron Man, Darth Vader, Han Solo, Stormtroopers, and more.

Users will also be able to draw on these characters while using ChatGPT Images, the feature in ChatGPT that allows users to create visuals using text prompts.

The agreement does not include any talent likenesses or voices, Disney says.

“The rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling through generative AI, while respecting and protecting creators and their works,” said Disney CEO Bob Iger in a statement.

Disney says that alongside the agreement, it will “become a major customer of OpenAI,” as it will use its APIs to build new products, tools, and experiences, including for Disney+.

“Disney is the global gold standard for storytelling, and we’re excited to partner to allow Sora and ChatGPT Images to expand the way people create and experience great content,” said Sam Altman, co-founder and CEO of OpenAI, in a statement. “This agreement shows how AI companies and creative leaders can work together responsibly to promote innovation that benefits society, respect the importance of creativity, and help works reach vast new audiences.”

It’s worth noting that Disney has sued the generative AI platform Midjourney for ignoring requests to stop violating its intellectual property rights. Disney also sent a cease-and-desist letter to Character.AI, urging the chatbot company to remove Disney characters from among the millions of AI companions on its platform.

Disney’s agreement with OpenAI indicates the company isn’t fully closing the door on AI platforms.

Ref link: Disney signs deal with OpenAI to allow Sora to generate AI videos featuring its characters

TIME names ‘Architects of AI’ its Person of the Year

Each December, TIME Magazine names a Person of the Year — someone who has most influenced the news and the world, for good or ill. Last year, TIME chose President Donald Trump for the second time. The year before that, it was Taylor Swift, who many claimed saved the economy from a recession with her Eras Tour. In 1938, the magazine chose Adolf Hitler.

This year, TIME has chosen to bestow the title on not just one person but a group: the so-called “Architects of AI,” comprising the CEOs shaping the global AI race from the U.S. With AI on everyone’s mind, embodying hope for a small minority and economic anxiety for the majority (per recent Edelman data), the choice tracks.

“For decades, humankind steeled itself for the rise of thinking machines,” the article reads. “Leaders striving to develop the technology, including Sam Altman and Elon Musk, warned that the pursuit of its powers could create unforeseen catastrophe […] This year, the debate about how to wield AI responsibly gave way to a sprint to deploy it as fast as possible.”

Based on one of TIME’s two cover photos, some of those people appear to be Nvidia’s Jensen Huang, Tesla’s Elon Musk, OpenAI’s Sam Altman, Meta’s Mark Zuckerberg, AMD’s Lisa Su, Anthropic’s Dario Amodei, Google DeepMind’s Demis Hassabis, and World Labs’ Fei-Fei Li — all individuals who raced “both beside and against each other.” 

TIME writes that these individuals, through their multibillion-dollar bets on “one of the biggest physical infrastructure projects of all time,” have reshaped government policy, turned up the heat on geopolitical competition, and pushed AI adoption forward. 

“This is the story of how AI changed our world in 2025, in new and exciting and sometimes frightening ways. It is the story of how Huang and other tech titans grabbed the wheel of history, developing technology and making decisions that are reshaping the information landscape, the climate, and our livelihoods… AI emerged as arguably the most consequential tool in great-power competition since the advent of nuclear weapons.”

TIME only announced the news on Thursday morning, but images of the cover photo were leaked on prediction market Polymarket on Wednesday evening.

Ref link: TIME names ‘Architects of AI’ its Person of the Year

Interest in Spoor’s bird-monitoring AI software is soaring

Spoor launched in 2021 with the goal of using computer vision to help reduce the impact of wind turbines on local bird populations. Now the startup has proven its technology works and is seeing demand from wind farms and beyond.

Oslo, Norway-based Spoor has built software that uses computer vision to track and identify bird populations and migration patterns. The software can detect birds within a 2.5-kilometer radius (about 1.5 miles) and can work with any off-the-shelf high-resolution camera.

Wind farm operators can use this information to better plan where wind farms should be located and to navigate migration patterns. For example, a wind farm could slow its turbines, or even stop them entirely, during heavy periods of local migration.

Ask Helseth (pictured above left), the co-founder and CEO of Spoor, told TechCrunch last year that he got interested in this space after learning that wind farms lacked effective tracking methods, despite many countries having strict rules around where wind farms can be built and how they can operate due to local bird populations.

“The expectations from the regulators are growing but the industry doesn’t have a great tool,” Helseth said at the time. “A lot of people [go out] in the field with binoculars and trained dogs to find out how many birds are colliding with the turbines.”

Helseth told TechCrunch last week that since then, the company has proven the need for this technology and worked to make it better.

Image Credits: Spoor

At the time of its seed raise in 2024, Spoor could track birds within a 1-kilometer range; that range has since more than doubled. As the company has collected more data to feed its AI model, it has improved its bird-identification accuracy to about 96%.

“Identifying the species of the bird for some of the clients, you add another layer,” Helseth said. “Is it a bird or not a bird? We have an in-house ornithologist to help train the model on new types of birds or new species. Having deployment in other countries [means] having rare species in the database.”

Spoor now works across three continents and with more than 20 of the world’s largest energy companies. It has also started to see interest from other industries such as airports and aquaculture farms. Spoor has a partnership with Rio Tinto, a London-based mining giant, to track bats.

The company has also received interest in using its tech to track other objects of similar size — but Helseth said they aren’t thinking of pivoting into those areas quite yet.

“Drones are of course a plastic bird in our mind,” Helseth joked. “They move in a different way and have a different shape and size. Currently we are discarding that data but we are getting interest in it.”

Spoor recently raised an €8 million ($9.3 million) Series A round led by SET Ventures with participation from Ørsted Ventures and Superorganism in addition to strategic investors.

Helseth predicts that interest in this type of technology will only grow as regulators continue to crack down on wind farms. For example, French regulators shut down a wind farm in April over its impact on the local bird population and imposed fines totaling hundreds of millions.

“Our mission is to enable industry and nature to coexist,” Helseth said. “We have started on that journey, but we are still a small startup with a lot to prove. In the coming years, we want to really cement our position in the wind industry and become a global leader to tackle these challenges. At the same time, we want to build some proof points that this technology has value beyond that main category.”

Ref link: Interest in Spoor’s bird-monitoring AI software is soaring