Tesla's CEO Elon Musk shared fresh details on Wednesday about the upcoming AI6 and AI6.5 chips, hours after announcing that the company had taped out its next-generation AI5 processor.
Tapeout is the semiconductor industry term for the final stage of chip design, after which the layout is sent to a foundry for fabrication.
First AI5 silicon samples are expected later this year, with high-volume production targeted for mid-2027.
AI5 is Tesla's custom system-on-chip designed primarily for real-time AI inference in its vehicles and the Optimus humanoid robot.
It replaces the AI4 hardware, which has been fitted in Tesla vehicles since early 2023 and was manufactured by Samsung on a 7-nanometer process.
Musk has said AI5 delivers roughly 8 times the compute, 9 times the memory, and 5 times the bandwidth of AI4.
He has also benchmarked a single AI5 chip as comparable to an Nvidia H100 GPU for Tesla's specific workloads, with a dual-chip configuration roughly on par with Nvidia's Blackwell-class processors — at significantly lower cost and power draw.
“This will be a very capable chip. Roughly Hopper class as single SoC and Blackwell as dual, but it costs peanuts and uses much less power,” Musk wrote in January.
“A single AI5 has ~5 times the useful compute of a dual SoC AI4,” he added on Wednesday.
Team Development
Later in the day, Musk took to X to praise the close collaboration between Tesla's AI hardware and software teams on the latest chip project.
He described the teamwork as “more fun than going to parties on Saturdays by far.”
To hit an aggressive schedule, the team made smart design trade-offs, finishing the design stage 45 days ahead of plan and sending the chip to production sooner.
Small-batch engineering samples are expected in late 2026, potentially for early Optimus testing or development vehicles.
High-volume production for vehicles is targeted for mid-to-late 2027.
Musk has outlined an aggressive cadence for future chip generations, targeting a new design reaching volume production roughly every 12 months with a goal of nine-month design cycles.
Impact on Products
A few hours after the tapeout announcement, Musk clarified that the AI5 chip will not go into Tesla's vehicles but will instead power "Optimus and our supercomputer clusters."
“AI4 is enough to achieve much better than human safety for FSD,” he added.
The Cybercab — Tesla's dedicated robotaxi scheduled to begin production this month — will launch on the current AI4 hardware.
Musk had previously said AI4 alone would “achieve self-driving safety levels very far above human,” while AI5 “will make the cars almost perfect and greatly enhance Optimus.”
The chip enables Tesla to run significantly larger neural network models on-device — critical for the company’s vision-only, end-to-end autonomous driving approach.
Tesla's current FSD software runs on a model with approximately one billion parameters.
For Optimus, the chip provides the real-time edge inference needed for humanoid robotics tasks that require rapid processing of sensor data without relying on cloud connectivity.
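To give a sense of scale for the on-device constraint described above, here is a back-of-the-envelope sketch of the weight-memory footprint of a roughly one-billion-parameter model at common inference precisions. The parameter count comes from the reporting above; the precision choices and the `model_weight_bytes` helper are illustrative assumptions, not Tesla specifications, and the figures ignore activations and runtime overhead.

```python
# Rough sketch (not Tesla's actual figures): how much raw weight storage a
# ~1B-parameter model needs at several numeric precisions.

def model_weight_bytes(params: int, bytes_per_param: float) -> float:
    """Raw weight storage only; ignores activations and runtime overhead."""
    return params * bytes_per_param

PARAMS = 1_000_000_000  # ~1B parameters, per reports on Tesla's FSD model

for label, bpp in [("FP16", 2.0), ("INT8", 1.0), ("INT4", 0.5)]:
    gb = model_weight_bytes(PARAMS, bpp) / 1e9
    print(f"{label}: ~{gb:.1f} GB of weights")  # FP16 → ~2.0 GB, and so on
```

Even a few gigabytes of weights must be read repeatedly at inference time, which is why the memory-capacity and bandwidth multipliers Musk cites for AI5 matter as much as raw compute.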
AI6 and AI6.5
Musk also shared more details on the AI6 chip, which is due to tape out later this year.
Tapeout for AI6 is targeted for December 2026, with AI7 and subsequent generations already in planning.
The chip will use LPDDR6 memory, a faster and more power-efficient type of RAM widely used in mobile devices and vehicles.
Musk wrote on Wednesday that the AI6 “will deliver a true doubling of performance over AI5 in the same half reticle size.”
The chip will be manufactured at Samsung’s new 2-nanometer fab in Texas, with smaller process nodes allowing for more powerful and energy-efficient chips.
Samsung already fabricates AI4 for Tesla and secured a reported $16.5 billion eight-year agreement with the company in July 2025.
AI5 will be dual-sourced at TSMC’s Arizona facility and Samsung’s Texas plant — both US-based — ensuring volume production and supply chain resilience.
“AI6.5 will further improve performance using TSMC 2nm in Arizona,” Musk added.
Both AI6 and AI6.5 include a large allocation of SRAM, the ultra-fast on-chip memory that serves as a high-speed workspace for the processor.
“Note, both chips have ~half of the TRIP AI computation accelerators dedicated to SRAM, so effective memory bandwidth is an order of magnitude greater than DRAM bandwidth for any calculations in SRAM cache,” Musk said.
The roughly tenfold increase in effective bandwidth lets the chip move and process data held in SRAM without the bottlenecks typical of DRAM-bound designs.
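The order-of-magnitude claim can be illustrated with a simple timing estimate: how long the same memory-bound transfer takes at an off-chip DRAM bandwidth versus an on-chip SRAM bandwidth. The two bandwidth figures and the tensor size below are placeholder assumptions for illustration, not published AI6 specifications.

```python
# Illustrative sketch of "effective SRAM bandwidth ~10x DRAM bandwidth".
# Bandwidth figures are placeholders, not published AI6 specs.

DRAM_GBPS = 200.0    # assumed off-chip LPDDR-class bandwidth (GB/s)
SRAM_GBPS = 2000.0   # assumed aggregate on-chip SRAM bandwidth (~10x)

def stream_time_us(bytes_moved: float, bandwidth_gbps: float) -> float:
    """Microseconds to move `bytes_moved` at the given bandwidth in GB/s."""
    return bytes_moved / (bandwidth_gbps * 1e9) * 1e6

tensor_bytes = 64 * 1024 * 1024  # a hypothetical 64 MB activation tensor
dram_us = stream_time_us(tensor_bytes, DRAM_GBPS)
sram_us = stream_time_us(tensor_bytes, SRAM_GBPS)
print(f"DRAM: {dram_us:.0f} us, SRAM: {sram_us:.0f} us "
      f"(speedup ~{dram_us / sram_us:.0f}x)")
```

Under these assumptions the SRAM-resident transfer is ten times faster, which is the sense in which calculations kept inside the SRAM cache sidestep the DRAM bottleneck.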
Faster, more efficient AI chips accelerate AI training in Tesla's data centers, which underpins development of its robotaxi program, the Optimus humanoid robot, and energy-optimization software.
By designing its own silicon and relying on US-based fabs, Tesla reduces exposure to foreign suppliers and can tailor the hardware to its software stack — an increasingly important edge as the EV and AI industries converge.
“Existential” to Tesla
Musk has repeatedly described the AI5 program as critical to the company's future.
“Solving AI5 was existential to Tesla, which is why I had to focus both the teams on that chip and I’ve personally spent every Saturday for several months working on it,” he wrote in January.
The chip is central to Tesla's strategy of vertical integration in AI — designing both the hardware and the full software stack to maximize efficiency.
“AI5 will punch far above its weight, because the entire Tesla AI software stack is designed to make maximally effective use of every circuit. We co-designed our AI software and hardware,” Musk wrote in March.
He has also framed the chip as superior to third-party alternatives for Tesla's purposes:
“It will perform — for our purposes — much better than anything else available. To borrow Jensen’s phrase, we wouldn’t use any other chip in our cars and robots even if they were free.”
Terafab
Tesla is also building an in-house fabrication facility called Terafab in Austin, Texas, which will handle higher volumes in the future.
Speaking about the project at its unveiling event in Austin, Musk said that "we either build the Terafab or we don't have the chips, and we need the chips, so we build the Terafab."
The company has allocated $20 billion in capital expenditure for 2026 to fund Terafab and other non-vehicle projects including the Cybercab robotaxi and Optimus robot.
As the first-quarter earnings call approaches — scheduled for April 22 — investors have been wary of the company’s spending since the Terafab announcement in March.
Barclays analyst Dan Levy warned last month that the chip factory could require capital expenditure far exceeding the bank's own bull-case estimate of $50 billion.
On Wednesday, he reaffirmed his concerns, stating that “alongside the questions of how Tesla will execute on Terafab and solar build out, we believe a key question is how much incremental capex Tesla will need to incur for these projects.”
The Terafab project is a joint venture between Tesla, SpaceX, and xAI — which SpaceX acquired in an all-stock deal in February.
Tesla's most bullish analyst, Wedbush's Daniel Ives, wrote last month that the Terafab represents the beginning of a path that will culminate in a merger between the company and SpaceX, predicting the combination will take place "likely in 2027."
The company’s shares rose more than 8% on Wednesday, hitting an intraday high of $394.48 and climbing further to $398.33 in after-hours trading.
This marks the stock’s biggest gain since September and puts it within reach of the $400 level it dropped below two months ago.