
Private Profile · Mountain View, CA, USA
AI better, faster, and cheaper
We dedicate every transistor to maximizing performance for large models. MatX delivers 10× more computing power for the world's largest models, enabling AI labs to make models smarter and more useful.
MatX has raised $625.0M across 3 funding rounds. Most recently, it raised a $500.0M Series B in February 2026.
| Date | Round | Lead Investors | Other Investors | Status |
|---|---|---|---|---|
| Feb 24, 2026 | $500M Series B | Jane Street, Leopold Aschenbrenner | Adam D’Angelo, Andrej Karpathy, Daniel Gross, Dwarkesh Patel, John Collison, Nat Friedman, Patrick Collison, Marvell Technology, NFDG, Spark Capital | Announced |
| Nov 1, 2024 | $100M Series A | Spark Capital | Afore Capital, Array Ventures, Benchmark, C2 Investment, Chingona Ventures, Conviction Partners, Felicis Ventures, Founders Fund, Crystal Huang, Scott Sandell, Propeller VC, Rebel Fund, Sequoia Capital, Suli, Sunset Ventures, Y Combinator, AMR Awadallah, Furqan Rydhan, Gabriel Jarrosson, Hubert Thieblot, Kulveer Taggar, Larry Summers, Reshma Saujani | Announced |
| Mar 1, 2024 | $25M Seed | — | Afore Capital, Array Ventures, C2 Investment, Chingona Ventures, Conviction Partners, Crystal Huang, Propeller VC, Rebel Fund, Sequoia Capital, Suli, Sunset Ventures, Y Combinator, Furqan Rydhan, Gabriel Jarrosson, Hubert Thieblot, Kulveer Taggar, Reshma Saujani | Announced |
MatX's investors include Jane Street, Leopold Aschenbrenner, Adam D’Angelo, Andrej Karpathy, Daniel Gross, Dwarkesh Patel, John Collison, Nat Friedman, Patrick Collison, Marvell Technology, NFDG, Spark Capital.
MatX is an AI hardware startup that designs specialized chips optimized for large language models (LLMs), focusing on high-performance training and inference for models with at least 7 billion parameters, ideally 20 billion or more.[1][2][3][5] The company serves AI research labs, developers, and startups with cost-efficient compute that it claims delivers 10x the performance of competitors such as Nvidia GPUs on the world's largest models, enabling faster development of smarter AI.[3][4][5] Founded in 2022 and based in Mountain View, California, MatX has raised $625.0M in total across three rounds, including a $25M seed and a $100M Series A in 2024 (at a reported ~$300M post-money valuation), showing strong early momentum in the AI chip space.[1][3][4]
MatX was founded in 2022 by Google alumni Reiner Pope (CEO) and Mike Gunter (CTO), both veterans of Google's Tensor Processing Unit (TPU) team.[2][3] Pope served as efficiency lead for Google's PaLM, where he built what was then the world's fastest LLM inference software, conceived TPU v5e optimizations, and holds 11 patents; Gunter was chief architect for one of Google's ML chips, invented latency-improving techniques, contributed to Innovium's $1.1B exit, and holds more than 10 patents.[2] The idea grew out of their hands-on experience scaling ML hardware and software at Google, and aims to address AI chip shortages with purpose-built processors for massive LLMs.[3] Early traction came swiftly: a $25M seed round led by AI angels Nat Friedman and Daniel Gross, followed by a $100M Series A in November 2024 led by Spark Capital.[3]
MatX rides explosive demand for AI compute amid global chip shortages: training frontier LLMs requires unprecedented scale (e.g., roughly 10^29 FLOPs for 10T-class models).[3][5] The timing is favorable, as post-2024 AI labs race to build models an order of magnitude smarter but face Nvidia supply bottlenecks; MatX claims its near-physics-limit efficiency can accelerate that progress by 3-5 years, democratizing access for researchers and startups.[2][5] Market forces such as surging LLM adoption (e.g., in medicine and tutoring) favor its niche focus, letting it pressure incumbents on price/performance and enable broader AI proliferation.[2][3][5]
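As a rough sanity check on that scale, the widely used ~6·N·D approximation for dense-transformer training FLOPs, combined with the Chinchilla heuristic of ~20 training tokens per parameter, puts a 10-trillion-parameter model within an order of magnitude of the figure cited above. Both the formula and the tokens-per-parameter ratio are general rules of thumb, not numbers from this profile:

```python
# Rough training-compute estimate using the common ~6 * N * D FLOPs
# approximation for dense transformers (a rule of thumb, not MatX's figure).

def training_flops(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Approximate total training FLOPs as 6 * parameters * training tokens.

    tokens_per_param=20 follows the Chinchilla compute-optimal heuristic.
    """
    n_tokens = tokens_per_param * n_params
    return 6.0 * n_params * n_tokens

flops = training_flops(10e12)  # hypothetical 10T-parameter model
print(f"{flops:.1e}")  # prints 1.2e+28, the same ballpark as the 10^29 cited
```

Under these assumptions a 10T-parameter run lands at ~10^28 FLOPs, so the 10^29 figure in the text implies training somewhat beyond the compute-optimal token budget, which is consistent with the "unprecedented scale" framing.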
MatX is positioned to disrupt AI hardware with its LLM-first chips, potentially capturing share from Nvidia as clusters scale to millions of units. Trends such as multimodal models and edge AI should amplify demand for its efficient, cluster-friendly design, with tape-out and first shipments likely driving growth in 2026 on the back of its Series B. Its influence could evolve from niche innovator to ecosystem enabler, making elite AI broadly accessible and fulfilling its mission of physics-bound efficiency for a world of personalized AI assistants.[2][5] This positions MatX as a high-momentum bet in the AI gold rush, with its founders' Google roots informing its push to redefine the compute frontier.