
LMArena Raises $100M for AI Evaluation Platform


June 19, 2025

LMArena, an open community platform for AI model evaluation, has raised $100 million in seed funding to enhance its infrastructure for transparent and reliable AI performance assessments. The platform’s relaunch next week will set a new standard for scientific rigor in AI development.

Quick Intel

  • LMArena secures $100M in seed funding led by a16z and UC Investments.
  • Platform relaunch next week features rebuilt UI and mobile-first design.
  • Over 400 model evaluations and 3M votes shape AI industry standards.
  • Open, transparent evaluation tests models with real-world prompts.
  • Funding to expand analytics and enterprise services while keeping core free.
  • Backed by Lightspeed, Laude Ventures, Felicis, Kleiner Perkins, The House Fund.

$100M Seed Funding Boosts AI Evaluation

On May 21, 2025, LMArena, a San Francisco-based open community platform, announced a $100 million seed funding round led by a16z and UC Investments, with participation from Lightspeed, Laude Ventures, Felicis, Kleiner Perkins, and The House Fund. The funding supports the platform’s relaunch next week, introducing a faster, rebuilt interface designed for rigorous, transparent, and human-centered AI evaluation. “In a world racing to build ever-bigger models, the hard question is no longer what can AI do. Rather, it’s how well can it do it for specific use cases, and for whom,” said Anastasios N. Angelopoulos, co-founder and CEO at LMArena.

Transparent Infrastructure for AI Performance

LMArena addresses a long-standing gap in AI evaluation by offering a neutral, community-driven platform that tests models from industry leaders including Google, OpenAI, Meta, and xAI. With over 400 model evaluations and 3 million votes, it provides critical insight into real-world AI performance. “AI evaluation has often lagged behind model development,” said Ion Stoica, co-founder at LMArena and UC Berkeley professor. “LMArena closes that gap by putting rigorous, community-driven science at the center. It’s refreshing to be part of a team that leads with long-term integrity in a space moving this fast.” The platform’s open leaderboard mechanics and diverse prompts are designed to keep rankings transparent and reliable.
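To illustrate the kind of leaderboard mechanics described above: community platforms of this sort typically aggregate pairwise votes into ratings, often with an Elo-style or Bradley-Terry scheme. The sketch below is a minimal, hypothetical Elo update loop for illustration only; the model names, starting rating of 1000, and K-factor of 32 are assumptions, not LMArena's actual methodology.

```python
from collections import defaultdict

def update_elo(ratings, model_a, model_b, winner, k=32):
    """Apply one pairwise vote to Elo-style ratings.

    winner: "a", "b", or "tie". Updates are zero-sum, so the
    total rating mass across all models is conserved.
    """
    ra, rb = ratings[model_a], ratings[model_b]
    # Expected score of model A against model B (logistic curve).
    expected_a = 1 / (1 + 10 ** ((rb - ra) / 400))
    score_a = {"a": 1.0, "b": 0.0, "tie": 0.5}[winner]
    ratings[model_a] = ra + k * (score_a - expected_a)
    ratings[model_b] = rb + k * ((1 - score_a) - (1 - expected_a))
    return ratings

# Start every model at 1000 and replay a few hypothetical votes.
ratings = defaultdict(lambda: 1000.0)
votes = [("model-x", "model-y", "a"),
         ("model-x", "model-z", "tie"),
         ("model-y", "model-z", "b")]
for a, b, w in votes:
    update_elo(ratings, a, b, w)

# Rank models by rating, highest first.
leaderboard = sorted(ratings.items(), key=lambda kv: -kv[1])
```

In practice, production leaderboards refine this with confidence intervals and order-independent fitting (e.g., a Bradley-Terry maximum-likelihood estimate over all votes), but the core idea is the same: many small pairwise human judgments aggregate into a stable ranking.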

Relaunch Enhances Platform Capabilities

The relaunch at lmarena.ai introduces a mobile-first design, lower latency, and features like saved chat history and endless chat, driven by community feedback. The legacy site will remain active temporarily, but future innovations will focus on the new platform. “Our mission has always been to make AI evaluation open, scientific, and grounded in how people actually use these models,” said Wei-Lin Chiang, co-founder and CTO of LMArena. The funding will enhance analytics and enterprise services while keeping core access free for all users.

Industry Backing and Long-Term Vision

LMArena’s investors highlight its role in advancing AI reliability. “We invested in LMArena because the future of AI depends on reliability,” said Anjney Midha, General Partner at a16z. “And reliability requires transparent, scientific, community-led evaluation. LMArena is building that backbone.” Jagdeep Singh Bachher, chief investment officer at UC Investments, added, “We’re excited to see open AI research translated into real-world impact through platforms like LMArena.” The platform collaborates with model providers to analyze performance trends and test updates, with plans to expand into new modalities.

LMArena’s $100 million funding and upcoming relaunch position it as a leader in AI evaluation, promoting trust and scientific rigor in a fast-moving industry. By focusing on transparent, community-driven insights, LMArena is poised to shape how real-world AI performance is measured and compared.

 

About LMArena

LMArena is an open platform where everyone has access to leading AI models and can contribute to their progress through real-world voting and feedback. Built with scientific rigor and transparency at its core, LMArena enables developers, researchers, and users to compare model outputs, uncover performance differences, and advance the reliability of AI systems. With a commitment to open access, reproducible methods, and diverse human judgment, LMArena is shaping the infrastructure layer AI needs to earn long-term trust.
