
PyTorch Foundation Adds Helion Project for Simplified Kernel Authoring


by: PR Newswire | April 7, 2026

The PyTorch Foundation, a community-driven hub for open source AI under the Linux Foundation, today announced that it has welcomed Helion as its newest foundation-hosted project alongside DeepSpeed, PyTorch, Ray, and vLLM. This contribution by Meta addresses a critical layer of the AI stack, making kernel authoring a first-class part of PyTorch by strengthening custom kernel creation and reducing manual coding effort through autotuning.

Quick Intel

  • PyTorch Foundation adds Helion as its newest hosted project, contributed by Meta, to enhance kernel authoring in the PyTorch ecosystem.
  • Helion simplifies writing high-performance machine learning kernels with automated ahead-of-time autotuning and greater hardware portability.
  • The project supports the growing AI inference boom by addressing cross-platform compatibility challenges across diverse hardware and model architectures.
  • Helion is a Python-embedded DSL that compiles to multiple backends including Triton and TileIR, raising the abstraction level for kernel developers.
  • ExecuTorch is also moving into PyTorch Core to extend on-device and edge capabilities under open community governance.
  • The addition strengthens the open AI stack, making it more portable and accessible for developers building production-grade AI.

Helion joins the Foundation as AI model development expands from training into an inference boom, elevating the importance of serving models at scale. In a landscape where hardware, software, and model architectures are all shifting simultaneously, engineering teams face significant cross-platform compatibility hurdles. Helion removes bottlenecks tied to model architectures and execution, giving developers radically simpler kernels, automated ahead-of-time autotuning, and greater hardware performance portability.

"Helion joining the PyTorch Foundation as its newest project reflects where the open AI ecosystem needs to go next: higher-level performance portability for kernel authors," said Matt White, Global CTO of AI at the Linux Foundation and CTO of the PyTorch Foundation. "Helion gives engineers a much more productive path to writing high-performance kernels, including autotuning across hundreds of candidate implementations for a single kernel. As part of the PyTorch Foundation community, this project strengthens the foundation for an open AI stack that is more portable and significantly easier for the community to build on."

Helion is a Python-embedded domain-specific language (DSL) for authoring machine learning kernels, designed to compile down to multiple backends to support hardware heterogeneity (currently Triton and TileIR, with more coming soon). Helion aims to raise the level of abstraction relative to lower-level kernel languages, making it easier to write efficient kernels while enabling more automation in the autotuning process.
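The core idea behind ahead-of-time autotuning can be illustrated without Helion itself. The sketch below is a conceptual, pure-Python illustration (it is not Helion's API, and the function and candidate names are invented for this example): several candidate implementations of the same operation are benchmarked once up front, and the fastest one is selected for subsequent use. Helion applies this idea at the kernel level, searching across many candidate implementations per kernel.

```python
import timeit

# Conceptual sketch only -- NOT Helion's actual API. It illustrates
# ahead-of-time autotuning: benchmark candidate implementations of the
# same operation once, then commit to the fastest.

def sum_loop(xs):
    # Candidate 1: explicit Python loop.
    total = 0
    for x in xs:
        total += x
    return total

def sum_builtin(xs):
    # Candidate 2: built-in sum().
    return sum(xs)

def autotune(candidates, sample_input, repeats=50):
    """Time each candidate on a sample input; return the fastest one."""
    best_fn, best_time = None, float("inf")
    for fn in candidates:
        elapsed = timeit.timeit(lambda: fn(sample_input), number=repeats)
        if elapsed < best_time:
            best_fn, best_time = fn, elapsed
    return best_fn

data = list(range(10_000))
fastest = autotune([sum_loop, sum_builtin], data)
# All candidates compute the same result; only the chosen speed differs.
assert fastest(data) == sum(data)
```

In a real kernel autotuner the candidates would be generated automatically (different tile sizes, loop orders, and so on) rather than hand-written, and the winning configuration would be cached so the search cost is paid once, ahead of time.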

In addition to Helion joining the Foundation, ExecuTorch is becoming part of PyTorch Core. Started at Meta, ExecuTorch continues to extend PyTorch model functionality to edge and on-device environments under the Foundation, ensuring that ecosystem and technical decisions are made in an open, community-guided manner.

"Helion brings kernel authoring into PyTorch – making it simpler, portable, and accessible to every developer. Joining the PyTorch Foundation opens Helion to the broader hardware ecosystem, so developers write one kernel and it runs fast everywhere." – Jana van Greunen, Director of PyTorch Engineering, Meta

"By bringing Helion into the PyTorch Foundation community, we are meeting the technical frontier of AI head on. The project provides a vital layer of abstraction that makes it easier for developers to target different architectures and accelerate AI adoption. This addition is integral to shaping and fueling production-grade AI across industries." – Mark Collier, Executive Director, PyTorch Foundation

Developers and contributors interested in participating in the PyTorch project ecosystem are encouraged to join the community onsite at upcoming events like PyTorch Conference China (Shanghai, September 8-9) and PyTorch Conference North America (San Jose, October 20-21).


About the PyTorch Foundation

The PyTorch Foundation is a community-driven hub supporting the open source PyTorch framework and a broader portfolio of innovative open source AI projects, including DeepSpeed, Helion, PyTorch, Ray, and vLLM. Hosted by the Linux Foundation, the PyTorch Foundation provides a vendor-neutral, trusted home for collaboration across the AI lifecycle—from model training and inference, to domain-specific applications. Through open governance, strategic support, and a global contributor community, the PyTorch Foundation empowers developers, researchers, and enterprises to build and deploy AI at scale. Learn more at https://pytorch.org/foundation.


About the Linux Foundation

The Linux Foundation is the world's leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects are critical to the world's infrastructure, including Linux, Kubernetes, LF Decentralized Trust, Node.js, ONAP, OpenChain, OpenSSF, PyTorch, RISC-V, SPDX, Zephyr, and more. The Linux Foundation focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

  • Open Source AI, Machine Learning, AI Kernel