
Inside the First OPI Summit: Advancing an Open Ecosystem for DPU/IPUs

November 24, 2025

On October 17, our community came together at The Tech Interactive in San Jose, CA for the very first OPI Summit on DPU/IPUs, co-located with the OCP Global Summit. The half-day event brought hardware vendors, cloud and data center operators, software developers, and open source contributors into one room to talk about where Data Processing Units (DPUs) and Infrastructure Processing Units (IPUs) are headed, and what it will take to make them truly open, interoperable, and production-ready.

In the spirit of open source, the summit was designed as a technical, community-first gathering rather than a vendor showcase. 

Why now?

DPUs and IPUs have moved from niche accelerators to core building blocks in modern infrastructure. They offload critical functions (e.g., networking, security, storage, observability, and increasingly AI) from the CPU, enabling higher performance and more efficient use of compute. But with that power comes fragmentation: different hardware, different software stacks, and limited portability for applications.

The OPI Summit was created to address exactly that. The program focused on: 

  • Real-world applications on DPU/IPUs: from networking and security to storage, HPC, and AI infrastructure
  • Provisioning, lifecycle, and fleet management: how to actually scale thousands of DPUs/IPUs in production
  • Kubernetes and cloud-native integration: operators, resource management, and control-plane patterns
  • Standard APIs and interoperability: common models that make software portable across different vendors
  • Proofs-of-concept and experimental results: what’s working in the lab and what’s already in the field

That mix set the tone for a day that was both highly technical and grounded in real deployments.

Opening: the state of the OPI ecosystem

The summit opened with context on how far the OPI community has come since the project’s launch: a growing set of member companies, multiple working groups, and early demos that show what’s possible when APIs and behavior are standardized across accelerators. 

The opening keynote looked at OPI’s evolution from “what is a DPU/IPU?” to “how do we run production workloads on them safely and consistently?” Speakers highlighted:

  • The need for vendor-neutral APIs and behavioral models so applications can run across different devices
  • Progress in security and networking use cases, including IPsec, load balancing, and observability offload
  • How DPU/IPU adoption is increasingly tied to AI and data-intensive workloads, from inference pipelines to secure data paths

A recurring theme: DPUs and IPUs are no longer a “science project”; they’re part of mainstream infrastructure planning, and the ecosystem now needs common frameworks, not one-off integrations.

Technical deep dives: from onboarding to automation

Across the technical sessions, speakers dug into what it takes to move from a single DPU/IPU proof-of-concept to a repeatable, scalable deployment model.

Highlights included:

Provisioning, lifecycle, and fleet management

Sessions on onboarding, service deployment, and SZTP-driven (Secure Zero Touch Provisioning) automation explored how operators can treat DPUs/IPUs as first-class managed assets, not snowflake devices hiding in servers. Topics included:

  • Vendor-agnostic provisioning flows
  • Secure boot and firmware update strategies
  • Integrating DPU/IPU lifecycle into existing automation and observability stacks
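To make the provisioning discussion concrete, here is a minimal sketch of a vendor-agnostic onboarding flow in the spirit of those sessions: attest a device, bring its firmware up to a fleet baseline, and record it in inventory. All names (`Fleet`, `Dpu`, the firmware baseline) are illustrative assumptions, not OPI APIs.

```python
from dataclasses import dataclass

# Hypothetical sketch of a vendor-agnostic DPU/IPU onboarding flow.
# Class names, fields, and the attestation policy are illustrative only.

@dataclass
class Dpu:
    serial: str
    vendor: str
    firmware: str
    trusted: bool = False

class Fleet:
    """Tracks DPUs/IPUs as first-class managed assets, not snowflake devices."""

    MIN_FIRMWARE = "2.1.0"  # assumed fleet-wide firmware baseline

    def __init__(self):
        self.inventory: dict[str, Dpu] = {}

    def onboard(self, dpu: Dpu) -> Dpu:
        # 1. Attest the device before it joins the fleet (stand-in for
        #    secure-boot / device-identity verification).
        dpu.trusted = self._attest(dpu)
        if not dpu.trusted:
            raise ValueError(f"attestation failed for {dpu.serial}")
        # 2. Bring firmware up to the fleet baseline if it lags.
        if self._version(dpu.firmware) < self._version(self.MIN_FIRMWARE):
            dpu.firmware = self.MIN_FIRMWARE  # stand-in for a real update
        # 3. Record the device in inventory for lifecycle tracking.
        self.inventory[dpu.serial] = dpu
        return dpu

    @staticmethod
    def _attest(dpu: Dpu) -> bool:
        # Placeholder policy: accept any device with a non-empty serial.
        return bool(dpu.serial)

    @staticmethod
    def _version(v: str) -> tuple:
        return tuple(int(p) for p in v.split("."))

fleet = Fleet()
fleet.onboard(Dpu(serial="SN-001", vendor="acme", firmware="1.9.3"))
print(fleet.inventory["SN-001"].firmware)  # upgraded to the baseline
```

The point of the pattern, echoed in the sessions, is that discovery, attestation, update, and inventory become one repeatable pipeline rather than per-vendor scripts.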

Kubernetes and cloud-native use cases

Talks focused on how to bring DPUs/IPUs into Kubernetes and container-native environments, including:

  • Using operators and CRDs to represent DPU/IPU resources
  • Scheduling workloads that span CPU and DPU/IPU capabilities
  • Handling multi-tenant and multi-cluster scenarios
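The operator/CRD approach discussed in these talks can be sketched as a reconcile loop that drives observed DPU/IPU state toward the spec of a custom resource. The `DpuService` kind, its API group, and its fields below are hypothetical, not an actual OPI or vendor CRD.

```python
# Hypothetical sketch of the Kubernetes operator pattern applied to a
# DPU/IPU-backed service. The resource shape mimics how Kubernetes hands
# a custom resource to a controller (as decoded JSON).

def reconcile(resource: dict, cluster: dict) -> dict:
    """One reconcile pass: make observed state match the resource spec."""
    name = resource["metadata"]["name"]
    desired = resource["spec"]["replicas"]
    observed = cluster.get(name, 0)
    if observed != desired:
        cluster[name] = desired  # stand-in for programming the DPU/IPU
    resource["status"] = {"readyReplicas": cluster[name]}
    return resource

# A custom resource instance as the operator would receive it.
ipsec_offload = {
    "apiVersion": "example.opi.dev/v1alpha1",  # hypothetical group/version
    "kind": "DpuService",
    "metadata": {"name": "ipsec-gateway"},
    "spec": {"replicas": 3},
}

cluster_state: dict = {}
result = reconcile(ipsec_offload, cluster_state)
print(result["status"]["readyReplicas"])  # 3
```

Because the loop only compares desired and observed state, repeated reconciles are idempotent, which is what lets the same controller manage DPUs/IPUs across multi-tenant and multi-cluster setups.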

Speakers reinforced that success here depends on consistent APIs and behavior models—core focus areas for the OPI API & Behavioral Model and Provisioning & Platform Management working groups. 

AI, security, storage, and HPC

In line with OPI’s broader work on AI inference and infrastructure, several sessions showed how DPUs/IPUs are being used to secure and accelerate real workloads, for example: 

  • Offloading security and network services to DPUs/IPUs to free CPU cycles
  • Using IPU-based pipelines to improve AI inference performance and isolation
  • Applying DPUs/IPUs in HPC and disaggregated storage architectures

Across these talks, the message was clear: DPU/IPU value is no longer hypothetical; it is being validated in concrete, measurable scenarios.

Key takeaways

One of the most important outcomes of the summit is how the content feeds directly into ongoing OPI work: 

  • The API & Behavioral Model WG gained new input on the kinds of abstractions needed for networking, security, storage, AI, and HPC workloads.
  • The Provisioning & Platform Management WG gathered more real-world requirements around device discovery, secure onboarding, inventory, and updates at fleet scale.
  • The Dev Platform / PoC WG can now prioritize reference architectures and testbeds that mirror how attendees are actually deploying DPUs/IPUs.
  • The Use Case WG heard fresh use cases from operators, integrators, and ISVs, which will help prioritize future demos and reference implementations.

In other words, the summit didn’t just showcase the current state of OPI; it created a feedback loop to accelerate the next wave of specifications, demos, and open source code.

Watch the sessions & get involved!

If you couldn’t join us in San Jose, you can still watch the full summit recording on the OPI YouTube channel (and please subscribe to stay up to date on the latest OPI video content!).

Ready to help shape the future of DPU/IPU-based infrastructure? Join an OPI working group (API & Behavioral Model, Provisioning & Platform Management, Dev Platform/PoC, Use Case, Outreach). Bring your use cases, PoCs, and experimental results to the community.

The first OPI Summit on DPU/IPUs showed that this ecosystem is ready to move from experimentation to shared, open foundations. We’re excited to build on that momentum together with the broader community.