Open Programmable Infrastructure in 2025: From Lab PoCs to a Growing DPU/IPU Ecosystem

December 18, 2025

2025 was a pivotal year for the Open Programmable Infrastructure (OPI) Project. As DPUs and IPUs moved firmly into the mainstream of modern infrastructure design, our community focused on turning early blueprints into shared, open building blocks: a more capable OPI Lab, maturing APIs and bridges, new members, and our first dedicated OPI Summit on DPU/IPUs.

Together, these efforts brought us closer to our core mission: a community-driven, standards-based open ecosystem for next-generation architectures based on DPU/IPU-like technologies. 

Turning the OPI Lab into a PoC and Demo Engine

Following the 2024 announcement of the OPI Lab as a shared resource for testing and exploring a common provisioning and lifecycle management framework, the project spent 2025 shifting the lab from “environment setup” to “PoC engine.”

In May, we published “Accelerating the Next Phase of the Open Programmable Infrastructure (OPI) Lab”, outlining how the lab’s second phase is now focused on proof-of-concept development and real-world use cases. The OPI Lab provides:

  • A physical and virtual testbed where participants can validate PoCs and use cases.
  • Shared automation and observability tooling—Ansible playbooks, Grafana dashboards, Secure Zero Touch Provisioning (sZTP), Redfish workflows, and detailed hardware configurations (see the Redfish sketch after this list).
  • A structured place for working groups to develop, test, and refine specs and demos.
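
To make the Redfish piece concrete, below is a minimal Go sketch of the kind of call those provisioning workflows build on: listing a server’s systems collection from its BMC via the standard /redfish/v1/Systems endpoint. The BMC address and credentials are placeholders rather than OPI Lab configuration, and in practice the lab drives these calls through Ansible playbooks rather than hand-written clients.

```go
// Minimal sketch: query a BMC's Redfish service for its computer-system
// collection, the kind of call sZTP and provisioning workflows build on.
// The endpoint address and credentials are placeholders, not lab values.
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Many lab BMCs present self-signed certificates; skip verification in this sketch only.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	// /redfish/v1/Systems is the standard Redfish computer-system collection.
	req, err := http.NewRequest("GET", "https://bmc.example.lab/redfish/v1/Systems", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.SetBasicAuth("admin", "password") // placeholder credentials

	resp, err := client.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Each member is a link to one managed system on this BMC.
	var collection struct {
		Members []struct {
			ID string `json:"@odata.id"`
		} `json:"Members"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&collection); err != nil {
		log.Fatal(err)
	}
	for _, m := range collection.Members {
		fmt.Println("system:", m.ID)
	}
}
```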

Throughout 2025, the Lab repo and infrastructure continued to evolve with hundreds of commits and documented procedures, making it easier for new contributors to plug into an existing CI, automation, and observability pipeline. 

Advancing APIs, Bridges, and Reference Implementations

OPI’s technical work in 2025 continued to center on a vendor-neutral API and behavioral model that can span networking, security, storage, telemetry, AI/ML, and more. The OPI API and Behavioral Model repo added and refined object models and protobuf definitions across these domains, maintaining a strong focus on taxonomy, capabilities, and alignment with concrete use cases. 

To connect those abstractions to real hardware and software stacks, the community advanced multiple bridge implementations, including:

  • OPI EVPN Bridge – A gRPC-to-EVPN gateway bridge whose v0.2.0 release in June 2025 added richer tests, pagination support, TLS options, Redis integration, and updated dependencies.
  • OPI SPDK Bridge – A gRPC-to-SPDK JSON-RPC bridge whose v0.1.1 release in February 2025 brought API updates, crypto device examples, NVMe controller operations, and additional telemetry and CI improvements.

These bridges are critical proof points: they show how a common OPI API can be realized across different storage and networking stacks while preserving portability and giving implementers clear patterns to follow.
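
To illustrate that pattern, here is a rough Go sketch of the southbound half of a storage bridge: sending a JSON-RPC request to a local SPDK target over its default Unix socket. In the actual OPI SPDK Bridge, calls like this sit behind gRPC services generated from the OPI API; that front end is omitted here, and the socket path and method name are standard SPDK defaults rather than anything OPI-specific.

```go
// Sketch of the backend translation a storage bridge performs: issue a
// JSON-RPC call to a local SPDK target. The gRPC front end defined by the
// OPI API is intentionally left out of this example.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net"
)

// jsonRPCRequest models a JSON-RPC 2.0 request as accepted by SPDK's RPC server.
type jsonRPCRequest struct {
	Version string      `json:"jsonrpc"`
	ID      int         `json:"id"`
	Method  string      `json:"method"`
	Params  interface{} `json:"params,omitempty"`
}

func main() {
	// /var/tmp/spdk.sock is SPDK's default RPC socket; adjust for your target.
	conn, err := net.Dial("unix", "/var/tmp/spdk.sock")
	if err != nil {
		log.Fatalf("connect to SPDK: %v", err)
	}
	defer conn.Close()

	// Ask the target for its block devices; a bridge would map this result
	// back into the vendor-neutral response types defined by the OPI API.
	req := jsonRPCRequest{Version: "2.0", ID: 1, Method: "bdev_get_bdevs"}
	if err := json.NewEncoder(conn).Encode(req); err != nil {
		log.Fatalf("send request: %v", err)
	}

	var resp map[string]interface{}
	if err := json.NewDecoder(conn).Decode(&resp); err != nil {
		log.Fatalf("read response: %v", err)
	}
	fmt.Printf("bdevs: %v\n", resp["result"])
}
```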

Growing the Community: New Members and Collaborators

Ecosystem growth was another highlight of 2025. New organizations joined OPI to help shape standards, contribute code, and validate real-world deployments.

In February, VyOS Networks announced it was joining both the Linux Foundation and OPI. As the company behind the VyOS open source network operating system, VyOS is working with the OPI community to bring DPU/IPU-aware networking into a flexible, open NOS. 

Our OPI Member Spotlight: VyOS Networks blog explored why VyOS joined OPI and how they plan to contribute—ranging from an open OS for DPUs to active participation in working groups and discussions about how to standardize interactions with heterogeneous accelerators. 

Additional companies, including FusionLayer, publicly announced that they were joining the Open Programmable Infrastructure Project to help telecoms and enterprises reduce operational complexity, accelerate service delivery, and increase infrastructure agility using programmable infrastructure. 

These new members complement existing premier and general members from across silicon, systems, networking, and software, reinforcing OPI’s role as a neutral collaboration space rather than a single-vendor stack.

Real-World Use Cases: AI Inference, Security, and Cloud-Native Platforms

Moving beyond conceptual discussions, 2025 also showcased concrete DPU/IPU solutions that align with OPI’s vision.

A notable example was the post “Enhanced AI Inference Security with Intel OpenVINO: Leveraging Intel IPU, F5 NGINX Plus, and Red Hat OpenShift,” republished on the OPI blog in June. This solution demonstrates:

  • Offloading infrastructure tasks—traffic routing, decryption, and access control—to an Intel E2100 IPU, freeing host CPUs to focus on AI inference workloads.
  • A clear separation of responsibilities between infrastructure and application administrators, improving operational clarity and security. 
  • Use of Red Hat OpenShift and MicroShift DPU operators to automate FXP rule creation and secure PCIe traffic, turning complex access control into a dynamic, policy-driven process.

This kind of pattern—AI and security workloads accelerated by DPUs/IPUs, integrated with Kubernetes-native tooling—is exactly where OPI aims to provide standardized APIs, lifecycle management patterns, and lab-validated reference architectures.
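
As a purely illustrative sketch of what “Kubernetes-native” can mean in this context, the Go snippet below builds a pod spec that requests a hypothetical extended resource advertised by a DPU device plugin. The resource name, image, and pod layout are invented for the example and are not taken from the Intel, F5, or Red Hat solution described above.

```go
// Sketch: a pod that asks the scheduler for a DPU-backed capability exposed
// as a Kubernetes extended resource. The resource name "dpu.example.org/
// inference-offload" and the image are hypothetical placeholders.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "inference-server"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "inference-server",
				Image: "example.registry/inference-server:latest", // placeholder image
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						// Hypothetical extended resource published by a DPU device plugin;
						// the scheduler places the pod only on nodes that advertise it.
						corev1.ResourceName("dpu.example.org/inference-offload"): resource.MustParse("1"),
					},
				},
			}},
		},
	}

	// Print the manifest so the sketch can be inspected or applied by hand.
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```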

Bringing the Community Together: The First OPI Summit on DPU/IPUs

The biggest community milestone of the year was the first-ever OPI Summit on DPU/IPUs, held October 17 at The Tech Interactive in San Jose, co-located with the OCP Global Summit.


Our recap post, “Inside the First OPI Summit: Advancing an Open Ecosystem for DPU/IPUs,” highlights how the event brought together hardware vendors, cloud and data center operators, software developers, and open source contributors to discuss where DPUs/IPUs are headed and what it takes to make them open, interoperable, and production-ready. Key themes from the summit included:

  • Why now: DPUs/IPUs moving from niche accelerators to core infrastructure components for networking, security, storage, observability, and AI.
  • Provisioning and lifecycle at scale: Vendor-agnostic onboarding, secure boot, firmware management, and fleet-scale lifecycle management.
  • Kubernetes and cloud-native integration: Operators, CRDs, and scheduling patterns that span CPU and DPU/IPU resources across multi-tenant and multi-cluster environments.
  • Workload-driven use cases: Security offload, AI inference pipelines, HPC, and disaggregated storage deployments that demonstrate concrete gains from programmable infrastructure.

Importantly, the summit content is feeding directly into OPI working groups—API & Behavioral Model, Provisioning & Platform Management, Dev Platform/PoC, and Use Cases—creating a tight feedback loop between operators’ needs and the specifications and code we build next. 

For anyone who couldn’t attend, the full summit recording is available on the OPI YouTube channel. 

OPI as the “Operating System” for DPUs

A recurring theme at the summit was OPI as a kind of “operating system” for DPUs/IPUs: not replacing Linux on the device, but acting as the common control and integration layer that makes these accelerators usable at scale.

Every DPU/IPU today comes with its own drivers, SDKs, and programming model. That’s great for innovation, but painful for operators who need consistent behavior across mixed fleets. OPI tackles this by defining shared APIs, behavioral models, and lifecycle patterns so higher-level software can treat DPUs/IPUs as a well-defined class of infrastructure, regardless of vendor.

In this sense:

  • Southbound, OPI connects to diverse silicon, firmware, and vendor-specific capabilities.
  • Northbound, it exposes stable, cloud-native interfaces to platforms like Kubernetes or custom control planes.

This abstraction is what turns DPUs/IPUs from bespoke appliances into first-class, schedulable resources in modern data centers, and why so many in the community described OPI as the emerging “OS for DPUs.”
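
A toy Go sketch of that layering, using entirely hypothetical names, might look like the following: a northbound interface the control plane programs against, and a vendor-specific southbound implementation hidden behind it. None of the types below come from OPI’s repositories; they only illustrate the role the APIs and bridges play.

```go
// Illustrative sketch (not OPI code): modeling DPUs/IPUs as one well-defined
// class of resource behind a vendor-neutral interface.
package main

import "fmt"

// DeviceManager is the kind of northbound contract a platform could program
// against, regardless of which silicon sits underneath.
type DeviceManager interface {
	Provision(deviceID string) error
	AttachNetwork(deviceID, network string) error
}

// acmeDPU stands in for a vendor-specific southbound implementation that
// would normally talk to the device's own SDK or an OPI bridge.
type acmeDPU struct{}

func (acmeDPU) Provision(deviceID string) error {
	fmt.Println("provisioning", deviceID, "via vendor SDK / OPI bridge")
	return nil
}

func (acmeDPU) AttachNetwork(deviceID, network string) error {
	fmt.Println("attaching", deviceID, "to network", network)
	return nil
}

func main() {
	// The orchestrator only sees the northbound interface; swapping vendors
	// means swapping the southbound implementation, not the control plane.
	var mgr DeviceManager = acmeDPU{}
	_ = mgr.Provision("dpu-0")
	_ = mgr.AttachNetwork("dpu-0", "tenant-a")
}
```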

Looking Ahead to 2026

If 2025 was about proving that DPUs and IPUs are real, impactful components of production infrastructure, 2026 will be about making them boring—in the best possible way.

Our priorities include:

  • Deepening the OPI Lab’s role as a shared CI, demo, and PoC environment, with more test coverage across networking, storage, security, AI/ML, and edge use cases. 
  • Continuing to evolve the OPI API and bridges so that operators and developers can rely on stable, interoperable abstractions rather than one-off integrations.
  • Expanding real-world demonstrations in areas like AI inference, secure multi-tenant networks, disaggregated storage, and software-defined vehicles. 
  • Growing the member and contributor base so that more vendors, integrators, and operators can help shape the direction of programmable infrastructure. 

If you’re building or deploying DPUs/IPUs (or just starting to explore how they fit into your environment), now is the ideal time to get involved:

  • Join an OPI working group (API & Behavioral Model, Provisioning & Platform Management, Dev Platform/PoC, Use Cases, Outreach). 
  • Contribute to the OPI Lab by sharing hardware, PoCs, or test cases. 
  • Bring your use cases, requirements, and experiences to the community so we can design APIs and frameworks that work for everyone.

On behalf of the OPI Project community, thank you to everyone who contributed code, specs, demos, talks, and feedback in 2025. We’re excited to keep building an open, programmable future for infrastructure, together.