
k0rdent reduces operational complexity and integrates with full-stack NVIDIA accelerated computing and software, as well as other technologies
KUBECON--Mirantis, a leader in open-source cloud infrastructure and platform engineering, today announced that Netherlands-based private cloud service provider Nebul has deployed open source k0rdent to deliver an on-demand service that enables customers to run production AI inference (applying a trained model to new data to generate predictions or decisions) workloads.
Nebul has always focused on privacy and sovereignty with a broad European reach, and is a pioneer in high-performance computing, artificial intelligence (AI), and machine learning technologies. The company, an NVIDIA Elite Partner, NVIDIA Cloud Partner, and Elite NVIDIA Solution Provider, is leveraging k0rdent's Kubernetes-native multi-cluster management, which is integrated with the NVIDIA GPU Operator and Gcore Everywhere Inference. (More details in this case study.)
Nebul can now support distributed AI inference across its NVIDIA-accelerated infrastructure, delivering low latency and high performance, with dynamically provisioned processing resources that meet demand and policy-driven automation that optimizes GPU utilization for maximum efficiency.
"We believe open source is the enabler for infrastructure to support AI," said Alex Freedland, co-founder and CEO of Mirantis, the maintainer of k0rdent. "Nebul is demonstrating the enormous potential of open technologies to solve one of the most complex challenges in IT today: delivering AI workloads reliably at scale."
Launched last month, k0rdent helps platform engineers manage infrastructure sprawl and operational complexity across cloud service providers, on-premises infrastructure, and edge devices. It simplifies maintenance with declarative automation, centralized policy enforcement, and production-ready templates optimized for modern workloads. k0rdent is fully composable and leverages the open source Cluster API, so that new Kubernetes clusters can be created and existing clusters can be managed anywhere.
"As demand for AI services grows, our challenge was transitioning our existing infrastructure," said Arnold Juffer, CEO and founder at Nebul. "Using k0rdent enables us to effectively unify our diverse infrastructure across OpenStack and bare metal Kubernetes, while sunsetting the VMware technology stack and fully transforming to open source to streamline operations and accelerate our shift to Inference-as-a-Service for enterprise customers. Now, they can bring their trained AI model to their data and just run it with assurance of privacy and sovereignty in accordance with regulations. It's as simple as that."
"As Nebul is demonstrating, AI inference at scale requires infrastructure that dynamically adapts to end customer needs, ensuring AI applications meet the business requirements from a cost and performance standpoint," said Seva Vayner, product director of Edge Cloud and Edge AI at Gcore. "Our Everywhere Inference product provides a super-simple portal to deploy and manage AI inference, while Smart Routing ensures inference tasks are sent to the nearest GPUs for the lowest latency."
According to NVIDIA, "AI models are rapidly expanding in size, complexity, and diversity, pushing the boundaries of what's possible. For the successful use of AI inference, organizations need a full-stack approach that supports the end-to-end AI life cycle and tools that enable teams to meet their goals in the new scaling laws era."
Mirantis and Gcore announced an agreement at NVIDIA GTC on March 18.
Anyone attending KubeCon in London can learn more about k0rdent and see a demo for AI inference at one of the following locations:
- KubeCon main event (April 2-4): Mirantis booth N331
- Cloud Native Kubernetes AI Day Europe (colocated at KubeCon on April 1): Mirantis table on Level 1 in the N10 area.
Go here to request a personalized demo.
About Mirantis
Mirantis helps organizations simplify operations, reduce complexity, and accelerate innovation by providing open-source solutions for delivering and managing modern distributed applications at scale. The company enables platform engineering teams to build and operate secure, scalable, and customizable developer platforms across any environment: on-premises, public cloud, hybrid, or edge. As AI-driven workloads become a core component of modern architectures, Mirantis provides the automation, multi-cloud orchestration, and infrastructure flexibility required to support high-performance AI, machine learning, and data-intensive applications. Committed to open standards and avoiding vendor lock-in, Mirantis empowers organizations to deploy and operate infrastructure and services on their terms.
Mirantis serves many of the world's leading enterprises, including Adobe, Ericsson, Inmarsat, PayPal, and Societe Generale. Learn more at www.mirantis.com.
About Nebul
Nebul's European Private Sovereign AI Cloud provides world-class AI capabilities on your terms. We assure privacy to the core through military-grade isolation, sovereignty through the use of open-source standards, and compliance with EU legislation through all the required regulations and certifications.
Nebul's AI applications provide industry-specific alternatives to public AI services such as ChatGPT, Microsoft Copilot, Amazon Bedrock, and others, and are built for (European) organizations that want to enable AI and connect it to corporate data without the risk of data and IP loss. As an official NVIDIA Elite Partner, we are optimally equipped to help you navigate the complexities of deploying supercomputing solutions based on the latest NVIDIA technologies, optimized for training, fine-tuning, or inference, and strategically located in datacenters across the European continent.
Learn more at https://www.nebul.com.
View source version on businesswire.com: https://www.businesswire.com/news/home/20250402438159/en/
Contacts:
Joseph Eckert for Mirantis
jeckert@eckertcomms.com