PARIS — At OpenInfra Summit Europe 2025, NVIDIA wanted to make one thing very clear to AI developers, operators and users: If you want to run sensitive AI workloads on GPUs anywhere — on premises, in public clouds or at the edge — you need both virtual machine (VM)-level sandboxing and hardware-backed memory confidentiality. That means, said Zvonko Kaiser, an NVIDIA principal systems engineer, you should combine Kata Containers (lightweight VMs for containers) with Confidential Computing to preserve bare-metal GPU performance while preventing the…
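
In Kubernetes terms, the combination Kaiser describes is typically selected per pod through a RuntimeClass. The following is a minimal sketch, not a definitive configuration: it assumes a cluster where Kata Containers has already been deployed with a confidential-computing-capable runtime class, and the class name `kata-qemu-snp`, the pod name, and the container image are all illustrative placeholders.

```yaml
# Illustrative sketch only: runtime class names vary by Kata deployment
# (common examples include kata, kata-qemu-tdx and kata-qemu-snp).
apiVersion: v1
kind: Pod
metadata:
  name: confidential-gpu-inference
spec:
  runtimeClassName: kata-qemu-snp   # run this pod in a Kata VM sandbox on
                                    # confidential-computing-capable hardware
  containers:
  - name: inference
    image: example.com/llm-inference:latest   # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1           # GPU made available inside the sandboxed VM
```

With a spec along these lines, the pod runs inside its own lightweight VM rather than a shared kernel namespace, the hardware keeps the VM's memory encrypted, and the GPU is exposed to the workload inside that boundary — the sandboxing-plus-confidentiality pairing Kaiser argues for.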








