The goal of this project is to get an overview of the state of the art in training and deploying machine learning projects with Kubernetes, and to apply that to a SUSE CaaSP cluster.

With that in mind, we will train and deploy a model for summarizing GitHub issues.

This example will make use of the following technology:

For this project, I will use a workstation with an NVIDIA GeForce GTX 1060, which is supported by CUDA, and I will install SUSE CaaSP on it.

Looking for mad skills in:

machinelearning kubeflow keras seldoncore tensorflow cri-o kubernetes caasp nvidia cuda gpu containers

This project is part of:

Hack Week 17


  • 11 months ago: mbrugger liked Architecting a Machine Learning project with SUSE CaaSP
  • 11 months ago: jordimassaguerpla started Architecting a Machine Learning project with SUSE CaaSP
  • 11 months ago: jordimassaguerpla added keyword "containers" to Architecting a Machine Learning project with SUSE CaaSP
  • 11 months ago: jordimassaguerpla added keyword "gpu" to Architecting a Machine Learning project with SUSE CaaSP
  • 11 months ago: jordimassaguerpla added keyword "cuda" to Architecting a Machine Learning project with SUSE CaaSP


    • jordimassaguerpla
      11 months ago by jordimassaguerpla | Reply

I think I was a bit too ambitious when I wrote this description :) ... but it has been fun anyway.

This is what I accomplished:

• Setting up a SUSE CaaSP cluster where the admin and the master were running on top of KVM and the worker was a workstation with an NVIDIA GPU. The first trick was to set up the virtual machines to use the ethernet network interface from the host (macvtap). For whatever reason I could not set this up with virt-manager run as a "normal user", but I could if I started virt-manager from YaST (with root permissions... may that be the reason?). The second trick was to restrict the master to 2GB of RAM and the admin to 4GB, so I could run this on my laptop (thanks @ereslibre!). Finally, the third trick was to add "hostname=UNIQUE_HOSTNAME" as a linuxrc parameter when installing each machine (otherwise they would all be named linux.lan :) ).
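For reference, a macvtap interface in the libvirt domain XML looks roughly like this (a sketch; `eth0` is a placeholder for the host's actual ethernet interface name):

```xml
<!-- macvtap: attach the guest directly to the host's ethernet NIC -->
<interface type='direct'>
  <source dev='eth0' mode='bridge'/>
  <model type='virtio'/>
</interface>
```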

• Building nvidia packages for CaaSP. The nvidia packages built for SLE12SP3 by SUSE, but provided by nvidia at, had been built for an older kernel than the one released in CaaSP. Thus, when installing those packages, the nvidia kernel modules could not be loaded. For this reason, I built them for the latest kernel in openSUSE Leap 42.3 and installed them at the same time as I upgraded the kernel to the one in openSUSE Leap 42.3 (see [0] for why openSUSE Leap 42.3). You can download them from this project.

• Installing and fixing nvidia-runtime-hooks and libnvidia-containers: There is no package for SUSE, so I took the ones from CentOS 7 instead; the trick was to run a CentOS 7 container and follow the instructions from, but add the "--download-only" option to yum. Luckily, the packages installed without any error... but they were not really working! Using "strace nvidia-container-cli info" I realized the problem was the permissions of the /dev/nvidia* files. Thus, running "chmod 0666 /dev/nvidia*" fixed the installation... but you have to do this on every reboot (actually, every time the nvidia module is loaded). The trick was to use "transactional-update shell" to do all these changes :). Note I am not installing nvidia-container-runtime, but only the hook. That is because we will use cri-o and not docker; for cri-o we don't need to install nvidia-container-runtime.

See this output as "proof":

      nvidia-container-cli info

      NVRM version: 390.67
      CUDA version: 9.1

      Device Index: 0
      Device Minor: 0
      Model: GeForce GTX 1060 3GB
      GPU UUID: GPU-f96a76d4-7ba9-07cc-2774-bb7a55ef3e68
Bus Location: 00000000:…:00.0
      Architecture: 6.1
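Since the chmod has to be redone every time the nvidia module is loaded, one way to make it persistent (an untested sketch, not what I did; it assumes the device nodes all match /dev/nvidia*) would be a udev rule:

```
# /etc/udev/rules.d/99-nvidia-perms.rules (hypothetical path)
# Set world read/write on nvidia device nodes whenever they appear
KERNEL=="nvidia*", MODE="0666"
```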

• Setting up the cri-o hook to use libnvidia-container: I just had to follow the instructions here: I couldn't really verify this, but I am quite confident it worked, as kubelet was starting and parsing the hook.
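For reference, an OCI hook definition dropped into the hooks directory looks roughly like this. This is a sketch using the current 1.0.0 hook schema; the binary path comes from the CentOS 7 packages, and the schema supported by the CRI-O version in CaaSP at the time may have differed:

```json
{
  "version": "1.0.0",
  "hook": {
    "path": "/usr/bin/nvidia-container-runtime-hook",
    "args": ["nvidia-container-runtime-hook", "prestart"]
  },
  "when": { "always": true },
  "stages": ["prestart"]
}
```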

And this is where I failed:

• Using a chained forward proxy to add the workstation into a SUSE CaaSP cluster which was running in a SUSE Cloud cluster: I tried configuring 2 proxies with apache2 using mod_proxy, mod_proxy_http and mod_proxy_connect, where both were configured as forward proxies and the second one used the "ProxyRemote" directive to "chain" to the first one. Then I placed the first one inside the SUSE Cloud cluster, as a virtual machine, and the second one on my laptop. The trick worked, and I was able to access the autoyast file from the admin node, which was in the SUSE Cloud cluster (http://adminnode/autoyast), when installing the workstation via the DVD, even though the admin node was not accessible outside the SUSE Cloud cluster, and the SUSE Cloud cluster is inside the VPN, where the workstation is not (but the laptop is). It sounds a bit complicated, but actually the solution was quite simple. However, salt-minion does not use http but zeromq, and was not going through the proxies.
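The chained-proxy setup can be sketched with a few Apache directives like these (an illustration, not my exact config; the hostname and port of the inner proxy are placeholders):

```apache
# On the laptop (outer proxy): a forward proxy chained to the inner one
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so

ProxyRequests On
# Chain: send all requests through the proxy inside the SUSE Cloud cluster
ProxyRemote * http://inner-proxy.cloud.internal:8080

# Restrict who may use this forward proxy
<Proxy *>
    Require ip 192.168.0.0/16
</Proxy>
```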

• Building nvidia-container and libnvidia-container packages for SUSE: I tried getting the spec files from github, but they required so much tuning that it would have taken me the whole hackweek (or more) to get them building for SUSE, so I ended up using the ones from CentOS 7.

• Setting up k8s to schedule jobs that require a GPU: Even though cri-o seemed correctly configured, jobs were not being scheduled. Most docs I found on the internet referred to adding the "--experimental-nvidia-gpus=1" option to kubelet, but this is not possible because kubelet does not recognize this option and fails to start. Then I read in the k8s docs about enabling this via a device plugin: this requires enabling feature gates, which are disabled by default. Here I think I failed because I didn't know how to do it, and unfortunately I ran out of time... However, while writing this report, flavio pointed me to (thanks @flavio_castelli!) where you can see how to enable the feature gates. This is where we should resume the work if we have some time at some point.

• Run a kubeflow deployment: I didn't have time to reach this point. This was the last step and a project on its own. Next hackweek, maybe...

[0] Why openSUSE Leap 42.3? SLE12SP3 shares its common code base with openSUSE Leap 42.3, and for the hackweek I wanted to build the nvidia package in the Open Build Service. Using openSUSE Leap 42.3 (plus its update repo) was easier than trying to build for the exact kernel that had been shipped in CaaSPv3.

    • jordimassaguerpla
      11 months ago by jordimassaguerpla | Reply

      and thanks to @vrothberg for helping me out with cri-o/podman.

    • jordimassaguerpla
      11 months ago by jordimassaguerpla | Reply

The URL for how to enable the feature gates got formatted weirdly... This is the URL

and I think it is an internal document, so for those who do not have access:

      How to enable Kubernetes feature gates

Feature gates are a mechanism used by Kubernetes to enable experimental features in advance.

      It's possible to enable Kubernetes feature gates on SUSE CaaS Platform 3.

      Please note: feature gates are experimental features, hence they won't be supported by SUSE.

Let's assume a user wants to use two feature gates: DevicePlugins and ReadOnlyAPIDataVolumes.


      The user would have to log into the admin node and execute this command:

docker exec $(docker ps | grep velum-dashboard | awk '{print $1}') bundle exec rails runner "Pillar.apply(kubernetes_feature_gates: 'DevicePlugins=true,ReadOnlyAPIDataVolumes=true')"

      And then issue an orchestration. This can be done using the following command on the admin node:

docker exec $(docker ps | grep salt-master | awk '{print $1}') salt-run state.orchestrate orch.kubernetes

    • jordimassaguerpla
      11 months ago by jordimassaguerpla | Reply

      Following the instructions in the previous comment, I was able to enable the device plugin.

      However, when deploying the nvidia plugin, this didn't deploy a pod as expected, so I opened an issue upstream asking for more information.

    • jordimassaguerpla
      11 months ago by jordimassaguerpla | Reply

A namespaced RoleBinding would add host path mount privileges without granting excess privileges over all namespaces:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nvidia-device-plugin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nvidia-device-plugin-psp-privileged
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: suse:caasp:psp:privileged
subjects:
- kind: ServiceAccount
  name: nvidia-device-plugin
  namespace: kube-system
And then in your DaemonSet spec, set `serviceAccount: nvidia-device-plugin`. This creates the ServiceAccount+RoleBinding in the kube-system namespace - if you're deploying into another NS, swap out `kube-system` for the namespace you're using.
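The relevant part of the DaemonSet would look roughly like this (a fragment for illustration, not a complete manifest; labels and the container spec are placeholders):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nvidia-device-plugin
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: nvidia-device-plugin
  template:
    metadata:
      labels:
        name: nvidia-device-plugin
    spec:
      # Run the plugin under the ServiceAccount bound above
      serviceAccountName: nvidia-device-plugin
      containers:
      - name: nvidia-device-plugin
        image: nvidia/k8s-device-plugin   # placeholder image reference
```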

      Thanks to Ludovic and Kiall
