Exporting Apple CoreML models using Keras and Docker

15 June 2019
Yehya Abouelnaga
Technical University of Munich

Introduction

Apple devices can be used to host machine learning models. In this article, we are interested in the deployment of deep learning models on Apple devices. While there are many online references on the matter, there seem to be few resources on getting a correct CoreML model from an existing Keras model. Here, I present a Docker image that makes model exports simple and free of dependency conflicts.

Nvidia GPU on Docker

During my master's studies, I have been very interested in using Docker images to containerize machine learning and deep learning training and testing. Nvidia's container runtime bridges the GPU device interfaces from the host machine over to guest containers. In a previous post, I showed how to containerize existing machine learning projects and models for use as a research-ops tool (see Running Containerized OpenPose).

Here, I also present a Docker image in which Keras models can be trained using an Nvidia GPU. These models can later be deployed to Apple's CoreML in Swift; a minimal export sketch is given at the end of this post. For references on deploying CoreML models, the following links are a good start:

Keras & CoreML Docker Image

Understanding the Docker Image

The image contains the following packages:

  1. Nvidia GPU support
  2. An Anaconda Python distribution
  3. Common Python ML libraries (Scikit-learn, IPython, Cython, Jupyter)
  4. Deep learning libraries (Keras & TensorFlow GPU)
  5. Visualization libraries (Matplotlib, Visdom)
  6. Computer Vision libraries (OpenCV, Pillow)
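
As a quick check that these packages are actually usable, you can run a short Python sketch inside a running container (the build and run commands are shown below). This is only a sanity check, and it assumes that coremltools is installed in the image alongside Keras and TensorFlow:

# Minimal sanity check to run inside the container; the exact package set is an assumption.
import tensorflow as tf
import keras
import sklearn
import cv2
import PIL          # Pillow
import coremltools  # assumed to be installed in the image

print("TensorFlow:", tf.__version__)
print("Keras:", keras.__version__)
print("scikit-learn:", sklearn.__version__)
print("OpenCV:", cv2.__version__)
print("coremltools:", coremltools.__version__)

# With --runtime=nvidia, TensorFlow should report at least one visible GPU.
print("GPU available:", tf.test.is_gpu_available())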

In order to use the above Dockerfile, you first need to build it as follows:

docker build -t my-image-name .

If you built this Docker image on a machine that already has a GPU and driver correctly installed, you should be able to see your GPU (e.g. by running nvidia-smi) once you open a shell inside a container. To open an interactive shell in your image, run the following snippet:

docker run -v $(pwd):/code -p 8000:8000 --runtime=nvidia --network=host --rm -it my-image-name bash

The above command does the following:

  1. Runs bash on your built image my-image-name.
  2. --runtime=nvidia allows the host to pass the Nvidia GPU through to the guest container.
  3. --network=host allows your guest container to access the host network (e.g. for Visdom or Tensorboard to send training updates).
  4. --rm automatically removes the container (and its anonymous volumes) once it exits, which helps avoid consuming unneeded disk space.
  5. -it runs Docker interactively (i.e. attached to a terminal rather than as a background daemon).
  6. -v $(pwd):/code mounts the current directory (on the host) into the guest under the path /code.
  7. -p 8000:8000 binds port 8000 on the host to port 8000 in the guest. That is helpful if you need to reach Jupyter running inside your Docker guest image. Note that with --network=host the container already shares the host's network, so this explicit port mapping is effectively redundant.
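
Finally, with a trained Keras model at hand, the actual export to CoreML is short. Below is a minimal sketch using coremltools inside the container; the model path (model.h5), the input/output names, and the class labels are hypothetical placeholders rather than values from a specific model:

import coremltools

# 'model.h5' is a hypothetical Keras model saved earlier with model.save('model.h5').
coreml_model = coremltools.converters.keras.convert(
    'model.h5',
    input_names=['image'],
    image_input_names='image',            # treat this input as an image in CoreML
    output_names=['probabilities'],
    class_labels=['class_a', 'class_b'],  # placeholder labels
)

# Optional metadata shown when the model is opened in Xcode.
coreml_model.author = 'Your Name'
coreml_model.short_description = 'Keras model exported to CoreML'

coreml_model.save('MyModel.mlmodel')

The resulting MyModel.mlmodel file can then be added to an Xcode project, where Xcode generates a Swift class for loading and running the model.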