Victor Meunier

Engineering student

How to do multithreading with Keras

You might have experienced the following error:

                    File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/", line 267, in __init__
                    fetch, allow_tensor=True, allow_operation=True))
                    File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/", line 2405, in as_graph_element
                    return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
                    File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/", line 2489, in _as_graph_element_locked
                    raise ValueError("Operation %s is not an element of this graph." % obj)

while trying to use Keras from multiple threads, or when using a TensorFlow model and a Keras model at the same time. This happens because Keras is not thread-safe: it loads your model into the default session, which may already be in use, either by your TF model or by another thread running its own Keras model.

Don't worry though, the solution is pretty simple and requires only a few changes. Let's dive into the code!

Loading Keras for multithreading

Let's say you created your architecture, trained your model and saved it as model.h5. You would usually just load the model like this:

                      from keras.models import load_model

                      # Returns a compiled model
                      model = load_model('model.h5')

This loads your model into TensorFlow's default graph and session. If you try to do that from multiple threads, you'll get the error above.

To resolve that, you want to load the model inside each thread, with its own graph and session:

                      from tensorflow import Graph, Session
                      import tensorflow as tf
                      import keras

                      def load_model_for_thread(path_to_model):
                          # Give this thread its own graph and session,
                          # instead of sharing the process-wide defaults
                          thread_graph = Graph()
                          with thread_graph.as_default():
                              thread_session = Session()
                              with thread_session.as_default():
                                  model = keras.models.load_model(path_to_model)
                                  graph = tf.get_default_graph()

                          return model, graph, thread_session

Each thread will have its own graph, its own session and its own model. Then, when you want to predict, do:

                      with graph.as_default():
                          with thread_session.as_default():
                              prediction = model.predict(data)
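To see why this works, here is a framework-free sketch of the same per-thread pattern (no TensorFlow needed to run it). The names `FakeGraph`, `FakeSession` and `load_model_for_thread` are stand-ins for illustration, not Keras/TensorFlow API: the point is that each worker builds its own graph and session instead of touching a shared default.

```python
import threading

class FakeGraph:
    """Stands in for tensorflow.Graph."""

class FakeSession:
    """Stands in for tensorflow.Session."""
    def __init__(self, graph):
        self.graph = graph

def load_model_for_thread():
    # Real version: Graph() -> Session() -> keras.models.load_model(...)
    graph = FakeGraph()
    session = FakeSession(graph)
    model = object()  # placeholder for the loaded Keras model
    return model, graph, session

results = []
lock = threading.Lock()

def worker():
    # Each thread loads its own model/graph/session triple
    model, graph, session = load_model_for_thread()
    with lock:
        results.append((graph, session))

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every thread ended up with its own, distinct graph and session,
# so no thread can clash with another's default graph
assert len({g for g, _ in results}) == 4
assert len({s for _, s in results}) == 4
```

Because nothing is shared, no thread ever asks the default session for an operation that lives in another thread's graph, which is exactly what triggers the "Operation ... is not an element of this graph" error.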

That's it! You know how to load your model to be used in multiple threads.