Panos Filianos

Reputation: 346

Keras error when predicting from multiple threads

I'm trying to create four threads (each one with its own graph and model) that will run concurrently and issue predictions in the same way.

My thread code is something like:

        # Assumed module-level imports (not shown in this excerpt):
        #   from tensorflow import Graph, Session
        #   from keras.models import Model, load_model
        #   from keras.layers import Input, Dense, LSTM, Bidirectional
        # thread_locker is assumed to be a threading.RLock, since it is re-acquired below.
        thread_locker.acquire()
        thread_graph = Graph()
        with thread_graph.as_default():
            thread_session = Session()
            with thread_session.as_default():
                # Model training: load a saved model if it exists, otherwise build a new one
                if not once_flag_raised:
                    try:
                        model = load_model('ten_step_forward_' + timeframe + '.h5')
                    except OSError:
                        input_layer = Input(shape=(X_train.shape[1], 17,))

                        lstm = Bidirectional(
                            LSTM(250),
                            merge_mode='concat')(input_layer)

                        pred = Dense(10)(lstm)
                        model = Model(inputs=input_layer, outputs=pred)
                        model.compile(optimizer='adam', loss='mean_squared_error')
                    once_flag_raised = True

                model.fit(X_train, y_train, epochs=10, batch_size=128)
                thread_locker.acquire()
                nn_info_dict['model'] = model
                nn_info_dict['sc'] = sc
                model.save('ten_step_forward_' + timeframe + '.h5')
                thread_locker.release()
        thread_locker.release()

        (....)
            # Later, in the prediction part, a new graph and session are created:
            thread_locker.acquire()
            thread_graph = Graph()
            with thread_graph.as_default():
                thread_session = Session()
                with thread_session.as_default():
                    pred_data = model.predict(X_pred)
            thread_locker.release()

on each thread.

I keep getting the following error (on all but one of the threads) when I reach the prediction part of the code:

ValueError: Tensor Tensor("dense_1/BiasAdd:0", shape=(?, 10), dtype=float32) is not an element of this graph.

My understanding is that one of the threads "claims" the TensorFlow backend and its default Graph and Session.

Is there any way to work around that?

Upvotes: 3

Views: 1735

Answers (1)

Panos Filianos

Reputation: 346

I have figured out what I was doing wrong. The overall approach was right, but I shouldn't have recreated the Graph and Session for prediction; each thread has to re-enter the same graph and session its model was created in. The bottom part of the code should simply be:

    thread_locker.acquire()
    # Re-enter the graph and session the model was created in during training.
    with thread_graph.as_default():
        with thread_session.as_default():
            pred_data = model.predict(X_pred)
    thread_locker.release()
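For completeness, here is a minimal self-contained sketch of the per-thread pattern, assuming TensorFlow 1.x with the standalone keras package. The worker function, the dummy random data, the results list and the small epoch/batch settings are placeholders for illustration only, not code from my actual project:

    import threading

    import numpy as np
    from tensorflow import Graph, Session
    from keras.models import Model
    from keras.layers import Input, Dense, LSTM, Bidirectional

    thread_locker = threading.RLock()  # re-entrant, so nested acquire() calls are safe

    def worker(X_train, y_train, X_pred, results, idx):
        # Each thread builds its model inside its OWN graph and session and keeps
        # references to both for the rest of its lifetime.
        thread_graph = Graph()
        with thread_graph.as_default():
            thread_session = Session()
            with thread_session.as_default():
                input_layer = Input(shape=(X_train.shape[1], 17))
                lstm = Bidirectional(LSTM(250), merge_mode='concat')(input_layer)
                pred = Dense(10)(lstm)
                model = Model(inputs=input_layer, outputs=pred)
                model.compile(optimizer='adam', loss='mean_squared_error')
                model.fit(X_train, y_train, epochs=1, batch_size=16, verbose=0)

        # Prediction re-enters the SAME graph and session instead of creating new ones.
        with thread_locker:
            with thread_graph.as_default():
                with thread_session.as_default():
                    results[idx] = model.predict(X_pred)

    results = [None] * 4
    threads = []
    for i in range(4):
        # Dummy data just to make the sketch runnable.
        X_train = np.random.rand(32, 5, 17)
        y_train = np.random.rand(32, 10)
        X_pred = np.random.rand(8, 5, 17)
        t = threading.Thread(target=worker, args=(X_train, y_train, X_pred, results, i))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()

The important detail is that the thread_graph and thread_session objects created at training time are the very same objects re-entered at prediction time.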

Upvotes: 4
