faraa

Reputation: 585

AttributeError: 'Tensor' object has no attribute 'is_initialized'

I get this error when I try to fit the model. I tried a single-GPU version, but the error remains. Upgrading to TensorFlow 2 would solve it, but I need to stay on this version of TensorFlow.

This is the code for the model I used; it consists of several different layers.

# Layers come from tf.keras (TensorFlow 1.14), as the traceback below shows.
from tensorflow.keras.layers import (Input, Reshape, TimeDistributed, Conv2D,
                                     BatchNormalization, LeakyReLU, Dropout,
                                     Flatten, LSTM, RepeatVector,
                                     UpSampling2D, ZeroPadding2D, Dense)
from tensorflow.keras.models import Model


def hybrid_LSTM(depth=2, conv_size=16, dense_size=512, input_dim=(100, 5, 9), dropoutRate=0.2):

    """
    Autoencoder model builder composed of CNNs and an LSTM
    Args:
        depth (int): number of CNN blocks; each has 3 CNN layers with BN and a dropout
        conv_size (int): initial CNN filter size, doubled at each depth level
        dense_size (int): size of the latent vector and the number of filters of ConvLSTM2D
        input_dim (tuple): input dimension, should be (y_spatial, x_spatial, temporal)
        dropoutRate (float): dropout rate used in all nodes
    Return:
        keras model
    """
    """Setup"""
    temp_filter = conv_size
    X = Input(shape=input_dim, name = 'input')
    model_input = X
    # X = Permute((3,1,2))(X)  #move temporal axes to be first dim
    X = Reshape((100,5,9,1))(X) #reshape (,1) to be feature of each spatial

    """Encoder"""
    for i in range(depth):
        for j in range(3):
            if j == 0: #j==0 is first layer(j) of the CNN block(i); apply stride with double filter size
                X = TimeDistributed(Conv2D(2*temp_filter,(3,3),padding='same' ,strides=(2,2),data_format="channels_last"),name = 'encoder_'+str(i)+str(j)+'_timeConv2D')(X)
            else:
                X = TimeDistributed(Conv2D(temp_filter,(3,3), padding='same', data_format="channels_last"),name = 'encoder_'+str(i)+str(j)+'_timeConv2D')(X)
            X = BatchNormalization(name = 'encoder_'+str(i)+str(j)+'_BN')(X)
            X = LeakyReLU(alpha=0.1,name = 'encoder_'+str(i)+str(j)+'_relu')(X)
            X = Dropout(dropoutRate,name = 'encoder_'+str(i)+str(j)+'_drop')(X)
        temp_filter = int(temp_filter * 2)
    X = TimeDistributed(Flatten())(X)
    X = LSTM(dense_size, recurrent_dropout=dropoutRate ,return_sequences=False, implementation=2)(X)

    """Latent"""
    latent = X

    """Setup for decoder"""
    X = RepeatVector(100)(X)
    temp_filter = int(temp_filter/2)

    """Decoder"""
    X = LSTM(temp_filter*2*3, recurrent_dropout=dropoutRate ,return_sequences=True, implementation=2)(X)
    X = Reshape((100,2,3,temp_filter))(X)
    for i in range(depth):
        for j in range(3):
            if j == 0:
                X = TimeDistributed(UpSampling2D((2,2)),name = 'decoder_'+str(i)+str(j)+'_upsampling')(X)
                X = TimeDistributed(ZeroPadding2D(((1,0),(1,0))),name = 'decoder_'+str(i)+str(j)+'_padding')(X)
                X = TimeDistributed(Conv2D(temp_filter,(3,3),data_format="channels_last"),name = 'decoder_'+str(i)+str(j)+'_timeConv2D')(X)
            else:
                X = TimeDistributed(Conv2D(temp_filter,(3,3), padding='same', data_format="channels_last"),name = 'decoder_'+str(i)+str(j)+'_timeConv2D')(X)
            X = BatchNormalization(name = 'decoder_'+str(i)+str(j)+'_BN')(X)
            X = LeakyReLU(alpha=0.1,name = 'decoder_'+str(i)+str(j)+'_relu')(X)
            X = Dropout(dropoutRate,name = 'decoder_'+str(i)+str(j)+'_drop')(X)
        temp_filter = int(temp_filter / 2)
    X = TimeDistributed(Conv2D(1,(1,1), padding='same', data_format="channels_last"),name = 'decoder__timeConv2D')(X)
    X = Reshape((100,5,9))(X)
    # X = Permute((2,3,1))(X)
    decoded = X
    X = latent
    X = Dense(1,name = 'Dense10',activation='sigmoid')(X)

    return Model(inputs = model_input, outputs = [decoded,X])
 File "/Midgard/home/projects/Pre-trained-EEG-for-Deep-Learning-master/trainSafe_version1.py", line 167, in train_subtasks_all_tasks_keras
    parallel_model = multi_gpu_model(model, gpus=2)
  File "/Midgard/home/miniconda3/envs/erpenet5/lib/python3.7/site-packages/tensorflow/python/keras/utils/multi_gpu_utils.py", line 172, in multi_gpu_model
    available_devices = _get_available_devices()
  File "/Midgard/home/miniconda3/envs/erpenet5/lib/python3.7/site-packages/tensorflow/python/keras/utils/multi_gpu_utils.py", line 28, in _get_available_devices
    return [x.name for x in K.get_session().list_devices()]
  File "/Midgard/home/miniconda3/envs/erpenet5/lib/python3.7/site-packages/tensorflow/python/keras/backend.py", line 462, in get_session
    _initialize_variables(session)
  File "/Midgard/home/miniconda3/envs/erpenet5/lib/python3.7/site-packages/tensorflow/python/keras/backend.py", line 879, in _initialize_variables
    [variables_module.is_variable_initialized(v) for v in candidate_vars])
  File "/Midgard/home/miniconda3/envs/erpenet5/lib/python3.7/site-packages/tensorflow/python/keras/backend.py", line 879, in <listcomp>
    [variables_module.is_variable_initialized(v) for v in candidate_vars])
  File "/Midgard/home/miniconda3/envs/erpenet5/lib/python3.7/site-packages/tensorflow/python/util/tf_should_use.py", line 193, in wrapped
    return _add_should_use_warning(fn(*args, **kwargs))
  File "/Midgard/home/miniconda3/envs/erpenet5/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 3083, in is_variable_initialized
    return state_ops.is_variable_initialized(variable)
  File "/Midgard/home/miniconda3/envs/erpenet5/lib/python3.7/site-packages/tensorflow/python/ops/state_ops.py", line 133, in is_variable_initialized
    return ref.is_initialized(name=name)
AttributeError: 'Tensor' object has no attribute 'is_initialized'

This is how I build and compile the model (lr and the custom loss mean_squared_error_ignore_0 are defined elsewhere):

import tensorflow as tf
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.utils import multi_gpu_model

model = hybrid_LSTM(depth=2, conv_size=8, dense_size=512, input_dim=(100, 5, 9), dropoutRate=0.2)
model.compile(optimizer=SGD(learning_rate=lr, decay=1E-5),
                          loss=[mean_squared_error_ignore_0, 'binary_crossentropy'],
                          # metrics=['AUC','Recall', 'Precision','binary_accuracy','accuracy'],
                         metrics={'Dense10': ['AUC', 'Recall',
                                               tf.keras.metrics.SensitivityAtSpecificity(specificity=0.02),
                                               tf.keras.metrics.SpecificityAtSensitivity(sensitivity=0.02),
                                               'accuracy']},
                          loss_weights=[0.4, 0.6])
parallel_model = multi_gpu_model(model, gpus=2)
parallel_model.__setattr__('callback_model', model)
parallel_model.compile(optimizer=SGD(learning_rate=lr, decay=1E-5),
                          loss=[mean_squared_error_ignore_0, 'binary_crossentropy'],
                          # metrics=['AUC','Recall', 'Precision','binary_accuracy','accuracy'],
                         metrics={'Dense10': ['AUC', 'Recall',
                                               tf.keras.metrics.SensitivityAtSpecificity(specificity=0.02),
                                               tf.keras.metrics.SpecificityAtSensitivity(sensitivity=0.02),
                                               'accuracy']},
                          loss_weights=[0.4, 0.6])

My environment: tensorflow-gpu 1.14.0, cudatoolkit 10.1.243, cudnn 7.6.5.

Upvotes: 1

Views: 1863

Answers (1)

Yaoshiang

Reputation: 1941

This is likely an incompatibility between your versions of TF and Keras. Daniel Möller got you on the right path, but tf.keras is a TF2 thing and you are using TF1, so your solution will be different.

What you need to do is install a version of Keras that is compatible with TF 1.14. According to PyPI, TF 1.14 was released on June 18, 2019.

https://pypi.org/project/tensorflow/#history

You should do a grid search of the Keras versions just before and after that date.

https://pypi.org/project/keras/#history

I'd go with these Keras versions.

2.2.4, 2.2.5, 2.3.1, 2.4.1

Install these versions using, for example:

pip3 install --upgrade keras==2.2.4
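
With the standalone Keras package, multi_gpu_model comes from keras.utils instead of tf.keras. Here is a minimal sketch of that route, assuming Keras 2.2.4 alongside TF 1.14 and at least two visible GPUs; the tiny model is only a placeholder for where your hybrid_LSTM output would go, and note that Keras 2.2.x's SGD takes lr rather than learning_rate:

# Sketch: standalone Keras (e.g. 2.2.4) with TF 1.14.
# multi_gpu_model is imported from keras.utils, not tensorflow.python.keras.utils.
from keras.layers import Dense, Input
from keras.models import Model
from keras.optimizers import SGD
from keras.utils import multi_gpu_model

# Toy single-GPU model just to show the wrapping; in the question this would
# be the model returned by hybrid_LSTM(...), built with keras.layers.
inp = Input(shape=(10,))
out = Dense(1, activation='sigmoid')(inp)
model = Model(inputs=inp, outputs=out)

# Replicate the model across 2 GPUs (requires 2 visible GPUs).
parallel_model = multi_gpu_model(model, gpus=2)

# Keras 2.2.x uses lr=..., not learning_rate=...
parallel_model.compile(optimizer=SGD(lr=0.01, decay=1e-5),
                       loss='binary_crossentropy')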

I recently ran into a similar problem with a mismatch between TF 2.7/2.8 and Keras 2.7/2.8.
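
Whichever combination you land on, a quick sanity check is to print both versions from inside the training environment and confirm they come from the same era; a minimal sketch:

# Confirm the installed TensorFlow and standalone Keras versions line up
# (e.g. TF 1.14 with a Keras 2.2.x / 2.3.x release).
import tensorflow as tf
import keras

print("TensorFlow:", tf.__version__)
print("Keras:", keras.__version__)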

Upvotes: 2
