j35t3r

Reputation: 1533

AttributeError: 'ZeroOut' object has no attribute 'eval'

I am getting the error AttributeError: 'ZeroOut' object has no attribute 'eval' when I run the code below. I compiled the op as described in the standard TensorFlow tutorial (https://www.tensorflow.org/extend/adding_an_op):

TF_CFLAGS=( $(python -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_compile_flags()))') )
TF_LFLAGS=( $(python -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_link_flags()))') )
g++ -std=c++11 -shared zero_out.cc -o zero_out.so -fPIC ${TF_CFLAGS[@]} ${TF_LFLAGS[@]} -O2

main.py

import tensorflow as tf
import numpy as np

forward_module = tf.load_op_library('./zero_out.so')
with tf.Session(''):
    outgrad0 = np.arange(1, 7).reshape(3, 2).astype('int32')  # the op's input is declared as int32
    print(forward_module.zero_out(outgrad0).eval())  # raises the AttributeError from the title

zero_out.cc file:

#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/shape_inference.h"
#include "tensorflow/core/framework/op_kernel.h"
using namespace tensorflow;

REGISTER_OP("ZeroOut")
    .Input("to_zero: int32")
    .Output("zeroed: int32")
    .Output("indice: int32");

class ZeroOutOp : public OpKernel {
 public:
  explicit ZeroOutOp(OpKernelConstruction* context) : OpKernel(context) {
    }

  void Compute(OpKernelContext* context) override {
    const Tensor& input_tensor = context->input(0);
    auto input = input_tensor.flat<int32>();

    Tensor* output_tensor = NULL;
    Tensor* output_tensor_indice = NULL;
    TensorShape indice_shape;
    int dims[] = {1};
    TensorShapeUtils::MakeShape(dims, 1, &indice_shape);

    OP_REQUIRES_OK(context, context->allocate_output(0, input_tensor.shape(), &output_tensor));
    OP_REQUIRES_OK(context, context->allocate_output(1, indice_shape, &output_tensor_indice));
    auto output_flat = output_tensor->flat<int32>();
    auto indice_flat = output_tensor_indice->flat<int32>();

    // Fill the first output as in the tutorial: keep the first element and
    // zero out the rest; write a dummy value into the second output.
    const int N = input.size();
    for (int i = 1; i < N; i++) {
      output_flat(i) = 0;
    }
    if (N > 0) output_flat(0) = input(0);
    indice_flat(0) = 3;
  }
};

REGISTER_KERNEL_BUILDER(Name("ZeroOut").Device(DEVICE_CPU), ZeroOutOp);

As you can see, I added a second output because I want the op to return a tuple. But I keep getting this error and do not know how to get rid of it.

Compiling produces no errors, but when I execute the Python code the result cannot be evaluated: with eval() I get the AttributeError above, and without eval() it says that there are at least 2 outputs.
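To illustrate what I mean, here is a minimal sketch of what the call seems to return (assuming the op compiled as above): one result object holding two tensors instead of a single tensor.

import tensorflow as tf
import numpy as np

forward_module = tf.load_op_library('./zero_out.so')
with tf.Session(''):
    outgrad0 = np.arange(1, 7).reshape(3, 2).astype('int32')
    result = forward_module.zero_out(outgrad0)
    print(type(result))  # not a tf.Tensor, but an object holding the two outputs
    print(result[0])     # first output tensor ("zeroed")
    print(result[1])     # second output tensor ("indice")
    # result.eval()      # -> AttributeError: 'ZeroOut' object has no attribute 'eval'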


Edit:

Another approach would be to do it as in C++, with a reference parameter:

void addOne(int &y) // y is a reference variable
{
    y = y + 1;
}

The advantage is that I do not need any return value. How can I do that for a TensorFlow op?
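As far as I understand, graph ops do not modify their inputs in place, so the closest thing I can think of is keeping the value in a tf.Variable and updating it with tf.assign, roughly like this sketch:

import tensorflow as tf

# State lives in a Variable; assign() updates it, which is the closest
# analogue to passing by reference in C++.
y = tf.Variable(0, dtype=tf.int32)
add_one = tf.assign(y, y + 1)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(add_one)
    print(y.eval(session=sess))  # 1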

Upvotes: 0

Views: 1470

Answers (1)

j35t3r

Reputation: 1533

Problem:

out0, out1 = zero_out_module.zero_out(volume0, img0_in, img1_in, np.array([0]), 1).eval()  # ---> leads to the error in the title

Solution:

out0, out1 = zero_out_module.zero_out(volume0, img0_in, img1_in, np.array([0]), 1) 

# use eval() separately
print (out0.eval().shape)
print (out1.eval().shape)
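If both outputs are needed at the same time, they can also be fetched in a single run() call so the graph is executed only once, e.g. (assuming a default session is active as above):

# Fetch both output tensors in one pass instead of two separate eval() calls.
sess = tf.get_default_session()
out0_val, out1_val = sess.run([out0, out1])
print(out0_val.shape, out1_val.shape)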

Upvotes: 1
