Ujjwal

Reputation: 1638

Why is failing to allocate memory not an error in TensorFlow?

Sometimes one will see warnings like the following in TensorFlow:

W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.

What is the source of such warnings, and why do they not generate an error?

Upvotes: 1

Views: 684

Answers (1)

Peter Szoldan

Reputation: 4868

The TensorFlow code where this warning originates looks like this, starting at line 206:

void* BFCAllocator::AllocateRaw(size_t unused_alignment, size_t num_bytes,
                                const AllocationAttributes& allocation_attr) {
  if (allocation_attr.no_retry_on_failure) {
    // Return immediately upon the first failure if this is for allocating an
    // optional scratch space.
etc.

Based on the comment, the allocator function can be called with the no_retry_on_failure flag set (as in your case) when it is only trying to allocate an optional "scratch space". That scratch space could presumably make some operations faster, but TensorFlow can live without it if need be, so a failed allocation is reported as a warning rather than an error.
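To make the behaviour concrete, here is a minimal, self-contained C++ sketch of that pattern. This is not TensorFlow's actual allocator API; the function name AllocateOptionalScratch and the fallback path are assumptions used only to illustrate how a caller can treat a failed optional allocation as a warning and fall back to a slower code path:

#include <cstddef>
#include <cstdlib>
#include <iostream>

// Sketch only: a stand-in allocator that treats a failed *optional*
// allocation as a warning rather than an error.
void* AllocateOptionalScratch(std::size_t num_bytes) {
  void* ptr = std::malloc(num_bytes);  // stand-in for the GPU memory pool
  if (ptr == nullptr) {
    // Mirrors the spirit of the BFC allocator message: not a failure,
    // just a missed opportunity for a faster code path.
    std::cerr << "W: ran out of memory trying to allocate "
              << num_bytes << " bytes; continuing without scratch space\n";
  }
  return ptr;
}

int main() {
  // Deliberately huge request so the allocation is likely to fail.
  void* scratch = AllocateOptionalScratch(std::size_t{1} << 40);
  if (scratch == nullptr) {
    std::cout << "Running the slower, scratch-free code path instead.\n";
  } else {
    std::cout << "Scratch space available, using the faster path.\n";
    std::free(scratch);
  }
  return 0;
}

The design point is the same as in the BFC allocator: whether an exhausted pool counts as an error is decided by the caller (via the no-retry flag), not by the allocator itself.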

Upvotes: 3
