Rodrigo Laguna

Reputation: 1850

Implementation difference between ReLU and LeakyReLU

I know these activations differ in their definition; however, when reading ReLU's documentation, I see that it takes a parameter alpha with 0 as the default, and says:

relu(x, alpha=0.0, max_value=None)

Rectified Linear Unit.

Arguments

x: Input tensor.
alpha: Slope of the negative part. Defaults to zero.
max_value: Maximum value for the output.

Returns

The (leaky) rectified linear unit activation: x if x > 0, alpha * x if x < 0. If max_value is defined, the result is truncated to this value.
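For reference, the documented behaviour boils down to a simple piecewise function. Here is a minimal NumPy sketch of it (relu_reference is an illustrative name, not the actual Keras implementation):

import numpy as np

def relu_reference(x, alpha=0.0, max_value=None):
    # x if x > 0, alpha * x if x < 0, optionally truncated at max_value
    out = np.where(x > 0, x, alpha * x)
    if max_value is not None:
        out = np.minimum(out, max_value)
    return out

print(relu_reference(np.array([-2.0, -0.5, 0.0, 3.0]), alpha=0.1))
# [-0.2  -0.05  0.    3.  ]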

There is also a LeakyReLU with similar documentation, but as part of another module (advanced activations).

Is there a difference between them, and how should I import relu so I can instantiate it with alpha?

from keras.layers.advanced_activations import LeakyReLU
..
..
model.add(Dense(512, 512, activation='linear')) 
model.add(LeakyReLU(alpha=.001)) # using LeakyReLU instead of plain ReLU

Note that when using LeakyReLU I'm getting the following error:

AttributeError: 'LeakyReLU' object has no attribute '__name__'

but when I use ReLU instead, it works:

model.add(Activation('relu')) # This works correctly but can't set alpha

To sum up: what are the differences, and how can I import ReLU so I can pass alpha to it?

Upvotes: 3

Views: 1969

Answers (1)

nuric

Reputation: 11225

As far as the implementation is concerned, they both call the same backend function, K.relu. The difference is that relu is an activation function, whereas LeakyReLU is a layer defined under keras.layers. So the difference is in how you use them: an activation function has to be wrapped in, or passed to, a layer such as Activation, whereas LeakyReLU is a layer that gives you a shortcut to that function with a configurable alpha value.
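To make the usage difference concrete, here is a minimal sketch; the layer sizes and alpha value are illustrative, and the advanced_activations import path follows the question (newer Keras versions expose LeakyReLU directly under keras.layers):

from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers.advanced_activations import LeakyReLU  # keras.layers.LeakyReLU in newer versions
from keras import backend as K

model = Sequential()

# Option 1: LeakyReLU as a layer, the shortcut around K.relu with an alpha value.
model.add(Dense(512, input_dim=512, activation='linear'))
model.add(LeakyReLU(alpha=0.001))

# Option 2: wrap the relu activation function yourself and pass alpha to K.relu.
model.add(Dense(512))
model.add(Activation(lambda x: K.relu(x, alpha=0.001)))

Both options end up computing the same K.relu(x, alpha=...); the layer form just saves you from wrapping the function yourself.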

Upvotes: 2
