Frank

Reputation: 169

Why use multiple ReLU objects in Neural Net class definition?

Recently I have noticed that when defining neural nets, we often create a separate ReLU object for each layer. Why can't we reuse the same ReLU object wherever it is needed?

For example, instead of writing this:

def __init__(self):
    super().__init__()
    self.fc1    = nn.Linear(784, 500)
    self.ReLU_1 = nn.ReLU()
    self.fc2    = nn.Linear(500, 300)
    self.ReLU_2 = nn.ReLU()

def forward(self, x):
    x = self.fc1(x)
    x = self.ReLU_1(x)
    x = self.fc2(x)
    x = self.ReLU_2(x)
    return x

why can't we write this instead:

def __init__(self):
    super().__init__()
    self.fc1  = nn.Linear(784, 500)
    self.ReLU = nn.ReLU()
    self.fc2  = nn.Linear(500, 300)

def forward(self, x):
    x = self.fc1(x)
    x = self.ReLU(x)
    x = self.fc2(x)
    x = self.ReLU(x)
    return x

Is this something specific to PyTorch?

Upvotes: 5

Views: 969

Answers (1)

roman

Reputation: 1091

We can do so. nn.ReLU holds no state (it has no learnable parameters), so a single instance can safely be applied at every point where the activation is needed. The first variant is just for clarity: giving each activation its own named module makes the printed model structure easier to read.
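
As a minimal sketch (reusing the layer sizes from the question), a single shared nn.ReLU instance produces the same result as separate ones, and the functional torch.nn.functional.relu works without any module object at all:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1  = nn.Linear(784, 500)
        self.fc2  = nn.Linear(500, 300)
        self.ReLU = nn.ReLU()  # one stateless instance, reused below

    def forward(self, x):
        x = self.ReLU(self.fc1(x))  # same object applied twice;
        x = self.ReLU(self.fc2(x))  # no parameters, so nothing is shared
        return x

net = Net()
x = torch.randn(2, 784)
out = net(x)

# The functional form gives an identical result
ref = F.relu(net.fc2(F.relu(net.fc1(x))))
print(torch.equal(out, ref))  # True

Note the same reasoning does not extend to modules that do carry parameters or state (e.g. nn.Linear, nn.BatchNorm1d): reusing one of those would tie the layers' weights and statistics together.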

Upvotes: 3
