Reputation: 116
Imagine we have a class "NeuralNetwork".
Each neuron consists of weights, a bias and an activation function:
class Neuron:
    def __init__(self, weights, bias, activation):
        self._weights = weights          # e.g. a NumPy array
        self._bias = bias
        self._activation = activation

    def activate(self, input):
        return self._activation.compute(self._weights.dot(input) + self._bias)


class Layer:
    def __init__(self, neurons):
        self._neurons = neurons

    def compute(self, input):
        output = []
        for neuron in self._neurons:
            output.append(neuron.activate(input))
        return output


class NeuralNetwork:
    def __init__(self, layers):
        self._layers = layers

    def compute(self, input):
        output = input
        for layer in self._layers:
            output = layer.compute(output)
        return output

    def train(self, dataset):
        # do some training
        # changes the neurons inside the layers
        pass
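For context, here is a usage sketch of these classes. The NumPy arrays and the Tanh wrapper are illustrative, not part of the design itself:

import numpy as np

class Tanh:
    # Matches the activation interface that Neuron.activate() expects.
    def compute(self, x):
        return np.tanh(x)

net = NeuralNetwork([
    Layer([
        Neuron(np.array([0.5, -0.2]), 0.1, Tanh()),
        Neuron(np.array([0.3, 0.8]), -0.4, Tanh()),
    ]),
    Layer([Neuron(np.array([1.0, 1.0]), 0.0, Tanh())]),
])
print(net.compute(np.array([1.0, 2.0])))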
NeuralNetwork has a method train() that changes its internal representation. But this requires access to the internals of the Layer objects: it needs to reach the individual neurons, breaking the Law of Demeter. For example:
layers[0].getNeuron(0).activate(input)
layers[0].getNeuron(0).changeBias(2)
The only solution I can think of is to add extra methods to Layer that delegate to the neurons. This would also allow me to use different implementations of a Layer interface: one that is more flexible and one that has better performance.
But this seems cumbersome. Isn't there a better way to model this?
Upvotes: 2
Views: 269
Reputation: 116
The first possible solution is simply to add some methods to Layer that delegate to the neurons.
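A minimal sketch of such delegation; the names MutableNeuron, change_bias and activate_neuron are invented here, since the original Neuron has no setters:

class MutableNeuron(Neuron):
    # Hypothetical setter so the layer has something to delegate to.
    def change_bias(self, bias):
        self._bias = bias

class DelegatingLayer(Layer):
    # train() talks to the layer; only the layer talks to its neurons.
    def change_bias(self, index, bias):
        self._neurons[index].change_bias(bias)

    def activate_neuron(self, index, input):
        return self._neurons[index].activate(input)

With this, train() can write self._layers[0].change_bias(0, 2) instead of chaining through layers[0].getNeuron(0).changeBias(2).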
A second solution, extracted from Amr Mostafa's comments, would be to send an event through the Layer object to the neurons.
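A minimal sketch of the event idea, assuming each neuron gains a handle() method; the names ScaleBias, EventNeuron, EventLayer and broadcast are illustrative:

class ScaleBias:
    # Example event: scale a neuron's bias by a constant factor.
    def __init__(self, factor):
        self.factor = factor

class EventNeuron(Neuron):
    def handle(self, event):
        # Each neuron decides for itself how to react to the event.
        if isinstance(event, ScaleBias):
            self._bias *= event.factor

class EventLayer(Layer):
    def broadcast(self, event):
        # The layer only forwards; callers never see the neurons.
        for neuron in self._neurons:
            neuron.handle(event)

# NeuralNetwork.train can then broadcast without touching neurons:
#     for layer in self._layers:
#         layer.broadcast(ScaleBias(0.5))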
Both solutions would allow us to use a different Layer implementation that consists of multidimensional arrays instead of Neuron objects (a performance consideration).
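To illustrate, a sketch of such an array-backed layer, assuming NumPy and a vectorised activation callable; the class name ArrayLayer is invented:

import numpy as np

class ArrayLayer:
    # Same compute() interface as Layer, but one weight matrix
    # instead of a list of Neuron objects.
    def __init__(self, weights, biases, activation):
        self._weights = np.asarray(weights)   # shape (n_neurons, n_inputs)
        self._biases = np.asarray(biases)     # shape (n_neurons,)
        self._activation = activation         # e.g. np.tanh

    def compute(self, input):
        # One matrix product replaces the per-neuron loop.
        return self._activation(self._weights.dot(input) + self._biases)

Since NeuralNetwork.compute only ever calls layer.compute(output), it works unchanged with either implementation.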
Upvotes: 1