Answer:
The Neural Net is uninteresting because, *no matter* how large and complicated it is, it can be replaced by a single perceptron (node) for each output.
Training a single node is far faster than training 500!
A single perceptron here computes a linear combination of its weighted inputs. Without thresholds (activation functions), every node in the Net does the same, and a linear combination of linear combinations is still just a linear combination of the original inputs.
A perceptron's weights can be represented as a vector in an N-dimensional space, where N is the number of inputs. If you add two/three/a billion vectors together, you just get another vector.
So the entire Net only has the power to represent a vector for each of its outputs.
This means that it can be replaced by a single node for each output with no loss of representational ability.
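A minimal sketch of the collapse, using NumPy (the layer sizes here are arbitrary assumptions, just for illustration): multiplying the weight matrices of a three-layer linear net together gives a single matrix that produces identical outputs, i.e. one linear node per output.

```python
import numpy as np

# A "deep" net with three linear layers (no thresholds).
# Layer sizes are made up for the example: 5 inputs -> 8 -> 8 -> 3 outputs.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 5))
W2 = rng.standard_normal((8, 8))
W3 = rng.standard_normal((3, 8))

x = rng.standard_normal(5)

deep_output = W3 @ (W2 @ (W1 @ x))   # run the three-layer net
W_single = W3 @ W2 @ W1              # fold all layers into one weight matrix
single_output = W_single @ x         # one linear node per output

assert np.allclose(deep_output, single_output)
```

Each row of `W_single` is exactly the single replacement perceptron for one output.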
__________________
Actually, I am a rocket scientist.