A perceptron with a few twists
This is a simple perceptron implementation, accompanied by a tutorial notebook that tests the perceptron on linearly separable data.
What's different is the weight initialization: you can choose from three methods: random, zeros, or Xavier.
I was curious what effect different weight initialization methods have on the training process.
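As a rough sketch of the idea (the actual class and parameter names in this repo may differ), a minimal perceptron with selectable weight initialization could look like this:

```python
import numpy as np


class Perceptron:
    """Minimal perceptron sketch with three weight-init options.

    Names and defaults here are illustrative, not the repo's actual API.
    """

    def __init__(self, n_features, init="random", lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        if init == "zeros":
            self.w = np.zeros(n_features)
        elif init == "random":
            # small random weights to break symmetry
            self.w = rng.normal(0.0, 0.01, n_features)
        elif init == "xavier":
            # Xavier/Glorot uniform: scale by fan-in (fan-out is 1 here)
            limit = np.sqrt(6.0 / (n_features + 1))
            self.w = rng.uniform(-limit, limit, n_features)
        else:
            raise ValueError(f"unknown init method: {init!r}")
        self.b = 0.0
        self.lr = lr

    def predict(self, X):
        # sign of the linear score, with labels in {-1, +1}
        return np.where(X @ self.w + self.b >= 0, 1, -1)

    def fit(self, X, y, epochs=10):
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                # classic perceptron rule: update only on mistakes
                if yi * (xi @ self.w + self.b) <= 0:
                    self.w += self.lr * yi * xi
                    self.b += self.lr * yi
        return self
```

On linearly separable data all three initializations converge to a separating hyperplane; the interesting comparison is how many updates each one needs to get there.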