Recently I played with neural networks, replacing the matrix multiplication in a network's forward propagation with a circular convolution, computed via the FFT to speed things up. Because a convolution reuses a single weight vector in place of a full weight matrix, this architecture allows training networks with much larger layer sizes. Preliminary experiments show 93% accuracy on the MNIST dataset.
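A minimal sketch of the core idea, assuming the "certain way" of reusing weights is a circulant structure: the dense product `W @ x` (n² parameters) is replaced by a circular convolution with a single length-n weight vector `w` (n parameters), which the convolution theorem lets us compute in O(n log n) with the FFT. The function name and the NumPy-only setup are my own illustration, not the author's code.

```python
import numpy as np

def fft_circular_layer(x, w):
    """Convolutional stand-in for a dense layer's y = W @ x.

    One length-n weight vector w is reused as a circulant matrix,
    so the layer has O(n) parameters instead of O(n^2), and the
    product costs O(n log n) via the FFT convolution theorem.
    """
    # Pointwise product in the frequency domain == circular convolution.
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(w)))

# Sanity check against the explicit circulant matrix-vector product.
rng = np.random.default_rng(0)
n = 8
x, w = rng.standard_normal(n), rng.standard_normal(n)

# Circulant matrix C with C[i, j] = w[(i - j) mod n].
C = np.array([np.roll(w, j) for j in range(n)]).T
assert np.allclose(fft_circular_layer(x, w), C @ x)
```

The check confirms that the FFT path computes exactly the matrix-vector product a dense layer would, just with its weights tied along the circulant diagonals; a nonlinearity and bias would be applied to the result as usual.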