First of all, Happy New Year to you all!
We have a great year ahead, so let's start it with something interesting.
We've talked about how Convolutional Neural Networks (CNNs) learn increasingly complex features from the input, layer by layer, through their convolutional filters.
But what does a convolutional filter actually look like?
In today's post, let's try to visualize the convolutional filters of the LeNet model trained on the MNIST dataset (handwritten digit classification) - often considered the 'hello world' program of deep learning.
To visualize the filters, we can use the technique from the article "How convolutional neural networks see the world" by François Chollet (the author of the Keras library). The original article is available on the Keras Blog: https://blog.keras.io/how-convolutional-neural-networks-see-the-world.html.
The original code is designed to work with the VGG16 model. Let’s modify it a bit to work with our LeNet model.
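In essence, the technique performs gradient ascent in input space: we start from a random image and repeatedly nudge it in the direction that increases the activation of the filter we want to visualize. Before touching the real model, here is a toy NumPy sketch of that idea, using a hand-made linear "filter" (a fixed weight vector) so the gradient can be written down exactly; this is an illustration of the principle, not the actual Keras code from the article.

```python
import numpy as np

# Toy illustration of gradient ascent in input space - the idea behind
# the filter-visualization technique. The "filter activation" here is
# just the dot product of the input x with a fixed weight vector w,
# so its gradient with respect to x is simply w.
rng = np.random.default_rng(0)
w = rng.normal(size=64)          # stands in for a learned filter
x = rng.normal(size=64) * 0.1    # start from a small random "image"

step = 0.1
for _ in range(50):
    activation = float(w @ x)    # the quantity we want to maximize
    grad = w                     # d(activation)/dx for this toy setup
    x = x + step * grad          # gradient ascent step
```

After enough steps, x points almost exactly in the direction of w - that is, gradient ascent recovers the input pattern the "filter" responds to most strongly. With a real CNN the activation is nonlinear, so the gradient is computed by the framework's autodiff instead of by hand, but the loop is the same.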
We need to load the LeNet model with its weights. You can follow the code here to train the model yourself and get the weights. Let's name the weights file 'lenet_weights.hdf5'.
We'll start with the imports. (Note that scipy.misc.imsave was removed in SciPy 1.2; with a recent SciPy, imageio.imwrite is a drop-in replacement.)

from scipy.misc import imsave
import numpy as np
import time
from keras import backend as K
from keras.models import Sequential
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.layers.core import Activation
from keras.layers.core import Flatten
from keras.layers.core import Dense
from keras.optimizers import SGD
We need to build the LeNet model and load its weights, so let's define a function - build_lenet - for this.
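A sketch of what build_lenet might look like, assuming the classic LeNet layout (two conv/pool stages followed by two dense layers) with a channels-last MNIST input shape of (28, 28, 1). The filter counts (20 and 50), the dense-layer size (500), and the use of ReLU follow a common Keras LeNet tutorial recipe; your exact architecture must match whatever produced 'lenet_weights.hdf5', or load_weights will fail.

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation, Flatten, Dense

def build_lenet(weights_path=None, input_shape=(28, 28, 1), num_classes=10):
    """Build a LeNet-style model and optionally load trained weights."""
    model = Sequential()
    # First convolution block: 20 filters of size 5x5
    model.add(Conv2D(20, (5, 5), padding='same', input_shape=input_shape))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
    # Second convolution block: 50 filters of size 5x5
    model.add(Conv2D(50, (5, 5), padding='same'))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
    # Fully connected head
    model.add(Flatten())
    model.add(Dense(500))
    model.add(Activation('relu'))
    model.add(Dense(num_classes))
    model.add(Activation('softmax'))
    # Load the trained weights if a path is given
    if weights_path is not None:
        model.load_weights(weights_path)
    return model
```

Calling build_lenet('lenet_weights.hdf5') then gives us the trained model whose filters we want to visualize; calling it with no arguments gives the same architecture with freshly initialized weights.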


