“TensorSpace is a neural network 3D visualization framework. — TensorSpace.org”
“Keras is a high-level neural network API. — keras.io ”
You may already know that TensorSpace can be used to visualize neural networks in 3D. If you have read my previous introduction to TensorSpace, you may have found the model preprocessing a little complicated.
So today I want to cover the TensorSpace model preprocessing in more detail: specifically, how to preprocess a deep learning model built with Keras so that it is TensorSpace compatible.
Fig. 1 — Use TensorSpace to visualize a LeNet built with Keras
To make a model built with Keras TensorSpace compatible, the model needs to satisfy two key points:
- it must expose the outputs of the intermediate layers we want to visualize, not only the final prediction;
- it must be converted into a format that TensorFlow.js (and therefore TensorSpace) can load.
In the following sections, I will use a LeNet as an example to walk through the workflow for preprocessing a Keras model.
To follow along, I will assume a proper Python environment is set up, with Python 3, Keras (on a TensorFlow backend), NumPy, and the tensorflowjs package, which provides the tfjs-converter.
You can find all the resources for this example in the TensorSpace preprocess Keras directory. The script used to train the sample LeNet model is keras_model.py. You can also use your own model, as long as it is valid and works properly on a sample input.
Before preprocessing, the pre-trained LeNet model is essentially a black box: it is fed a 28x28 image and returns a list of 10 floating-point numbers, each representing the probability of a digit from ‘0’ to ‘9’.
Fig. 2 — Classic pre-trained model with single output
We can load the model and run a prediction like this:
import numpy as np
from keras.models import load_model

# Load the pre-trained LeNet model
model = load_model("/PATH/TO/OUTPUT/keras_model.h5")

# Mock a random 28x28 input image and add a batch dimension
input_sample = np.ndarray(shape=(28, 28), buffer=np.random.rand(28, 28))
input_sample = np.expand_dims(input_sample, axis=0)

print(model.predict(input_sample))
We use a 28x28 NumPy array to mock a random input image. The sample output looks like this:
Fig. 3 — Single list prediction output from trained model
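As a quick sanity check (not part of the original script, just a sketch), we can read off the predicted digit by taking the argmax of the 10 probabilities:

# Interpret the 10 output probabilities as digit scores
prediction = model.predict(input_sample)[0]        # shape (10,)
predicted_digit = int(np.argmax(prediction))       # index of the highest probability
print("Predicted digit:", predicted_digit)
print("Sum of probabilities:", prediction.sum())   # should be close to 1.0 for a Softmax output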
If the model is well trained, it is already good enough to be used in an application: we just need to add an input panel for handwriting and a proper output function to show the predictions.
However, the model works like “magic”: it is hard for people to see how the predictions are produced, because there is a gap between the input and the output. Preprocessing is a way to expose some parts of that “magic” inside the gap.
A model preprocess for TensorSpace consists of three steps:
- find the intermediate layers we want to visualize and collect their names;
- build an encapsulated model that outputs the data from those intermediate layers as well as the final prediction;
- convert the encapsulated model into a format that TensorFlow.js can load.
These steps should be completed before applying the model to the TensorSpace framework; they are how we satisfy the basic requirements of TensorSpace. The data gathered from the pre-trained deep learning model is then used to render the TensorSpace visualization.
Calling model.summary() is an easy way to check the information of each layer.
Fig. 4 — Model summary and layer names
Here, for example, we collect the names of all the layers we want to visualize.
output_layer_names = [
    "Conv2D_1", "MaxPooling2D_1",
    "Conv2D_2", "MaxPooling2D_2",
    "Dense_1", "Dense_2", "Softmax"
]
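If you prefer not to type the names by hand, a rough sketch like the following lists the candidate layer names from the loaded model so you can pick the ones to visualize (the exact names depend on how your model was defined):

# List all layer names of the loaded model; pick the ones to visualize
all_layer_names = [layer.name for layer in model.layers]
print(all_layer_names)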
Next, we want to construct a new model based on the original model and the layer names we just collected.
from keras.models import Model

def generate_encapsulate_model_with_output_layer_names(model, output_layer_names):
    # Build a new model that shares the original input and weights,
    # but also exposes the outputs of the selected intermediate layers
    enc_model = Model(
        inputs=model.input,
        outputs=list(map(lambda oln: model.get_layer(oln).output, output_layer_names))
    )
    return enc_model

enc_model = generate_encapsulate_model_with_output_layer_names(model, output_layer_names)
enc_model.save("/PATH/TO/ENC_MODEL/enc_keras_model.h5")
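Before saving, a small sanity check (just a sketch) can confirm that the encapsulated model exposes one output per selected layer:

# One output tensor per selected layer
assert len(enc_model.outputs) == len(output_layer_names)
enc_model.summary()  # the summary should now list outputs for every selected layer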
Then we can use the new encapsulated model to check that the final prediction is still the same as before.
input_sample = np.ndarray(shape=(28, 28), buffer=np.random.rand(28, 28))
input_sample = np.expand_dims(input_sample, axis=0)

print(enc_model.predict(input_sample))
The encapsulated model returns a list of outputs, one for each intermediate layer in our output_layer_names list.
Fig. 5 — Multiple list outputs after preprocessing
The last output is a list of 10 numbers, which is the original output of the LeNet model.
Fig. 6 — Last list output is the same as the original inferences
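To make the check more explicit, here is a small verification sketch that pairs each output with its layer name and compares the final output with the original model’s prediction:

outputs = enc_model.predict(input_sample)

# Print the shape of each intermediate output
for name, output in zip(output_layer_names, outputs):
    print(name, output.shape)

# The last output should match the original model's prediction
print(np.allclose(outputs[-1], model.predict(input_sample)))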
Finally, we can use tfjs-converter to convert the encapsulated Keras model into a TensorFlow.js model, which can be used directly by TensorSpace.
tensorflowjs_converter \
    --input_format=keras \
    /PATH/TO/ENC_MODEL/enc_keras_model.h5 \
    /PATH/TO/OUTPUT_DIR/
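The converter writes the TensorFlow.js artifacts into the output directory, typically a model.json file plus one or more binary weight shards. A quick way to confirm this (just a sketch) is to list the directory:

import os

# The output directory should now contain model.json and the weight shard files
print(os.listdir("/PATH/TO/OUTPUT_DIR/"))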
After preprocessing, we should have a model that:
- outputs the data from the intermediate layers we selected, in addition to the final prediction;
- is stored in a format that TensorFlow.js, and therefore TensorSpace, can load.
The data from the intermediate layers can be collected by TensorSpace and used to render the visualization objects in the TensorSpace model.
Fig. 7 — TensorSpace compatible model with intermediate outputs
Now we can apply our preprocessed model to TensorSpace for visualization!
Preprocessing is an important step before applying the TensorSpace API: it gathers the intermediate data needed to render the 3D visualization.
With the preprocessed model, we can collect much more data from the deep learning model. The next step is to use and analyze the data from the intermediate layers wisely. Data visualization is one way to observe that data, and it may require some tools, for example TensorSpace.
For further information about TensorSpace.js, please check out the TensorSpace website (https://tensorspace.org) and the TensorSpace GitHub repository.