Model vgg16
Description
The vgg16 model specializes in image analysis and is available from Keras. It is loaded into TensorFlow (tf) via keras.applications:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Activation, Dense, Flatten, BatchNormalization, Conv2D, MaxPool2D

# load the pretrained vgg16 model
vgg16_model = tf.keras.applications.vgg16.VGG16()
ID:(13759, 0)
Form model
Description
Before training, the pretrained layers are copied into a new Sequential model. The original final layer (the 1000-class ImageNet output) is left out, since it will be replaced later with an output layer for our own categories.
# define model: copy all layers except the final 1000-class output layer
vgg_model = Sequential()
for layer in vgg16_model.layers[:-1]:
    vgg_model.add(layer)
ID:(13760, 0)
Show model
Description
A summary of the model can be displayed with summary():
# show summary
vgg_model.summary()
It should be noted that

- the MaxPooling2D layers successively halve the spatial size of the feature maps, from 224 to 112, 56, 28, 14 and finally 7
- the convolutional layers (Conv2D) compensate by increasing the number of feature channels, from 64 to 128, 256 and 512
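These shape and parameter counts can be verified with a little arithmetic: each 2×2 pooling halves the spatial size, and a Conv2D layer with k×k kernels has (k·k·c_in + 1)·c_out parameters (one bias per output channel). A quick sanity check in plain Python:

```python
# spatial size after each of the five 2x2 max-pooling stages
sizes = [224 // 2**n for n in range(1, 6)]
print(sizes)  # [112, 56, 28, 14, 7]

def conv2d_params(kernel, c_in, c_out):
    # one kernel of kernel*kernel*c_in weights plus one bias, per output channel
    return (kernel * kernel * c_in + 1) * c_out

print(conv2d_params(3, 3, 64))     # block1_conv1: 1792
print(conv2d_params(3, 64, 64))    # block1_conv2: 36928
print(conv2d_params(3, 512, 512))  # block5 convolutions: 2359808
```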
Model: 'sequential'
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0
_________________________________________________________________
flatten (Flatten)            (None, 25088)             0
_________________________________________________________________
fc1 (Dense)                  (None, 4096)              102764544
_________________________________________________________________
fc2 (Dense)                  (None, 4096)              16781312
=================================================================
Total params: 134,260,544
Trainable params: 134,260,544
Non-trainable params: 0
_________________________________________________________________
ID:(13761, 0)
Lock already trained layers
Description
To avoid retraining the original model's layers during the learning process, each layer's trainable attribute is set to False:
# freeze the layers to avoid retraining them
for layer in vgg_model.layers:
    layer.trainable = False
Freezing these layers preserves the features the original model has already learned, so only the new layers added below will be trained.
ID:(13762, 0)
Add final layer
Description
The model is adapted by adding a final dense layer whose size equals the number of categories to be predicted:
# add the final dense (softmax) layer
vgg_model.add(Dense(units=len(classes), activation='softmax'))
The units must be equal to the number of classes to be predicted.
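As a sanity check, a dense layer has (inputs + 1) × units parameters (a weight per input plus one bias, per unit). With the 4096 outputs of fc2 and 290 classes (the count used in the summary that follows), this reproduces the trainable parameter count shown there:

```python
# dense layer parameter count: one weight per input plus one bias, per unit
fc2_units = 4096
n_classes = 290  # the example class count from the summary below
params = (fc2_units + 1) * n_classes
print(params)  # 1188130
```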
ID:(13763, 0)
Show modified model
Description
To show how the model was modified, you can again use the summary function:
# show the structure of the modified model
vgg_model.summary()
At the end of the layer list you can see the new dense layer with the number of classes to be predicted:
Model: 'sequential_1'
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0
_________________________________________________________________
flatten (Flatten)            (None, 25088)             0
_________________________________________________________________
fc1 (Dense)                  (None, 4096)              102764544
_________________________________________________________________
fc2 (Dense)                  (None, 4096)              16781312
_________________________________________________________________
dense_1 (Dense)              (None, 290)               1188130
=================================================================
Total params: 135,448,674
Trainable params: 1,188,130
Non-trainable params: 134,260,544
_________________________________________________________________
ID:(13764, 0)
Build model
Description
To build the model, it must be compiled, specifying the optimizer, loss function and metrics:
from tensorflow.keras.optimizers import Adam

# compile the model
vgg_model.compile(optimizer=Adam(learning_rate=0.0001),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
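The categorical_crossentropy loss assumes one-hot encoded labels. As an illustration (not part of the original code), the loss for a single prediction can be computed by hand with NumPy:

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred):
    # mean over samples of -sum(y_true * log(y_pred))
    return float(np.mean(-np.sum(y_true * np.log(y_pred), axis=-1)))

# one sample, three classes, true class is the second one
y_true = np.array([[0.0, 1.0, 0.0]])
y_pred = np.array([[0.1, 0.8, 0.1]])
print(categorical_crossentropy(y_true, y_pred))  # -ln(0.8), about 0.223
```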
ID:(13765, 0)
Learning process
Description
The model is then trained with fit, using the prepared training and validation batches:

# train the modified vgg16 model
vgg_model.fit(x=train_batches, validation_data=validate_batches, epochs=5, verbose=2)
Epoch 1/5
373/373 - 336s - loss: 2.4201 - accuracy: 0.2790 - val_loss: 2.2614 - val_accuracy: 0.3023
Epoch 2/5
373/373 - 340s - loss: 1.7795 - accuracy: 0.4362 - val_loss: 2.0955 - val_accuracy: 0.3639
Epoch 3/5
373/373 - 339s - loss: 1.5116 - accuracy: 0.5241 - val_loss: 2.0217 - val_accuracy: 0.3868
Epoch 4/5
373/373 - 339s - loss: 1.3313 - accuracy: 0.5754 - val_loss: 1.9925 - val_accuracy: 0.3926
Epoch 5/5
373/373 - 341s - loss: 1.1973 - accuracy: 0.6296 - val_loss: 1.9473 - val_accuracy: 0.4169
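From the logged times, each epoch of 373 steps takes roughly 340 s; a quick calculation (using only the numbers above) gives the approximate cost per training step:

```python
# approximate seconds per training step from the logged epoch times
epoch_seconds = [336, 340, 339, 339, 341]
steps_per_epoch = 373
seconds_per_step = sum(epoch_seconds) / len(epoch_seconds) / steps_per_epoch
print(round(seconds_per_step, 2))  # 0.91
```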
ID:(13766, 0)