features extracted from the previous layers. L3 is trying to map these inputs to ŷ to make it as close as possible to the label y. While the third layer is doing that, the network is adapting the values of the parameters from previous layers. As the parameters (w, b) are changing in layer 1, the activation values in the second layer are changing, too. So from the perspective of the third hidden layer, the values of the second hidden layer are changing all the time: the MLP is suffering from the problem of covariate shift. Batch norm reduces the degree of change in the distribution of the hidden unit values, causing these values to become more stable so that the later layers of the neural network have firmer ground to stand on.

NOTE It is important to realize that batch normalization does not cancel or reduce the change in the hidden unit values. What it does is ensure that the distribution of that change remains the same: even if the exact values of the units change, the mean and variance do not change.

4.9.3 How does batch normalization work?
In their 2015 paper "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" (https://arxiv.org/abs/1502.03167), Sergey Ioffe and Christian Szegedy proposed the BN technique to reduce covariate shift. Batch normalization adds an operation in the neural network just before the activation function of each layer to do the following:

1. Zero-center the inputs.
2. Normalize the zero-centered inputs.
3. Scale and shift the results.

This operation lets the model learn the optimal scale and mean of the inputs for each layer.
How the math works in batch normalization

1. To zero-center the inputs, the algorithm needs to calculate the input mean and standard deviation (the input here means the current mini-batch, hence the term batch normalization):

   μ_B ← (1/m) Σᵢ₌₁ᵐ xᵢ          (mini-batch mean)

   σ_B² ← (1/m) Σᵢ₌₁ᵐ (xᵢ − μ_B)²   (mini-batch variance)

   where m is the number of instances in the mini-batch, μ_B is the mean, and σ_B is the standard deviation over the current mini-batch.

2. Normalize the input:

   x̂ᵢ ← (xᵢ − μ_B) / √(σ_B² + ε)
   where x̂ᵢ is the zero-centered and normalized input. Note that there is a variable here that we added (ε). This is a tiny number (typically 10⁻⁵) to avoid division by zero if σ is zero in some estimates.

3. Scale and shift the results. We multiply the normalized output by a variable γ to scale it and add a variable β to shift it:

   yᵢ ← γx̂ᵢ + β

   where yᵢ is the output of the BN operation, scaled and shifted.

Notice that BN introduces two new learnable parameters to the network: γ and β. So our optimization algorithm will update the parameters of γ and β just like it updates weights and biases. In practice, this means you may find that training is rather slow at first, while GD is searching for the optimal scales and offsets for each layer, but it accelerates once it's found reasonably good values.

4.9.4 Batch normalization implementation in Keras
It is important to know how batch normalization works so you can get a better understanding of what your code is doing. But when using BN in your network, you don't have to implement all these details yourself. Implementing BN is often done by adding one line of code, using any DL framework. In Keras, you add batch normalization to your neural network by adding a BN layer after the hidden layer, to normalize its results before they are fed to the next layer.

The following code snippet shows how to add a BN layer when building your neural network:

from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.layers.normalization import BatchNormalization   # imports the BatchNormalization layer

model = Sequential()                                # initiates the model
model.add(Dense(hidden_units, activation='relu'))   # adds the first hidden layer
model.add(BatchNormalization())                     # batch norm layer normalizes the results of layer 1
model.add(Dropout(0.5))
model.add(Dense(units, activation='relu'))          # adds the second hidden layer
model.add(BatchNormalization())                     # batch norm layer normalizes the results of layer 2
model.add(Dense(2, activation='softmax'))           # output layer

NOTE If you are adding dropout to your network, it is preferable to add it after the batch norm layer, because you don't want the nodes that are randomly turned off to miss the normalization step.
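The three numbered steps above can be sketched directly in NumPy. This is a minimal forward-pass illustration only, not the Keras implementation (the real layer also tracks moving averages of the mean and variance for use at inference time):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Batch norm forward pass over a mini-batch x of shape (m, features)."""
    mu = x.mean(axis=0)                    # mini-batch mean
    var = x.var(axis=0)                    # mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero-center and normalize
    return gamma * x_hat + beta            # scale and shift

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(64, 10))  # one mini-batch of activations
y = batch_norm_forward(x, gamma=np.ones(10), beta=np.zeros(10))

print(y.mean(axis=0))  # approximately 0 for every feature
print(y.std(axis=0))   # approximately 1 for every feature
```

With γ = 1 and β = 0, the output is simply the normalized activations; during training, the network learns whatever γ and β suit each layer best.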
4.9.5 Batch normalization recap
The intuition that I hope you'll take away from this discussion is that BN applies the normalization process not just to the input layer, but also to the values in the hidden layers in a neural network. This weakens the coupling of the learning process between earlier and later layers, allowing each layer of the network to learn more independently.

From the perspective of the later layers in the network, the earlier layers don't get to shift around as much, because they are constrained to have the same mean and variance. This makes the job of learning easier in the later layers. The way this happens is by ensuring that the hidden units have a standardized distribution (mean and variance) controlled by two explicit parameters, γ and β, which the learning algorithm sets during training.

4.10 Project: Achieve high accuracy on image classification
In this project, we will revisit the CIFAR-10 classification project from chapter 3 and apply some of the improvement techniques from this chapter to increase the accuracy from ~65% to ~90%. You can follow along with this example by visiting the book's website, www.manning.com/books/deep-learning-for-vision-systems or www.computervisionbook.com, to see the code notebook.

We will accomplish the project by following these steps:

1. Import the dependencies.
2. Get the data ready for training:
   – Download the data from the Keras library.
   – Split it into train, validate, and test datasets.
   – Normalize the data.
   – One-hot encode the labels.
3. Build the model architecture.
   In addition to regular convolutional and pooling layers, as in chapter 3, we add the following layers to our architecture:
   – A deeper neural network to increase learning capacity
   – Dropout layers
   – L2 regularization on our convolutional layers
   – Batch normalization layers
4. Train the model.
5. Evaluate the model.
6. Plot the learning curve.

Let's see how this is implemented.

STEP 1: IMPORT DEPENDENCIES
Here's the Keras code to import the needed dependencies:

import keras                                              # the Keras library
from keras.datasets import cifar10                        # to download the dataset
from keras.preprocessing.image import ImageDataGenerator  # to preprocess and augment images
from keras.models import Sequential
from keras.utils import np_utils
from keras.layers import Dense, Activation, Flatten, Dropout, BatchNormalization, Conv2D, MaxPooling2D
from keras.callbacks import ModelCheckpoint
from keras import regularizers, optimizers
import numpy as np               # numpy for math operations
from matplotlib import pyplot    # matplotlib to visualize results

STEP 2: GET THE DATA READY FOR TRAINING
Keras has some datasets available for us to download and experiment with. These datasets are usually preprocessed and almost ready to be fed to the neural network. In this project, we use the CIFAR-10 dataset, which consists of 50,000 32 × 32 color training images, labeled over 10 categories, and 10,000 test images. Check the Keras documentation for more datasets like CIFAR-100, MNIST, and Fashion-MNIST.

Keras provides the CIFAR-10 dataset already split into training and testing sets. We will load them and then split the training dataset into 45,000 images for training and 5,000 images for validation, as explained in this chapter:

(x_train, y_train), (x_test, y_test) = cifar10.load_data()   # downloads and splits the data
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
(x_train, x_valid) = x_train[5000:], x_train[:5000]   # breaks the training set into training and validation sets
(y_train, y_valid) = y_train[5000:], y_train[:5000]

Let's print the shape of x_train, x_valid, and x_test:

print('x_train =', x_train.shape)
print('x_valid =', x_valid.shape)
print('x_test =', x_test.shape)

>> x_train = (45000, 32, 32, 3)
>> x_valid = (5000, 32, 32, 3)
>> x_test = (10000, 32, 32, 3)

The format of the shape tuple is as follows: (number of instances, width, height, channels).

Normalize the data
Normalizing the pixel values of our images is done by subtracting the mean from each pixel and then dividing the result by the standard deviation:

mean = np.mean(x_train, axis=(0,1,2,3))
std = np.std(x_train, axis=(0,1,2,3))
x_train = (x_train-mean)/(std+1e-7)
x_valid = (x_valid-mean)/(std+1e-7)
x_test = (x_test-mean)/(std+1e-7)
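As a quick sanity check, the same mean/std normalization applied to stand-in data yields roughly zero mean and unit standard deviation (this uses illustrative random data, not CIFAR-10):

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for x_train: 100 fake 32 x 32 RGB images with pixel values in [0, 255)
fake_train = rng.uniform(0, 255, size=(100, 32, 32, 3)).astype('float32')

mean = np.mean(fake_train, axis=(0, 1, 2, 3))  # one scalar over all pixels
std = np.std(fake_train, axis=(0, 1, 2, 3))
normalized = (fake_train - mean) / (std + 1e-7)

print(float(normalized.mean()))  # approximately 0
print(float(normalized.std()))   # approximately 1
```

Note that axis=(0,1,2,3) on a 4-D array reduces over every axis, so one mean and one std are shared by all three color channels; a common variant computes per-channel statistics with axis=(0, 1, 2) instead.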
One-hot encode the labels
To one-hot encode the labels in the train, validation, and test datasets, we use the to_categorical function in Keras:

num_classes = 10
y_train = np_utils.to_categorical(y_train, num_classes)
y_valid = np_utils.to_categorical(y_valid, num_classes)
y_test = np_utils.to_categorical(y_test, num_classes)

Data augmentation
For augmentation techniques, we will arbitrarily go with the following transformations: rotation, width and height shift, and horizontal flip. When you are working on problems, view the images that the network missed or provided poor detections for, and try to understand why it is not performing well on them. Then create your hypothesis and experiment with it. For example, if the missed images were of shapes that are rotated, you might want to try the rotation augmentation. You would apply that, experiment, evaluate, and repeat. You will come to your decisions purely from analyzing your data and understanding the network performance:

datagen = ImageDataGenerator(     # data augmentation
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
    vertical_flip=False
)
datagen.fit(x_train)   # fits the generator's data-dependent statistics on the training set

STEP 3: BUILD THE MODEL ARCHITECTURE
In chapter 3, we built an architecture inspired by AlexNet (3 CONV + 2 FC). In this project, we will build a deeper network for increased learning capacity (6 CONV + 1 FC).

The network has the following configuration:

- Instead of adding a pooling layer after each convolutional layer, we will add one after every two convolutional layers. This idea was inspired by VGGNet, a popular neural network architecture developed by the Visual Geometry Group at the University of Oxford. VGGNet will be explained in chapter 5.
- Inspired by VGGNet, we will set the kernel_size of our convolutional layers to 3 × 3 and the pool_size of the pooling layers to 2 × 2.
- We will add a dropout layer after every other convolutional layer, with p ranging from 0.2 to 0.4.
- A batch normalization layer will be added after each convolutional layer to normalize the input for the following layer.
- In Keras, L2 regularization is added to the convolutional layer code.

Here's the code:
base_hidden_units = 32   # number of hidden units; declared once here and reused in the
                         # convolutional layers to make it easier to update from one place
weight_decay = 1e-4      # L2 regularization hyperparameter (lambda)

model = Sequential()     # creates a sequential model (a linear stack of layers)

# CONV1. We define input_shape here because this is the first convolutional
# layer; we don't need to do that for the remaining layers.
model.add(Conv2D(base_hidden_units, kernel_size=3, padding='same',
                 kernel_regularizer=regularizers.l2(weight_decay),  # adds L2 regularization
                 input_shape=x_train.shape[1:]))
model.add(Activation('relu'))     # ReLU activation function for all hidden layers
model.add(BatchNormalization())   # adds a batch normalization layer

# CONV2
model.add(Conv2D(base_hidden_units, kernel_size=3, padding='same',
                 kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())

# POOL + Dropout
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.2))           # dropout layer with 20% probability

# CONV3: number of hidden units = 64
model.add(Conv2D(base_hidden_units * 2, kernel_size=3, padding='same',
                 kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())

# CONV4
model.add(Conv2D(base_hidden_units * 2, kernel_size=3, padding='same',
                 kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())

# POOL + Dropout
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.3))

# CONV5: number of hidden units = 128
model.add(Conv2D(base_hidden_units * 4, kernel_size=3, padding='same',
                 kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())

# CONV6
model.add(Conv2D(base_hidden_units * 4, kernel_size=3, padding='same',
                 kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())

# POOL + Dropout
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.4))
# FC7
model.add(Flatten())                         # flattens the feature maps into a 1D feature
                                             # vector (explained in chapter 3)
model.add(Dense(10, activation='softmax'))   # 10 units because the dataset has 10 class labels;
                                             # softmax is used for the output layer (chapter 2)

model.summary()                              # prints the model summary

The model summary is shown in figure 4.31.

[Figure 4.31: Model summary — the layer-by-layer table printed by model.summary(), listing each Conv2D, Activation, BatchNormalization, MaxPooling2D, and Dropout layer with its output shape and parameter count, from conv2d_1 with output shape (None, 32, 32, 32) and 896 parameters down to dense_1 with output shape (None, 10) and 20,490 parameters.]
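The parameter counts in the summary can be verified by hand with the standard formulas: each Conv2D filter has kernel × kernel × input channels weights plus one bias, and a Dense layer has (inputs + 1) × units parameters. A quick check (an illustration, not code from the book):

```python
def conv2d_params(kernel, in_channels, filters):
    # each filter has kernel*kernel*in_channels weights plus one bias
    return (kernel * kernel * in_channels + 1) * filters

def dense_params(in_units, out_units):
    # every output unit connects to every input, plus one bias per output
    return (in_units + 1) * out_units

# CONV1: 3x3 kernels over the 3-channel input image, 32 filters
print(conv2d_params(3, 3, 32))        # 896
# Each BatchNormalization layer stores gamma, beta, and the moving mean
# and variance: 4 parameters per channel
print(4 * 32)                         # 128
# FC7: the 4 x 4 x 128 feature maps flatten to 2048 features, mapped to 10 classes
print(dense_params(4 * 4 * 128, 10))  # 20490
```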
STEP 4: TRAIN THE MODEL
Before we jump into the training code, let's discuss the strategy behind some of the hyperparameter settings:

- batch_size —This is the mini-batch hyperparameter that we covered in this chapter. The larger the batch_size, the faster training runs, up to the limits of your memory. You can start with a mini-batch of 64 and double this value to speed up training. I tried 256 on my machine and got the following error, which means my machine was running out of memory, so I lowered it back to 128:

  Resource exhausted: OOM when allocating tensor with shape[256,128,4,4]

- epochs —I started with 50 training iterations and found that the network was still improving. So I kept adding more epochs and observing the training results. In this project, I was able to achieve >90% accuracy after 125 epochs. As you will see soon, there is still room for improvement if you let it train longer.

- Optimizer —I used the Adam optimizer. See section 4.7 to learn more about optimization algorithms.

NOTE It is important to note that I'm using a GPU for this experiment. The training took around 3 hours. It is recommended that you use your own GPU or any cloud computing service to get the best results.
If you don't have access to a GPU, I recommend that you try a smaller number of epochs or plan to leave your machine training overnight or even for a couple of days, depending on your CPU specifications.

Let's see the training code:

batch_size = 128   # mini-batch size
epochs = 125       # number of training iterations

# Saves the best weights to this file, only when validation loss improves
checkpointer = ModelCheckpoint(filepath='model.125epochs.hdf5', verbose=1,
                               save_best_only=True)

# Adam optimizer with a learning rate of 0.0001
optimizer = keras.optimizers.Adam(lr=0.0001, decay=1e-6)

# Cross-entropy loss function (explained in chapter 2)
model.compile(loss='categorical_crossentropy', optimizer=optimizer,
              metrics=['accuracy'])

# datagen.flow performs real-time data augmentation on the CPU in parallel
# with training the model on the GPU. The checkpointer callback saves the
# model weights; you can add other callbacks, such as an early stopping function.
history = model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size),
                              callbacks=[checkpointer],
                              steps_per_epoch=x_train.shape[0] // batch_size,
                              epochs=epochs, verbose=2,
                              validation_data=(x_valid, y_valid))
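A quick check of the steps_per_epoch arithmetic, using this project's numbers: with 45,000 training images and a mini-batch of 128, integer division gives 351 weight updates per epoch:

```python
x_train_instances = 45000  # training images after the 5,000-image validation split
batch_size = 128

steps_per_epoch = x_train_instances // batch_size
print(steps_per_epoch)  # 351
```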
When you run this code, you will see the verbose output of the network training for each epoch. Keep your eyes on the loss and val_loss values to analyze the network and diagnose bottlenecks. Figure 4.32 shows the verbose output of epochs 121 to 125:

Epoch 121/125
Epoch 00120: val_loss did not improve
30s - loss: 0.4471 - acc: 0.8741 - val_loss: 0.4124 - val_acc: 0.8886
Epoch 122/125
Epoch 00121: val_loss improved from 0.40342 to 0.40327, saving model to model.125epochs.hdf5
31s - loss: 0.4510 - acc: 0.8719 - val_loss: 0.4033 - val_acc: 0.8934
Epoch 123/125
Epoch 00122: val_loss improved from 0.40327 to 0.40112, saving model to model.125epochs.hdf5
30s - loss: 0.4497 - acc: 0.8735 - val_loss: 0.4031 - val_acc: 0.8959
Epoch 124/125
Epoch 00122: val_loss did not improve
30s - loss: 0.4497 - acc: 0.8725 - val_loss: 0.4162 - val_acc: 0.8894
Epoch 125/125
Epoch 00122: val_loss did not improve
30s - loss: 0.4471 - acc: 0.8734 - val_loss: 0.4025 - val_acc: 0.8959

Figure 4.32 Verbose output of epochs 121 to 125

STEP 5: EVALUATE THE MODEL
To evaluate the model, we use a Keras function called evaluate and print the results:

scores = model.evaluate(x_test, y_test, batch_size=128, verbose=1)
print('\nTest result: %.3f loss: %.3f' % (scores[1]*100, scores[0]))

>> Test result: 90.260 loss: 0.398

Plot learning curves
Plot the learning curves to analyze the training performance and diagnose overfitting and underfitting (figure 4.33):

pyplot.plot(history.history['acc'], label='train')
pyplot.plot(history.history['val_acc'], label='test')
pyplot.legend()
pyplot.show()

[Figure 4.33: Learning curves — train and test accuracy plotted per epoch, climbing from roughly 0.4 early in training to about 0.9 by epoch 120.]
Further improvements
Accuracy of 90% is pretty good, but you can still improve further. Here are some ideas you can experiment with:

- More training epochs —Notice that the network was improving until epoch 123. You can increase the number of epochs to 150 or 200 and let the network train longer.
- Deeper network —Try adding more layers to increase the model complexity, which increases the learning capacity.
- Lower learning rate —Decrease the lr (you should train longer if you do so).
- Different CNN architecture —Try something like Inception or ResNet (explained in detail in the next chapter). You can get up to 95% accuracy with the ResNet neural network after 200 epochs of training.
- Transfer learning —In chapter 6, we will explore the technique of using a pretrained network on your dataset to get higher results with a fraction of the learning time.

Summary
- The general rule of thumb is that the deeper your network is, the better it learns.
- At the time of writing, ReLU performs best in hidden layers, and softmax performs best in the output layer.
- Stochastic gradient descent usually succeeds in finding a minimum. But if you need fast convergence and are training a complex neural network, it's safe to go with Adam.
- Usually, the more you train, the better.
- L2 regularization and dropout work well together to reduce network complexity and overfitting.
Part 2
Image classification and detection

Rapid advances in AI research are enabling new applications to be built every day and across different industries that weren't possible just a few years ago. By learning these tools, you will be empowered to invent new products and applications yourself. Even if you end up not working on computer vision per se, many concepts here are useful for deep learning algorithms and architectures.

After working our way through the foundations of deep learning in part 1, it's time to build a machine learning project to see what you've learned. Here, we'll cover strategies to quickly and efficiently get deep learning systems working, analyze results, and improve network performance, specifically by digging into advanced convolutional neural networks, transfer learning, and object detection.
Advanced CNN architectures

This chapter covers
- Working with CNN design patterns
- Understanding the LeNet, AlexNet, VGGNet, Inception, and ResNet network architectures

Welcome to part 2 of this book. Part 1 presented the foundations of neural network architectures and covered multilayer perceptrons (MLPs) and convolutional neural networks (CNNs). We wrapped up part 1 with strategies to structure your deep neural network projects and tune their hyperparameters to improve network performance. In part 2, we will build on this foundation to develop computer vision (CV) systems that solve complex image classification and object detection problems.

In chapters 3 and 4, we talked about the main components of CNNs and setting up hyperparameters such as the number of hidden layers, learning rate, optimizer, and so on. We also talked about other techniques to improve network performance, like regularization, augmentation, and dropout. In this chapter, you will see how these elements come together to build a convolutional network. I will walk you through five of the most popular CNNs that were cutting edge in their time, and you will see how their designers thought about building, training, and improving networks. We will start with LeNet, developed in 1998, which performed fairly well at recognizing handwritten characters. You will see how CNN architectures have
evolved since then to deeper CNNs like AlexNet and VGGNet, and beyond to more advanced and super-deep networks like Inception and ResNet, developed in 2014 and 2015, respectively.

For each CNN architecture, you will learn the following:

- Novel features —We will explore the novel features that distinguish these networks from others and what specific problems their creators were trying to solve.
- Network architecture —We will cover the architecture and components of each network and see how they come together to form the end-to-end network.
- Network code implementation —We will walk step-by-step through the network implementations using the Keras deep learning (DL) library. The goal of this section is for you to learn how to read research papers and implement new architectures as the need arises.
- Setting up learning hyperparameters —After you implement a network architecture, you need to set up the hyperparameters of the learning algorithms that you learned in chapter 4 (optimizer, learning rate, weight decay, and so on). We will implement the learning hyperparameters as presented in the original research paper of each network. In this section, you will see how performance evolved from one network to another over the years.
- Network performance —Finally, you will see how each network performed on benchmark datasets like MNIST and ImageNet, as represented in their research papers.

The three main objectives of this chapter follow:

- Understanding the architecture and learning hyperparameters of advanced CNNs. You will be implementing simpler CNNs like AlexNet and VGGNet for simple- to medium-complexity problems. For very complex problems, you might want to use deeper networks like Inception and ResNet.
- Understanding the novel features of each network and the reasons they were developed.
  Each succeeding CNN architecture solves a specific limitation in the previous one. After reading about the five networks in this chapter (and their research papers), you will build a strong foundation for reading and understanding new networks as they emerge.
- Learning how CNNs have evolved and their designers' thought processes. This will help you build an instinct for what works well and what problems may arise when building your own network.

In chapter 3, you learned about the basic building blocks of convolutional layers, pooling layers, and fully connected layers of CNNs. As you will see in this chapter, in recent years a lot of CV research has focused on how to put together these basic building blocks to form effective CNNs. One of the best ways for you to develop your intuition is to examine and learn from these architectures (similar to how most of us may have learned to write code by reading other people's code).

To get the most out of this chapter, you are encouraged to read the research papers linked in each section before you read my explanation. What you have learned
in part 1 of this book fully equips you to start reading research papers written by pioneers in the AI field. Reading and implementing research papers is by far one of the most valuable skills that you will build from reading this book.

TIP Personally, I feel the task of going through a research paper, interpreting the crux behind it, and implementing the code is a very important skill every DL enthusiast and practitioner should possess. Practically implementing research ideas brings out the thought process of the author and also helps transform those ideas into real-world industry applications. I hope that, by reading this chapter, you will get comfortable reading research papers and implementing their findings in your own work. The fast-paced evolution in this field requires us to always stay up-to-date with the latest research. What you will learn in this book (or in other publications) now will not be the latest and greatest in three or four years, maybe even sooner. The most valuable asset that I want you to take away from this book is a strong DL foundation that empowers you to get out in the real world and be able to read the latest research and implement it yourself.

Are you ready? Let's get started!

5.1 CNN design patterns
Before we jump into the details of the common CNN architectures, we are going to look at some common design choices when it comes to CNNs. It might seem at first that there are way too many choices to make. Every time we learn about something new in deep learning, it gives us more hyperparameters to design.
So it is good to be able to narrow down our choices by looking at some common patterns that were created by pioneer researchers in the field, so we can understand their motivation and start from where they ended rather than doing things completely randomly:

- Pattern 1: Feature extraction and classification —Convolutional nets are typically composed of two parts: the feature extraction part, which consists of a series of convolutional layers, and the classification part, which consists of a series of fully connected layers (figure 5.1). This is pretty much always the case with ConvNets, starting from LeNet and AlexNet to the very recent CNNs that have come out in the past few years, like Inception and ResNet.

[Figure 5.1: Convolutional nets generally include a feature extraction part (a series of Conv layers) followed by a classification part (a series of FC layers).]
- Pattern 2: Image depth increases, and dimensions decrease —The input data at each layer is an image. With each layer, we apply a new convolutional layer over a new image. This pushes us to think of an image in a more generic way. First, you see that each image is a 3D object that has a height, width, and depth. Depth is referred to as the color channel, where depth is 1 for grayscale images and 3 for color images. In the later layers, the images still have depth, but they are not colors per se: they are feature maps that represent the features extracted from the previous layers. That's why the depth increases as we go deeper through the network layers. In figure 5.2, the depth of an image is equal to 96; this represents the number of feature maps in the layer. So, that's one pattern you will always see: the image depth increases, and the dimensions decrease.

- Pattern 3: Fully connected layers —This generally isn't as strict a pattern as the previous two, but it's very helpful to know. Typically, all fully connected layers in a network either have the same number of hidden units or decrease at each layer. It is rare to find a network where the number of units in the fully connected layers increases at each layer. Research has found that keeping the number of units constant doesn't hurt the neural network, so it may be a good approach if you want to limit the number of choices you have to make when designing your network. This way, all you have to do is pick a number of units per layer and apply that to all your fully connected layers.

Now that you understand the basic CNN patterns, let's look at some architectures that have implemented them. Most of these architectures are famous because they performed well in the ImageNet competition.
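Pattern 2's shrinking spatial dimensions can be checked with the standard output-size formula for convolution and pooling layers, out = (in − kernel + 2 × padding) / stride + 1. A quick illustration (the layer sizes below follow the CIFAR-10 model from chapter 4, not figure 5.2):

```python
def conv_output_size(in_size, kernel, stride=1, padding=0):
    # standard formula for one spatial dimension of a conv or pooling layer
    return (in_size - kernel + 2 * padding) // stride + 1

# 32x32 input, 3x3 kernel with 'same' padding (padding=1), stride 1: size preserved
print(conv_output_size(32, kernel=3, stride=1, padding=1))  # 32
# 2x2 max pooling with stride 2 halves each spatial dimension
print(conv_output_size(32, kernel=2, stride=2))             # 16
print(conv_output_size(16, kernel=2, stride=2))             # 8
```

Meanwhile, the depth at each stage is simply the number of filters you choose for that layer, which is why it can grow (32, 64, 128, …) even as the spatial dimensions shrink.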
Figure 5.2 Image depth increases, and the dimensions decrease.

ImageNet is a famous benchmark that
contains millions of images; DL and CV researchers use the ImageNet dataset to compare algorithms. More on that later.

NOTE The snippets in this chapter are not meant to be runnable. The goal is to show you how to implement the specifications that are defined in a research paper. Visit the book's website (www.manning.com/books/deep-learning-for-vision-systems) or GitHub repo (https://github.com/moelgendy/deep_learning_for_vision_systems) for the full executable code.

Now, let's get started with the first network we are going to discuss in this chapter: LeNet.

5.2 LeNet-5
In 1998, LeCun et al. introduced a pioneering CNN called LeNet-5.[1] The LeNet-5 architecture is straightforward, and its components are not new to you (they were new back in 1998); you learned about convolutional, pooling, and fully connected layers in chapter 3. The architecture is composed of five weight layers, hence the name LeNet-5: three convolutional layers and two fully connected layers.

DEFINITION We refer to the convolutional and fully connected layers as weight layers because they contain trainable weights, as opposed to pooling layers, which don't contain any weights. The common convention is to use the number of weight layers to describe the depth of the network. For example, AlexNet (explained next) is said to be eight layers deep because it contains five convolutional and three fully connected layers. We care more about weight layers mainly because they reflect the model's computational complexity.

5.2.1 LeNet architecture
The architecture of LeNet-5 is shown in figure 5.3:

INPUT IMAGE ⇒ C1 ⇒ TANH ⇒ S2 ⇒ C3 ⇒ TANH ⇒ S4 ⇒ C5 ⇒ TANH ⇒ FC6 ⇒ SOFTMAX7

where C is a convolutional layer, S is a subsampling or pooling layer, and FC is a fully connected layer.

Notice that Yann LeCun and his team used tanh as an activation function instead of the current state-of-the-art ReLU.
This is because in 1998, ReLU had not yet been used in the context of DL, and it was more common to use tanh or sigmoid as an activation function in the hidden layers. Without further ado, let's implement LeNet-5 in Keras.

[1] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-Based Learning Applied to Document Recognition," Proceedings of the IEEE 86 (11): 2278–2324, http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf.
5.2.2 LeNet-5 implementation in Keras
To implement LeNet-5 in Keras, read the original paper and follow the architecture information from pages 6–8. Here are the main takeaways for building the LeNet-5 network:

- Number of filters in each convolutional layer. As you can see in figure 5.3 (and as defined in the paper), the depth (number of filters) of each convolutional layer is as follows: C1 has 6 filters, C3 has 16, and C5 has 120.

- Kernel size of each convolutional layer. The paper specifies that the kernel_size is 5 × 5.

- Subsampling (pooling) layers. A subsampling (pooling) layer is added after each convolutional layer. The receptive field of each unit is a 2 × 2 area (for example, pool_size is 2). Note that the LeNet-5 creators used average pooling, which computes the average value of its inputs, instead of the max pooling layer that we used in our earlier projects, which passes on the maximum value of its inputs. You can try both if you are interested, to see the difference.
For this experiment, we are going to follow the paper's architecture.

- Activation function. As mentioned before, the creators of LeNet-5 used the tanh activation function for the hidden layers because symmetric functions are believed to yield faster convergence compared to sigmoid functions (figure 5.4).

Figure 5.3 LeNet architecture: input 28 × 28; C1 feature maps 28 × 28 × 6; S2 feature maps 14 × 14 × 6; C3 feature maps 10 × 10 × 16; S4 feature maps 5 × 5 × 16; C5 layer of 120 units; F6 layer of 84 units; output of 10 units.

Figure 5.4 The LeNet architecture consists of convolutional kernels of size 5 × 5; pooling layers; an activation function (tanh); and three fully connected layers with 120, 84, and 10 neurons, respectively.
Now let's put that in code to build the LeNet-5 architecture:

# Import the Keras model and layers
from keras.models import Sequential
from keras.layers import Conv2D, AveragePooling2D, Flatten, Dense

# Instantiate an empty sequential model
model = Sequential()

# C1 Convolutional Layer
model.add(Conv2D(filters=6, kernel_size=5, strides=1, activation='tanh',
                 input_shape=(28, 28, 1), padding='same'))

# S2 Pooling Layer
model.add(AveragePooling2D(pool_size=2, strides=2, padding='valid'))

# C3 Convolutional Layer
model.add(Conv2D(filters=16, kernel_size=5, strides=1, activation='tanh',
                 padding='valid'))

# S4 Pooling Layer
model.add(AveragePooling2D(pool_size=2, strides=2, padding='valid'))

# C5 Convolutional Layer
model.add(Conv2D(filters=120, kernel_size=5, strides=1, activation='tanh',
                 padding='valid'))

# Flatten the CNN output to feed it to the fully connected layers
model.add(Flatten())

# FC6 Fully Connected Layer
model.add(Dense(units=84, activation='tanh'))

# FC7 Output layer with softmax activation
model.add(Dense(units=10, activation='softmax'))

# Print the model summary (figure 5.5)
model.summary()

LeNet-5 is a small neural network by today's standards. It has 61,706 parameters, compared to millions of parameters in more modern networks, as you will see later in this chapter.

A note when reading the papers discussed in this chapter
When you read the LeNet-5 paper, just know that it is harder to read than the others we will cover in this chapter. Most of the ideas that I mention in this section are in sections 2 and 3 of the paper. The later sections of the paper talk about something called the graph transformer network, which isn't widely used today.
So if you do try to read the paper, I recommend focusing on section 2, which talks about the LeNet architecture and the learning details; then maybe take a quick look at section 3, which includes a bunch of experiments and results that are pretty interesting.

I recommend starting with the AlexNet paper (discussed in section 5.3), followed by the VGGNet paper (section 5.4), and then the LeNet paper. It is a good classic to look at once you have gone over the other ones.
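As a quick sanity check on the 61,706 figure, the layer-by-layer parameter counts in figure 5.5 can be reproduced by hand with the standard formulas. The helper names below (conv_params, dense_params) are my own, not from the book's code; this is a minimal sketch:

```python
def conv_params(kernel, channels_in, filters):
    # Each filter has kernel * kernel * channels_in weights plus one bias.
    return (kernel * kernel * channels_in + 1) * filters

def dense_params(units_in, units_out):
    # Weight matrix plus one bias per output unit.
    return units_in * units_out + units_out

total = (conv_params(5, 1, 6)       # C1: 156
         + conv_params(5, 6, 16)    # C3: 2,416
         + conv_params(5, 16, 120)  # C5: 48,120
         + dense_params(120, 84)    # FC6: 10,164
         + dense_params(84, 10))    # Output: 850
print(total)  # 61706, matching model.summary()
```

Each term matches the Param # column that Keras prints for the corresponding layer.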
Figure 5.5 LeNet-5 model summary:

Layer (type)                  Output Shape         Param #
conv2d_1 (Conv2D)             (None, 28, 28, 6)    156
average_pooling2d_1 (Average  (None, 14, 14, 6)    0
conv2d_2 (Conv2D)             (None, 10, 10, 16)   2416
average_pooling2d_2 (Average  (None, 5, 5, 16)     0
conv2d_3 (Conv2D)             (None, 1, 1, 120)    48120
flatten_1 (Flatten)           (None, 120)          0
dense_1 (Dense)               (None, 84)           10164
dense_2 (Dense)               (None, 10)           850
Total params: 61,706
Trainable params: 61,706
Non-trainable params: 0

5.2.3 Setting up the learning hyperparameters
LeCun and his team used scheduled decay learning, where the value of the learning rate was decreased using the following schedule: 0.0005 for the first two epochs, 0.0002 for the next three epochs, 0.00005 for the next four, and then 0.00001 thereafter. In the paper, the authors trained their network for 20 epochs.

Let's build an lr_schedule function with this schedule. The function takes an integer epoch number as an argument and returns the learning rate (lr):

def lr_schedule(epoch):
    # Keras passes a 0-based epoch index, so epochs 0-1 are the "first
    # two epochs": lr is 0.0005 for the first two epochs, 0.0002 for the
    # next three, 0.00005 for the next four, then 0.00001 thereafter.
    if epoch < 2:
        lr = 5e-4
    elif epoch < 5:
        lr = 2e-4
    elif epoch < 9:
        lr = 5e-5
    else:
        lr = 1e-5
    return lr

We use the lr_schedule function in the following code snippet to compile the model:

from keras.callbacks import ModelCheckpoint, LearningRateScheduler

lr_scheduler = LearningRateScheduler(lr_schedule)

checkpoint = ModelCheckpoint(filepath='path_to_save_file/file.hdf5',
                             monitor='val_acc',
                             verbose=1,
                             save_best_only=True)

callbacks = [checkpoint, lr_scheduler]

model.compile(loss='categorical_crossentropy', optimizer='sgd',
              metrics=['accuracy'])

Now start the network training for 20 epochs, as mentioned in the paper:

hist = model.fit(X_train, y_train, batch_size=32, epochs=20,
                 validation_data=(X_test, y_test), callbacks=callbacks,
                 verbose=2, shuffle=True)

See the downloadable notebook included with the book's code for the full code implementation, if you want to see this in action.

5.2.4 LeNet performance on the MNIST dataset
When you train LeNet-5 on the MNIST dataset, you will get above 99% accuracy (see the code notebook with the book's code). Try re-running this experiment with the ReLU activation function in the hidden layers, and observe the difference in the network's performance.

5.3 AlexNet
LeNet performs very well on the MNIST dataset. But it turns out that the MNIST dataset is very simple because it contains grayscale images (1 channel) and classifies into only 10 classes, which makes it an easier challenge. The main motivation behind AlexNet was to build a deeper network that can learn more complex functions.

AlexNet (figure 5.6) was the winner of the ILSVRC image classification competition in 2012. Krizhevsky et al. created the neural network architecture and trained it on 1.2 million high-resolution images from the ImageNet dataset, classified into 1,000 different classes.[2] AlexNet was state of the art at its time because it was the first really "deep" network that opened the door for the CV community to seriously consider convolutional networks in their applications.
We will explain deeper networks, like VGGNet and ResNet, later in this chapter, but it is good to see how ConvNets evolved and the main drawbacks of AlexNet that motivated the later networks.

[2] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks," Communications of the ACM 60 (6): 84–90, https://dl.acm.org/doi/10.1145/3065386.

As you can see in figure 5.6, AlexNet has a lot of similarities to LeNet but is much deeper (more hidden layers) and bigger (more filters per layer). They have similar building blocks: a series of convolutional and pooling layers stacked on top of each other, followed by fully connected layers and a softmax. We've seen that LeNet has around 61,000 parameters, whereas AlexNet has about 60 million parameters and
650,000 neurons, which gives it a larger learning capacity to understand more complex features. This allowed AlexNet to achieve remarkable performance in the ILSVRC image classification competition in 2012.

ImageNet and ILSVRC
ImageNet (http://image-net.org/index) is a large visual database designed for use in visual object recognition software research. It is aimed at labeling and categorizing images into almost 22,000 categories based on a defined set of words and phrases. The images were collected from the web and labeled by humans using Amazon's Mechanical Turk crowdsourcing tool. At the time of this writing, there are over 14 million images in the ImageNet project. To organize such a massive amount of data, the creators of ImageNet followed the WordNet hierarchy, where each meaningful word/phrase in WordNet is called a synonym set (synset for short). Within the ImageNet project, images are organized according to these synsets, with the goal being to have 1,000+ images per synset.

The ImageNet project runs an annual software contest called the ImageNet Large Scale Visual Recognition Challenge (ILSVRC, www.image-net.org/challenges/LSVRC), where software programs compete to correctly classify and detect objects and scenes. We will use the ILSVRC challenge as a benchmark to compare different networks' performance.

Figure 5.6 AlexNet architecture: an RGB input image passes through five convolutional layers (CONV1 with 96 filters and a stride of 4, then CONV2 through CONV5 with 256, 384, 384, and 256 filters, with max pooling after CONV1, CONV2, and CONV5) and three fully connected layers (4096, 4096, and 1000 units).
5.3.1 AlexNet architecture
You saw a version of the AlexNet architecture in the project at the end of chapter 3. The architecture is pretty straightforward. It consists of:

- Convolutional layers with the following kernel sizes: 11 × 11, 5 × 5, and 3 × 3
- Max pooling layers for downsampling the images
- Dropout layers to avoid overfitting
- Unlike LeNet, ReLU activation functions in the hidden layers and a softmax activation in the output layer

AlexNet consists of five convolutional layers, some of which are followed by max pooling layers, and three fully connected layers with a final 1000-way softmax. The architecture can be represented in text as follows:

INPUT IMAGE ⇒ CONV1 ⇒ POOL2 ⇒ CONV3 ⇒ POOL4 ⇒ CONV5 ⇒ CONV6 ⇒ CONV7 ⇒ POOL8 ⇒ FC9 ⇒ FC10 ⇒ SOFTMAX11

5.3.2 Novel features of AlexNet
Before AlexNet, DL was starting to gain traction in speech recognition and a few other areas. But AlexNet was the milestone that convinced a lot of people in the CV community to take a serious look at DL and demonstrated that it really works in CV. AlexNet introduced some novel features that were not used in previous CNNs (like LeNet). You are already familiar with all of them from the previous chapters, so we'll go through them quickly here.

RELU ACTIVATION FUNCTION
AlexNet uses ReLU for the nonlinear part instead of the tanh and sigmoid functions that were the earlier standard for traditional neural networks (like LeNet). ReLU was used in the hidden layers of the AlexNet architecture because it trains much faster. This is because the derivative of the sigmoid function becomes very small in the saturating region, so the updates applied to the weights almost vanish. This phenomenon is called the vanishing gradient problem.
ReLU is represented by this equation:

f(x) = max(0, x)

It's discussed in detail in chapter 2.

The vanishing gradient problem
Certain activation functions, like the sigmoid function, squish a large input space into a small output space between 0 and 1 (–1 to 1 for tanh activations). Therefore, a large change in the input of the sigmoid function causes a small change in the output. As a result, the derivative becomes very small:
The vanishing gradient problem: a large change in the input of the sigmoid function causes a negligible change in the output. (The figure plots the sigmoid function and its derivative; the derivative approaches zero in the saturating regions.)

We will talk more about the vanishing gradient phenomenon later in this chapter when we look at the ResNet architecture.

DROPOUT LAYER
As explained in chapter 3, dropout layers are used to prevent the neural network from overfitting. The neurons that are "dropped out" do not contribute to the forward pass and do not participate in backpropagation. This means every time an input is presented, the neural network samples a different architecture, but all of these architectures share the same weights. This technique reduces complex co-adaptations of neurons, since a neuron cannot rely on the presence of particular other neurons. Therefore, each neuron is forced to learn more robust features that are useful in conjunction with many different random subsets of the other neurons. Krizhevsky et al. used dropout with a probability of 0.5 in the two fully connected layers.

DATA AUGMENTATION
One popular and very effective approach to avoid overfitting is to artificially enlarge the dataset using label-preserving transformations. This happens by generating new instances of the training images with transformations like image rotation, flipping, scaling, and many more. Data augmentation is explained in detail in chapter 4.

LOCAL RESPONSE NORMALIZATION
AlexNet uses local response normalization, which is different from the batch normalization technique explained in chapter 4. Normalization helps to speed up convergence. Nowadays, batch normalization is used instead of local response normalization; we will use BN in our implementation in this chapter.
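The sidebar's point can be seen numerically. The sigmoid derivative is σ'(x) = σ(x)(1 − σ(x)), which never exceeds 0.25, so gradients that pass through many sigmoid layers shrink multiplicatively. This is a rough illustration of the effect only (it ignores the weight terms in a real backward pass):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_deriv(x):
    # Derivative of the sigmoid: s * (1 - s), which peaks at 0.25 when x = 0.
    s = sigmoid(x)
    return s * (1.0 - s)

print(sigmoid_deriv(0))  # 0.25: the maximum possible value
print(sigmoid_deriv(5))  # ~0.0066: almost zero in the saturating region

# Even at its peak, chaining 10 sigmoid layers scales the gradient by at most:
print(0.25 ** 10)        # ~9.5e-07
```

ReLU avoids this because its derivative is exactly 1 for all positive inputs, so the gradient does not shrink as it passes through active units.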
WEIGHT REGULARIZATION
Krizhevsky et al. used a weight decay of 0.0005. Weight decay is another term for the L2 regularization technique explained in chapter 4. This approach reduces the overfitting of the DL neural network model on the training data, allowing the network to generalize better to new data:

model.add(Conv2D(32, (3,3), kernel_regularizer=l2(λ)))

The lambda (λ) value is a weight decay hyperparameter that you can tune. If you still see overfitting, you can reduce it by increasing the lambda value. In this case, Krizhevsky and his team found that a small decay value of 0.0005 was good enough for the model to learn.

TRAINING ON MULTIPLE GPUS
Krizhevsky et al. used a GTX 580 GPU with only 3 GB of memory. It was state of the art at the time but not large enough to train the 1.2 million training examples in the dataset. Therefore, the team developed a complicated way to spread the network across two GPUs. The basic idea was that a lot of the layers were split across two different GPUs that communicated with each other. You don't need to worry about these details today: there are far more advanced ways to train deep networks on distributed GPUs, as we will discuss later in this book.

5.3.3 AlexNet implementation in Keras
Now that you've learned the basic components of AlexNet and its novel features, let's apply them to build the AlexNet neural network. I suggest that you read the architecture description on page 4 of the original paper and follow along.

As depicted in figure 5.7, the network contains eight weight layers: the first five are convolutional, and the remaining three are fully connected. The output of the last fully connected layer is fed to a 1000-way softmax that produces a distribution over the 1,000 class labels.

NOTE AlexNet input starts with 227 × 227 × 3 images.
If you read the paper, you will notice that it refers to a dimension volume of 224 × 224 × 3 for the input images. But the numbers make sense only for 227 × 227 × 3 images (figure 5.7). This could be a typo in the paper.

The layers are stacked together as follows:

- CONV1. The authors used a large kernel size (11). They also used a large stride (4), which makes the input dimensions shrink by roughly a factor of 4 (from 227 × 227 to 55 × 55). We calculate the dimensions of the output as follows:

(227 − 11) / 4 + 1 = 55

and the depth is the number of filters in the convolutional layer (96). The output dimensions are 55 × 55 × 96.
- POOL with a filter size of 3 × 3. This reduces the dimensions from 55 × 55 to 27 × 27:

(55 − 3) / 2 + 1 = 27

The pooling layer doesn't change the depth of the volume. The output dimensions are 27 × 27 × 96.

Similarly, we can calculate the output dimensions of the remaining layers:

- CONV2. Kernel size = 5, depth = 256, and stride = 1
- POOL. Size = 3 × 3, which downsamples its input dimensions from 27 × 27 to 13 × 13
- CONV3. Kernel size = 3, depth = 384, and stride = 1
- CONV4. Kernel size = 3, depth = 384, and stride = 1
- CONV5. Kernel size = 3, depth = 256, and stride = 1
- POOL. Size = 3 × 3, which downsamples its input from 13 × 13 to 6 × 6
- Flatten layer. Flattens the dimension volume 6 × 6 × 256 to 1 × 9,216
- FC with 4,096 neurons
- FC with 4,096 neurons
- Softmax layer with 1,000 neurons

Figure 5.7 AlexNet contains eight weight layers: five convolutional and three fully connected. Two of the fully connected layers contain 4,096 neurons each, and the output is fed to a 1,000-neuron softmax.
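The dimension arithmetic above can be checked with a small helper implementing the standard output-size formula, (input + 2 × padding − kernel) / stride + 1. The helper name (conv_out) is mine, not from the book; the padding values follow the per-layer settings shown in figure 5.7:

```python
def conv_out(size, kernel, stride=1, pad=0):
    # Standard output-size formula, shared by convolution and pooling layers.
    return (size + 2 * pad - kernel) // stride + 1

# Tracing AlexNet's spatial dimensions layer by layer:
s = conv_out(227, kernel=11, stride=4)      # CONV1 -> 55
s = conv_out(s, kernel=3, stride=2)         # POOL  -> 27
s = conv_out(s, kernel=5, stride=1, pad=2)  # CONV2 -> 27
s = conv_out(s, kernel=3, stride=2)         # POOL  -> 13
s = conv_out(s, kernel=3, stride=1, pad=1)  # CONV3 -> 13
s = conv_out(s, kernel=3, stride=1, pad=1)  # CONV4 -> 13
s = conv_out(s, kernel=3, stride=1, pad=1)  # CONV5 -> 13
s = conv_out(s, kernel=3, stride=2)         # POOL  -> 6
print(s)  # 6, so the flattened volume is 6 * 6 * 256 = 9216
```

Note that CONV2 through CONV5 use "same"-style padding (pad = 2 for the 5 × 5 kernel, pad = 1 for the 3 × 3 kernels), which is why they preserve the spatial dimensions.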
NOTE You might be wondering how Krizhevsky and his team decided on this configuration. Setting up the right values of network hyperparameters like kernel size, depth, stride, and pooling size is tedious and requires a lot of trial and error. The idea remains the same: we want to apply many weight layers to increase the model's capacity to learn more complex functions. We also need to add pooling layers in between to downsample the input dimensions, as discussed in chapter 2. With that said, setting up the exact hyperparameters is one of the challenges of CNNs. VGGNet (explained next) solves this problem by implementing a uniform layer configuration to reduce the amount of trial and error involved in designing your network.

Note that all of the convolutional layers are followed by a batch normalization layer, and all of the hidden layers are followed by ReLU activations. Now, let's put that in code to build the AlexNet architecture:

# Import the Keras model, layers, and regularizers
from keras.models import Sequential
from keras.regularizers import l2
from keras.layers import Conv2D, Flatten, Dense, Activation, MaxPool2D, BatchNormalization, Dropout

# Instantiate an empty sequential model
model = Sequential()

# 1st layer (CONV + pool + batchnorm)
model.add(Conv2D(filters=96, kernel_size=(11,11), strides=(4,4), padding='valid',
                 input_shape=(227,227,3)))
# The activation function can be added in its own layer or within the
# Conv2D function, as we did in previous implementations.
model.add(Activation('relu'))
model.add(MaxPool2D(pool_size=(3,3), strides=(2,2)))
model.add(BatchNormalization())

# 2nd layer (CONV + pool + batchnorm)
model.add(Conv2D(filters=256, kernel_size=(5,5), strides=(1,1), padding='same',
                 kernel_regularizer=l2(0.0005)))
model.add(Activation('relu'))
model.add(MaxPool2D(pool_size=(3,3), strides=(2,2), padding='valid'))
model.add(BatchNormalization())

# layer 3 (CONV + batchnorm); note that the AlexNet authors
# did not add a pooling layer here
model.add(Conv2D(filters=384, kernel_size=(3,3), strides=(1,1), padding='same',
                 kernel_regularizer=l2(0.0005)))
model.add(Activation('relu'))
model.add(BatchNormalization())

# layer 4 (CONV + batchnorm), similar to layer 3
model.add(Conv2D(filters=384, kernel_size=(3,3), strides=(1,1), padding='same',
                 kernel_regularizer=l2(0.0005)))
model.add(Activation('relu'))
model.add(BatchNormalization())

# layer 5 (CONV + batchnorm)
model.add(Conv2D(filters=256, kernel_size=(3,3), strides=(1,1), padding='same',
                 kernel_regularizer=l2(0.0005)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPool2D(pool_size=(3,3), strides=(2,2), padding='valid'))

# Flatten the CNN output to feed it to the fully connected layers
model.add(Flatten())

# layer 6 (Dense layer + dropout)
model.add(Dense(units=4096, activation='relu'))
model.add(Dropout(0.5))

# layer 7 (Dense layer + dropout)
model.add(Dense(units=4096, activation='relu'))
model.add(Dropout(0.5))

# layer 8 (softmax output layer)
model.add(Dense(units=1000, activation='softmax'))

# Print the model summary
model.summary()

When you print the model summary, you will see that the number of total parameters is 62 million:

Total params: 62,383,848
Trainable params: 62,381,096
Non-trainable params: 2,752

NOTE Both LeNet and AlexNet have many hyperparameters to tune. The authors of those networks had to go through many experiments to set the kernel size, strides, and padding for each layer, which makes the networks harder to understand and manage. VGGNet (explained next) solves this problem with a very simple, uniform architecture.

5.3.4 Setting up the learning hyperparameters
AlexNet was trained for 90 epochs, which took 6 days on two Nvidia GeForce GTX 580 GPUs running simultaneously. This is why you will see that the network is split into two pipelines in the original paper. Krizhevsky et al. started with an initial learning rate of 0.01 with a momentum of 0.9. The lr is then divided by 10 when the validation error stops improving:

import keras
import numpy as np
from keras.callbacks import ReduceLROnPlateau

# Reduce the learning rate when the validation error plateaus
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=np.sqrt(0.1))

# Set the SGD optimizer with lr of 0.01 and momentum of 0.9
optimizer = keras.optimizers.SGD(lr=0.01, momentum=0.9)

# Compile the model
model.compile(loss='categorical_crossentropy', optimizer=optimizer,
              metrics=['accuracy'])
# Train the model, passing reduce_lr via the callbacks argument
model.fit(X_train, y_train, batch_size=128, epochs=90,
          validation_data=(X_test, y_test), verbose=2, callbacks=[reduce_lr])

5.3.5 AlexNet performance
AlexNet significantly outperformed all the prior competitors in the 2012 ILSVRC challenge. It achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry that year, which used other traditional classifiers. This huge improvement in performance attracted the CV community's attention to the potential of convolutional networks to solve complex vision problems, and it led to more advanced CNN architectures, as you will see in the following sections of this chapter.

Top-1 and top-5 error rates
Top-1 and top-5 are terms used mostly in research papers to describe the accuracy of an algorithm on a given classification task. The top-1 error rate is the percentage of the time that the classifier did not give the correct class the highest score, and the top-5 error rate is the percentage of the time that the classifier did not include the correct class among its top five guesses.

Let's apply this in an example. Suppose there are 100 classes, and we show the network an image of a cat. The classifier outputs a score or confidence value for each class as follows:

1. Cat: 70%
2. Dog: 20%
3. Horse: 5%
4. Motorcycle: 4%
5. Car: 0.6%
6. Plane: 0.4%

This means the classifier was able to correctly predict the true class of the image in the top-1. Try the same experiment for 100 images and observe how many times the classifier missed the true label; that's your top-1 error rate.

The same idea holds for the top-5 error rate. In the example, if the true label is Horse, then the classifier missed the true label in the top-1 but caught it in its first five predicted classes (the top-5).
Calculate how many times the classifier missed the true label in the top five predictions, and that's your top-5 error rate.

Ideally, we want the model to always predict the correct class in the top-1. But top-5 gives a more holistic evaluation of the model's performance by measuring how close the model is to the correct prediction when it misses.
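The sidebar's procedure can be written out in a few lines of plain Python. This is a minimal sketch of my own (real evaluation code would operate on batched model outputs rather than dicts), using the sidebar's cat example:

```python
def top_k_error(scores_list, true_labels, k):
    """scores_list: one dict of class -> score per image.
    Returns the fraction of images whose true label is not among
    the k highest-scoring classes."""
    misses = 0
    for scores, truth in zip(scores_list, true_labels):
        top_k = sorted(scores, key=scores.get, reverse=True)[:k]
        if truth not in top_k:
            misses += 1
    return misses / len(true_labels)

# The sidebar's example scores for one image
scores = {'cat': 0.70, 'dog': 0.20, 'horse': 0.05,
          'motorcycle': 0.04, 'car': 0.006, 'plane': 0.004}
print(top_k_error([scores], ['cat'], k=1))    # 0.0: correct in the top-1
print(top_k_error([scores], ['horse'], k=1))  # 1.0: missed in the top-1
print(top_k_error([scores], ['horse'], k=5))  # 0.0: caught in the top-5
```

With a list of 100 images, the same function returns exactly the top-1 or top-5 error rate the sidebar describes.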
5.4 VGGNet
VGGNet was developed in 2014 by the Visual Geometry Group at Oxford University (hence the name VGG).[3] Its building components are exactly the same as those in LeNet and AlexNet, except that VGGNet is an even deeper network with more convolutional, pooling, and dense layers. Other than that, no new components are introduced.

VGGNet, also known as VGG16, consists of 16 weight layers: 13 convolutional layers and 3 fully connected layers. Its uniform architecture makes it appealing in the DL community because it is very easy to understand.

5.4.1 Novel features of VGGNet
We've seen how challenging it can be to set up CNN hyperparameters like kernel size, padding, strides, and so on. VGGNet's novel concept is its simple architecture containing uniform components (convolutional and pooling layers). It improves on AlexNet by replacing large kernel-sized filters (11 × 11 and 5 × 5 in the first and second convolutional layers, respectively) with multiple 3 × 3 kernel-sized filters, one after another.

The architecture is composed of a series of uniform convolutional building blocks followed by a unified pooling layer, where:

- All convolutional layers use 3 × 3 kernel-sized filters with a strides value of 1 and a padding value of same.
- All pooling layers have a 2 × 2 pool size and a strides value of 2.

Simonyan and Zisserman decided to use a smaller 3 × 3 kernel to allow the network to extract finer-level features of the image compared to AlexNet's large kernels (11 × 11 and 5 × 5). The idea is that, for a given convolutional receptive field, multiple stacked smaller kernels are better than one larger kernel, because having multiple nonlinear layers increases the depth of the network; this enables it to learn more complex features at a lower cost because it has fewer learning parameters.
For example, in their experiments, the authors noticed that a stack of two 3 × 3 convolutional layers (without spatial pooling in between) has an effective receptive field of 5 × 5, and three 3 × 3 convolutional layers have the effect of a 7 × 7 receptive field. So by using 3 × 3 convolutions with higher depth, you get the benefits of using more nonlinear rectification layers (ReLU), which makes the decision function more discriminative. Second, this decreases the number of training parameters: a stack of three 3 × 3 convolutional layers with C channels is parameterized by 3(3²C²) = 27C² weights, whereas a single 7 × 7 convolutional layer requires 7²C² = 49C² weights, which is 81% more parameters.

3 Karen Simonyan and Andrew Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," 2014, https://arxiv.org/pdf/1409.1556v6.pdf.
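The receptive-field and weight counts above follow from two short formulas, sketched here in plain Python (the helper names are ours, not from the book):

```python
def stacked_receptive_field(num_layers, kernel=3):
    """Effective receptive field of num_layers stacked stride-1 convolutions."""
    return num_layers * (kernel - 1) + 1

def stack_weights(num_layers, kernel, channels):
    """Kernel weights in a stack of kernel x kernel conv layers, each with
    `channels` input and output channels (biases ignored)."""
    return num_layers * (kernel ** 2) * channels ** 2

# two stacked 3x3 layers see a 5x5 area; three see 7x7
print(stacked_receptive_field(2))  # 5
print(stacked_receptive_field(3))  # 7

# three 3x3 layers: 27*C^2 weights vs. one 7x7 layer: 49*C^2
C = 64
three_3x3 = stack_weights(3, 3, C)
one_7x7 = stack_weights(1, 7, C)
print(round(one_7x7 / three_3x3 - 1, 2))  # 0.81 -- the 7x7 layer has ~81% more weights
```

The 81% figure is independent of C, since both counts scale with C².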
This unified configuration of the convolutional and pooling components simplifies the neural network architecture, which makes it very easy to understand and implement.

The VGGNet architecture is developed by stacking 3 × 3 convolutional layers, with 2 × 2 pooling layers inserted after every few convolutional layers. This is followed by the traditional classifier, which is composed of fully connected layers and a softmax, as depicted in figure 5.8.

Receptive field
As explained in chapter 3, the receptive field is the effective area of the input image on which the output depends.

[Figure 5.8 VGGNet-16 architecture: blocks of 3 × 3 CONV layers with ReLU nonlinearity (64, 128, 256, then 512 filters), separated by 2 × 2 pooling layers, followed by FC 4096, FC 4096, and a softmax over 1,000 classes.]

5.4.2 VGGNet configurations
Simonyan and Zisserman created several configurations for the VGGNet architecture, as shown in figure 5.9. All of the configurations follow the same generic design. Configurations D and E are the most commonly used and are called VGG16 and VGG19, referring to the number of weight layers. Each block contains a series of 3 × 3 convolutional layers with similar hyperparameter configurations, followed by a 2 × 2 pooling layer.

Table 5.1 lists the number of learning parameters (in millions) for each configuration. VGG16 yields ~138 million parameters; VGG19, which is a deeper version of
VGGNet, has more than 144 million parameters. VGG16 is more commonly used because it performs almost as well as VGG19 but with fewer parameters.

Table 5.1 VGGNet architecture parameters (in millions)

Network            A    A-LRN  B    C    D    E
No. of parameters  133  133    133  134  138  144

[Figure 5.9 VGGNet architecture configurations: A and A-LRN (11 weight layers), B (13 weight layers), C and D (16 weight layers), and E (19 weight layers). All start from a 224 × 224 RGB input and stack conv3 blocks of 64, 128, 256, and 512 filters separated by maxpool layers, ending in FC-4096, FC-4096, and FC-1000.]

VGG16 IN KERAS
Configurations D (VGG16) and E (VGG19) are the most commonly used configurations because they are deeper networks that can learn more complex functions. So, in this chapter, we will implement configuration D, which has 16 weight layers. VGG19 (configuration E) can be similarly implemented by adding a fourth convolutional
layer to the third, fourth, and fifth blocks, as you can see in figure 5.9. This chapter's downloaded code includes a full implementation of both VGG16 and VGG19.

Note that Simonyan and Zisserman used the following regularization techniques to avoid overfitting:
- L2 regularization with a weight decay of 5 × 10^-4. For simplicity, this is not added to the implementation that follows.
- Dropout regularization for the first two fully connected layers, with the dropout ratio set to 0.5.

The Keras code is as follows:

# instantiate an empty sequential model
model = Sequential()

# block #1
model.add(Conv2D(filters=64, kernel_size=(3,3), strides=(1,1), activation='relu',
                 padding='same', input_shape=(224,224,3)))
model.add(Conv2D(filters=64, kernel_size=(3,3), strides=(1,1), activation='relu',
                 padding='same'))
model.add(MaxPool2D((2,2), strides=(2,2)))

# block #2
model.add(Conv2D(filters=128, kernel_size=(3,3), strides=(1,1), activation='relu',
                 padding='same'))
model.add(Conv2D(filters=128, kernel_size=(3,3), strides=(1,1), activation='relu',
                 padding='same'))
model.add(MaxPool2D((2,2), strides=(2,2)))

# block #3
model.add(Conv2D(filters=256, kernel_size=(3,3), strides=(1,1), activation='relu',
                 padding='same'))
model.add(Conv2D(filters=256, kernel_size=(3,3), strides=(1,1), activation='relu',
                 padding='same'))
model.add(Conv2D(filters=256, kernel_size=(3,3), strides=(1,1), activation='relu',
                 padding='same'))
model.add(MaxPool2D((2,2), strides=(2,2)))

# block #4
model.add(Conv2D(filters=512, kernel_size=(3,3), strides=(1,1), activation='relu',
                 padding='same'))
model.add(Conv2D(filters=512, kernel_size=(3,3), strides=(1,1), activation='relu',
                 padding='same'))
model.add(Conv2D(filters=512, kernel_size=(3,3), strides=(1,1), activation='relu',
                 padding='same'))
model.add(MaxPool2D((2,2), strides=(2,2)))
# block #5
model.add(Conv2D(filters=512, kernel_size=(3,3), strides=(1,1), activation='relu',
                 padding='same'))
model.add(Conv2D(filters=512, kernel_size=(3,3), strides=(1,1), activation='relu',
                 padding='same'))
model.add(Conv2D(filters=512, kernel_size=(3,3), strides=(1,1), activation='relu',
                 padding='same'))
model.add(MaxPool2D((2,2), strides=(2,2)))

# block #6 (classifier)
model.add(Flatten())
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1000, activation='softmax'))

# print the model summary
model.summary()

When you print the model summary, you will see that the number of total parameters is ~138 million:

Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0

5.4.3 Learning hyperparameters
Simonyan and Zisserman followed a training procedure similar to that of AlexNet: training is carried out using mini-batch gradient descent with momentum of 0.9. The learning rate is initially set to 0.01 and then decreased by a factor of 10 when the validation set accuracy stops improving.

5.4.4 VGGNet performance
VGG16 achieved a top-5 error rate of 8.1% on the ImageNet dataset, compared to 15.3% achieved by AlexNet. VGG19 did even better: it achieved a top-5 error rate of ~7.4%. It is worth noting that despite the larger number of parameters and the greater depth of VGGNet compared to AlexNet, VGGNet required fewer epochs to converge, due to the implicit regularization imposed by greater depth and smaller convolutional filter sizes.
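The ~138 million figure from the model summary, and the ~144 million for VGG19 in table 5.1, can be reproduced with a few lines of plain Python; the helper names here are ours, not Keras's:

```python
def conv_params(k, c_in, c_out):
    # k x k kernel weights per (input channel, filter) pair, plus one bias per filter
    return k * k * c_in * c_out + c_out

def dense_params(n_in, n_out):
    return n_in * n_out + n_out

# the 13 convolutional layers of configuration D (VGG16)
cfg = [64, 64, 128, 128, 256, 256, 256, 512, 512, 512, 512, 512, 512]
total, c_in = 0, 3
for c_out in cfg:
    total += conv_params(3, c_in, c_out)
    c_in = c_out

# five 2x2/2 pools shrink 224 to 7, so Flatten() yields 7*7*512 = 25,088 units
total += dense_params(7 * 7 * 512, 4096)
total += dense_params(4096, 4096)
total += dense_params(4096, 1000)
print(total)  # 138357544, matching the model summary

# VGG19 adds one extra 3x3 conv layer to each of blocks 3, 4, and 5
vgg19 = total + conv_params(3, 256, 256) + 2 * conv_params(3, 512, 512)
print(vgg19)  # 143667240, i.e. the ~144 million of configuration E
```

Note that the three fully connected layers alone account for roughly 123 million of the 138 million parameters, which is why later architectures replace them.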
5.5 Inception and GoogLeNet
The Inception network came to the world in 2014, when a group of researchers at Google published their paper "Going Deeper with Convolutions."4 The main hallmark of this architecture is building a deeper neural network while improving the utilization of the computing resources inside the network. One particular incarnation of the Inception network is called GoogLeNet and was used in the team's submission for ILSVRC 2014. It uses a network 22 layers deep (deeper than VGGNet) while using 12 times fewer parameters than AlexNet and achieving significantly more accurate results. The network used a CNN inspired by the classical networks (AlexNet and VGGNet) but implemented a novel element dubbed the inception module.

5.5.1 Novel features of Inception
Szegedy et al. took a different approach when designing their network architecture. As we've seen in the previous networks, there are architectural decisions you need to make for each layer when you are designing a network, such as these:
- The kernel size of the convolutional layer: We've seen in previous architectures that the kernel size varies: 1 × 1, 3 × 3, 5 × 5, and in some cases 11 × 11 (as in AlexNet). When designing the convolutional layer, we find ourselves trying to pick and tune the kernel size of each layer to fit our dataset. Recall from chapter 3 that smaller kernels capture finer details of the image, whereas bigger filters leave out minute details.
- When to use the pooling layer: AlexNet uses pooling layers every one or two convolutional layers to downsize spatial features. VGGNet applies pooling after every two, three, or four convolutional layers as the network gets deeper.

Configuring the kernel size and positioning the pooling layers are decisions we make mostly by trial and error, experimenting to get optimal results.
Inception says, "Instead of choosing a desired filter size in a convolutional layer and deciding where to place the pooling layers, let's apply all of them together in one block and call it the inception module."

That is, rather than stacking layers on top of each other as in classical architectures, Szegedy and his team suggest that we create an inception module consisting of several convolutional layers with different kernel sizes. The architecture is then developed by stacking the inception modules on top of each other. Figure 5.10 shows how classical convolutional networks are architected versus the Inception network.

4 Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich, "Going Deeper with Convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1-9, 2015, http://mng.bz/YryB.
From the diagram, you can observe the following:
- In classical architectures like LeNet, AlexNet, and VGGNet, we stack convolutional and pooling layers on top of each other to build the feature extractors. At the end, we add the dense fully connected layers to build the classifier.
- In the Inception architecture, we start with a convolutional layer and a pooling layer, stack inception modules and pooling layers to build the feature extractors, and then add the regular dense classifier layers.

[Figure 5.10 Classical convolutional networks vs. the Inception network: the classical CNN stacks CONV and POOL layers as feature extractors, topped by FC and softmax classifier layers; the Inception network stacks inception modules and POOL layers as feature extractors, topped by the same dense classifier.]

We've been treating the inception modules as black boxes to understand the bigger picture of the Inception architecture. Now, we will unpack the inception module to understand how it works.

5.5.2 Inception module: Naive version
The inception module is a combination of four layers:
- 1 × 1 convolutional layer
- 3 × 3 convolutional layer
- 5 × 5 convolutional layer
- 3 × 3 max-pooling layer

The outputs of these layers are concatenated into a single output volume that forms the input of the next stage. The naive representation of the inception module is shown in figure 5.11.

The diagram may look a little overwhelming, but the idea is simple to understand. Let's follow along with this example:
1. Suppose we have an input volume from the previous layer of size 32 × 32 × 200.
2. We feed this input to four layers simultaneously:
   - 1 × 1 convolutional layer with depth = 64 and padding = same. The output of this kernel = 32 × 32 × 64.
   - 3 × 3 convolutional layer with depth = 128 and padding = same. Output = 32 × 32 × 128.
   - 5 × 5 convolutional layer with depth = 32 and padding = same. Output = 32 × 32 × 32.
   - 3 × 3 max-pooling layer with padding = same and strides = 1. Output = 32 × 32 × 32.
3. We concatenate the depths of the four outputs to create one output volume of dimensions 32 × 32 × 256.

Now we have an inception module that takes an input volume of 32 × 32 × 200 and outputs a volume of 32 × 32 × 256.

NOTE In the previous example, we use a padding value of same. In Keras, padding can be set to same or valid, as we saw in chapter 3. The same value results in padding the input such that the output has the same length as the original input. We do that because we want the output to have width and height dimensions similar to the input, and we want all routes in the inception module to output similar dimensions to simplify the depth-concatenation process. We can then just add up the depths of all the outputs to concatenate them into one output volume to be fed to the next layer in our network.

[Figure 5.11 Naive representation of an inception module: the 32 × 32 × 200 previous layer feeds 1 × 1 convolutions (32 × 32 × 64), 3 × 3 convolutions (32 × 32 × 128), 5 × 5 convolutions (32 × 32 × 32), and 3 × 3 max pooling (32 × 32 × 32), whose outputs are concatenated into a 32 × 32 × 256 volume.]
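Because every route uses same padding and stride 1, tracking the module's output shape is just an addition. A minimal sketch using the example values above (our own helper, not book code):

```python
def naive_inception_output(h, w, route_depths):
    # same padding + stride 1: each route keeps h x w, so
    # depth concatenation simply sums the route depths
    return (h, w, sum(route_depths))

# 1x1 conv (64) + 3x3 conv (128) + 5x5 conv (32) + 3x3 max pool (32), as in the example
print(naive_inception_output(32, 32, [64, 128, 32, 32]))  # (32, 32, 256)
```

This is why same padding matters here: with valid padding the four routes would produce different spatial sizes and could not be concatenated.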
5.5.3 Inception module with dimensionality reduction
The naive representation of the inception module that we just saw has a big computational cost problem that comes with processing larger filters like the 5 × 5 convolutional layer. To get a better sense of the compute problem with the naive representation, let's calculate the number of operations performed by the 5 × 5 convolutional layer in the previous example.

The input volume with dimensions of 32 × 32 × 200 is fed to the 5 × 5 convolutional layer with 32 filters, each of dimensions 5 × 5 × 200 (a filter's depth always matches the depth of its input). The layer produces 32 × 32 × 32 output values, and each value requires 5 × 5 × 200 multiplications, so the total number of multiplications the computer needs to compute is (32 × 32 × 32) × (5 × 5 × 200), which is more than 163 million operations. While we can perform this many operations with modern computers, this is still pretty expensive. This is where the dimensionality reduction layers can be very useful.

DIMENSIONALITY REDUCTION LAYER (1 × 1 CONVOLUTIONAL LAYER)
The 1 × 1 convolutional layer can reduce the operational cost of 163 million operations to about a tenth of that. That is why it is called a reduce layer. The idea is to add a 1 × 1 convolutional layer before the bigger kernels, like the 3 × 3 and 5 × 5 convolutional layers, to reduce their input depth, which in turn reduces the number of operations.

Let's look at an example. Suppose we have an input volume of dimensions 32 × 32 × 200. We add a 1 × 1 convolutional layer with a depth of 16. This reduces the volume's depth from 200 to 16 channels.
We can then apply the 5 × 5 convolutional layer on the output, which has much less depth (figure 5.12).

[Figure 5.12 Dimensionality reduction reduces the computational cost by reducing the depth of the layer: the 32 × 32 × 200 input passes through a CONV 1 × 1 with 16 filters (the bottleneck layer) to give 32 × 32 × 16, then through a CONV 5 × 5 with 32 filters to give 32 × 32 × 32. Cost of the 1 × 1 layer: (32 × 32 × 16) × (1 × 1 × 200) = 3.2 million; cost of the 5 × 5 layer: (32 × 32 × 32) × (5 × 5 × 16) = 13.1 million; total computational cost: 16.3 million.]

Notice that the 32 × 32 × 200 input is processed through the two convolutional layers and outputs a volume of dimensions 32 × 32 × 32, which is the same as produced
without applying the dimensionality reduction layer. But here, instead of processing the 5 × 5 convolutional layer on the entire 200 channels of the input volume, we take this huge volume and shrink its representation to a much smaller intermediate volume that has only 16 channels.

Now, let's look at the computational cost involved in this operation and compare it to the 163 million multiplications we got before applying the reduce layer:

Computation
= operations in the 1 × 1 convolutional layer + operations in the 5 × 5 convolutional layer
= (32 × 32 × 16) × (1 × 1 × 200) + (32 × 32 × 32) × (5 × 5 × 16)
= 3.2 million + 13.1 million

The total number of multiplications in this operation is 16.3 million, which is a tenth of the 163 million multiplications we calculated without the reduce layers.

The 1 × 1 convolutional layer
The idea of the 1 × 1 convolutional layer is that it preserves the spatial dimensions (height and width) of the input volume but changes the number of channels of the volume (depth). For example, convolving a 6 × 6 × 32 input with a bank of 1 × 1 filters produces a 6 × 6 × (number of filters) output: 1 × 1 convolutional layers preserve the spatial dimensions but change the depth.

The 1 × 1 convolutional layers are also known as bottleneck layers, because the bottleneck is the smallest part of the bottle, and reduce layers similarly reduce the dimensionality of the network, making it look like a bottleneck.
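Both cost calculations above can be checked with a single helper (our own sketch; "multiplications" here counts one k × k × depth dot product per output value):

```python
def conv_mults(out_h, out_w, n_filters, k, in_depth):
    # each of the out_h * out_w * n_filters output values needs
    # k * k * in_depth multiplications
    return out_h * out_w * n_filters * (k * k * in_depth)

# 5x5 conv applied directly on the 200-channel input (the naive module)
naive = conv_mults(32, 32, 32, 5, 200)

# 1x1 reduce layer down to 16 channels, then the 5x5 conv on that thinner volume
reduced = (conv_mults(32, 32, 16, 1, 200)   # ~3.2 million
           + conv_mults(32, 32, 32, 5, 16)) # ~13.1 million
print(naive)    # 163840000
print(reduced)  # 16384000 -- exactly a tenth of the naive cost
```

In this example the saving is exactly 10x because the bottleneck shrinks the depth from 200 to 16 while adding only a cheap 1 × 1 layer.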
IMPACT OF DIMENSIONALITY REDUCTION ON NETWORK PERFORMANCE
You might be wondering whether shrinking the representation size so dramatically hurts the performance of the neural network. Szegedy et al. ran experiments and found that as long as you implement the reduce layer in moderation, you can shrink the representation size significantly without hurting performance, and save a lot of computation.

Now, let's put the reduce layers into action and build a new inception module with dimensionality reduction. We keep the same concept of concatenating the four routes from the naive representation, but we add a 1 × 1 convolutional reduce layer before the 3 × 3 and 5 × 5 convolutional layers to reduce their computational cost. We also add a 1 × 1 convolutional layer after the 3 × 3 max-pooling layer, because pooling layers don't reduce the depth of their inputs, so we need to apply a reduce layer to their output before the concatenation (figure 5.13).

We add dimensionality reduction prior to bigger convolutional layers to allow for significantly increasing the number of units at each stage without an uncontrolled blowup in computational complexity at later stages. Furthermore, the design follows the practical intuition that visual information should be processed at various scales and then aggregated, so that the next stage can abstract features from the different scales simultaneously.

RECAP OF INCEPTION MODULES
To summarize, if you are building a layer of a neural network and you don't want to have to decide what filter size to use in the convolutional layers or when to add pooling layers, the inception module lets you use them all and concatenate the depths of all the outputs.
This is called the naive representation of the inception module.

[Figure 5.13 Building an inception module with dimensionality reduction: the previous layer feeds four parallel routes - 1 × 1 convolutions; 1 × 1 convolutions followed by 3 × 3 convolutions; 1 × 1 convolutions followed by 5 × 5 convolutions; and 3 × 3 max pooling followed by 1 × 1 convolutions - whose outputs are depth-concatenated.]
We then run into the problem of computational cost that comes with using large filters. Here, we use a 1 × 1 convolutional layer, called the reduce layer, that reduces the computational cost significantly. We add reduce layers before the 3 × 3 and 5 × 5 convolutional layers and after the max-pooling layer to create an inception module with dimensionality reduction.

5.5.4 Inception architecture
Now that we understand the components of the inception module, we are ready to build the Inception network architecture. We use the dimensionality-reduction representation of the inception module, stack inception modules on top of each other, and add a 3 × 3 pooling layer in between for downsampling, as shown in figure 5.14.

[Figure 5.14 We build the Inception network by adding a stack of inception modules (each ending in a DepthConcat of its CONV and MaxPool routes) on top of each other, separated by 3 × 3 pooling layers.]

We can stack as many inception modules as we want to build a very deep convolutional network. In the original paper, the team built a specific incarnation of the
inception module and called it GoogLeNet. They used this network in their submission for the ILSVRC 2014 competition. The GoogLeNet architecture is shown in figure 5.15. As you can see, GoogLeNet uses a stack of nine inception modules in total, with a max-pooling layer every several blocks to reduce dimensionality. To simplify the implementation, we are going to break the GoogLeNet architecture down into three parts:
- Part A: Identical to the AlexNet and LeNet architectures; contains a series of convolutional and pooling layers.
- Part B: Contains nine inception modules stacked as follows: two inception modules + pooling layer + five inception modules + pooling layer + two inception modules.
- Part C: The classifier part of the network, consisting of the fully connected and softmax layers.

[Figure 5.15 The full GoogLeNet model consists of three parts: the first part has the classical CNN architecture like AlexNet and LeNet (7 × 7 CONV, 3 × 3 max pool, 1 × 1 CONV, 3 × 3 CONV, 3 × 3 max pool); the second part is a stack of inception modules (2, then 5, then 2) separated by 3 × 3 max-pooling layers; and the third part is the traditional classifier (global average pooling, FC, and softmax).]
5.5.5 GoogLeNet in Keras
Now, let's implement the GoogLeNet architecture in Keras (figure 5.16). Notice that the inception module takes the features from the previous module as input, passes them through four routes, concatenates the depths of the outputs of all four routes, and then passes the concatenated output to the next module. The four routes are as follows:
- 1 × 1 convolutional layer
- 1 × 1 convolutional layer + 3 × 3 convolutional layer
- 1 × 1 convolutional layer + 5 × 5 convolutional layer
- 3 × 3 pooling layer + 1 × 1 convolutional layer

[Figure 5.16 The inception module of GoogLeNet: an inception module with dimensionality reduction, as in figure 5.13.]

First we'll build the inception_module function. It takes the number of filters of each convolutional layer as an argument and returns the concatenated output:

def inception_module(x, filters_1x1, filters_3x3_reduce, filters_3x3,
                     filters_5x5_reduce, filters_5x5, filters_pool_proj,
                     name=None):
    # 1 x 1 route: takes its input directly from the previous layer
    conv_1x1 = Conv2D(filters_1x1, kernel_size=(1, 1), padding='same',
                      activation='relu', kernel_initializer=kernel_init,
                      bias_initializer=bias_init)(x)

    # 3 x 3 route = 1 x 1 CONV + 3 x 3 CONV
    pre_conv_3x3 = Conv2D(filters_3x3_reduce, kernel_size=(1, 1), padding='same',
                          activation='relu', kernel_initializer=kernel_init,
                          bias_initializer=bias_init)(x)
    conv_3x3 = Conv2D(filters_3x3, kernel_size=(3, 3), padding='same',
                      activation='relu',
                      kernel_initializer=kernel_init,
                      bias_initializer=bias_init)(pre_conv_3x3)

    # 5 x 5 route = 1 x 1 CONV + 5 x 5 CONV
    pre_conv_5x5 = Conv2D(filters_5x5_reduce, kernel_size=(1, 1), padding='same',
                          activation='relu', kernel_initializer=kernel_init,
                          bias_initializer=bias_init)(x)
    conv_5x5 = Conv2D(filters_5x5, kernel_size=(5, 5), padding='same',
                      activation='relu', kernel_initializer=kernel_init,
                      bias_initializer=bias_init)(pre_conv_5x5)

    # pool route = POOL + 1 x 1 CONV
    pool_proj = MaxPool2D((3, 3), strides=(1, 1), padding='same')(x)
    pool_proj = Conv2D(filters_pool_proj, (1, 1), padding='same',
                       activation='relu', kernel_initializer=kernel_init,
                       bias_initializer=bias_init)(pool_proj)

    # concatenate the depths of the four routes
    output = concatenate([conv_1x1, conv_3x3, conv_5x5, pool_proj], axis=3,
                         name=name)
    return output

GOOGLENET ARCHITECTURE
Now that the inception_module function is ready, let's build the GoogLeNet architecture from figure 5.15.
[Figure 5.17 Hyperparameters implemented by Szegedy et al. in the original Inception paper: for each layer (the initial convolutions and max pools, inception modules 3a-5b, then avg pool, dropout (40%), linear, and softmax), the table lists the patch size/stride, output size (112 × 112 × 64 after the first convolution, 56 × 56 × 192 after the second, 28 × 28 × 192 entering inception 3a, down to 7 × 7 × 1024 after inception 5b), depth, the #1 × 1, #3 × 3 reduce, #3 × 3, #5 × 5 reduce, #5 × 5, and pool proj filter counts, and the parameter and operation counts. Parts A, B, and C of our implementation are marked on the table.]

To get the values of the inception_module function's arguments, we will go through figure 5.17, which represents the hyperparameters as
implemented by Szegedy et al. in the original paper. (Note that "#3 × 3 reduce" and "#5 × 5 reduce" in the figure represent the 1 × 1 filters in the reduction layers used before the 3 × 3 and 5 × 5 convolutional layers.)

Now, let's go through the implementations of parts A, B, and C.

PART A: BUILDING THE BOTTOM PART OF THE NETWORK
Let's build the bottom part of the network. This part consists of a 7 × 7 convolutional layer ⇒ 3 × 3 pooling layer ⇒ 1 × 1 convolutional layer ⇒ 3 × 3 convolutional layer ⇒ 3 × 3 pooling layer, as you can see in figure 5.18.

[Figure 5.18 The bottom part of the network: Input ⇒ CONV 7 × 7 + 2(S) ⇒ MaxPool 3 × 3 + 2(S) ⇒ LocalRespNorm ⇒ CONV 1 × 1 + 1(V) ⇒ CONV 3 × 3 + 1(S) ⇒ LocalRespNorm ⇒ MaxPool 3 × 3 + 2(S).]

In the LocalRespNorm layer, as in AlexNet, local response normalization is used to help speed up convergence. Nowadays, batch normalization is used instead.

Here is the Keras code for part A:

# input layer with size = 224 x 224 x 3
input_layer = Input(shape=(224, 224, 3))

kernel_init = keras.initializers.glorot_uniform()
bias_init = keras.initializers.Constant(value=0.2)

x = Conv2D(64, (7, 7), padding='same', strides=(2, 2), activation='relu',
           name='conv_1_7x7/2', kernel_initializer=kernel_init,
           bias_initializer=bias_init)(input_layer)
x = MaxPool2D((3, 3), padding='same', strides=(2, 2), name='max_pool_1_3x3/2')(x)
x = BatchNormalization()(x)
x = Conv2D(64, (1, 1), padding='same', strides=(1, 1), activation='relu')(x)
x = Conv2D(192, (3, 3), padding='same', strides=(1, 1), activation='relu')(x)
x = BatchNormalization()(x)
x = MaxPool2D((3, 3), padding='same', strides=(2, 2))(x)

PART B: BUILDING THE INCEPTION MODULES AND MAX-POOLING LAYERS
To build inception modules 3a and 3b and the first max-pooling layer, we use table 5.2 to start.

Table 5.2 Inception modules 3a and 3b
Type            #1 × 1   #3 × 3 reduce   #3 × 3   #5 × 5 reduce   #5 × 5   Pool proj
Inception (3a)  64       96              128      16              32       32
Inception (3b)  128      128             192      32              96       64

The code is as follows:

x = inception_module(x, filters_1x1=64, filters_3x3_reduce=96, filters_3x3=128,
                     filters_5x5_reduce=16, filters_5x5=32, filters_pool_proj=32,
                     name='inception_3a')

x = inception_module(x, filters_1x1=128, filters_3x3_reduce=128, filters_3x3=192,
                     filters_5x5_reduce=32, filters_5x5=96, filters_pool_proj=64,
                     name='inception_3b')

x = MaxPool2D((3, 3), padding='same', strides=(2, 2))(x)

Similarly, let's create inception modules 4a, 4b, 4c, 4d, and 4e and the max-pooling layer:

x = inception_module(x, filters_1x1=192, filters_3x3_reduce=96, filters_3x3=208,
                     filters_5x5_reduce=16, filters_5x5=48, filters_pool_proj=64,
                     name='inception_4a')

x = inception_module(x, filters_1x1=160, filters_3x3_reduce=112, filters_3x3=224,
                     filters_5x5_reduce=24, filters_5x5=64, filters_pool_proj=64,
                     name='inception_4b')

x = inception_module(x, filters_1x1=128, filters_3x3_reduce=128, filters_3x3=256,
                     filters_5x5_reduce=24, filters_5x5=64, filters_pool_proj=64,
                     name='inception_4c')

x = inception_module(x, filters_1x1=112, filters_3x3_reduce=144, filters_3x3=288,
                     filters_5x5_reduce=32, filters_5x5=64, filters_pool_proj=64,
                     name='inception_4d')

x = inception_module(x, filters_1x1=256, filters_3x3_reduce=160, filters_3x3=320,
                     filters_5x5_reduce=32, filters_5x5=128, filters_pool_proj=128,
                     name='inception_4e')

x = MaxPool2D((3, 3), padding='same', strides=(2, 2), name='max_pool_4_3x3/2')(x)
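The inception_module helper used in the calls above is defined earlier in the chapter. A minimal sketch of what such a function might look like (parameter names are taken from the calls above; the branch details are assumed from the standard GoogLeNet design, not reproduced verbatim from the book) is:

```python
from tensorflow.keras.layers import Conv2D, MaxPool2D, concatenate

def inception_module(x, filters_1x1, filters_3x3_reduce, filters_3x3,
                     filters_5x5_reduce, filters_5x5, filters_pool_proj, name=None):
    # Branch 1: 1 x 1 convolution
    conv_1x1 = Conv2D(filters_1x1, (1, 1), padding='same', activation='relu')(x)
    # Branch 2: 1 x 1 reduction followed by 3 x 3 convolution
    conv_3x3 = Conv2D(filters_3x3_reduce, (1, 1), padding='same', activation='relu')(x)
    conv_3x3 = Conv2D(filters_3x3, (3, 3), padding='same', activation='relu')(conv_3x3)
    # Branch 3: 1 x 1 reduction followed by 5 x 5 convolution
    conv_5x5 = Conv2D(filters_5x5_reduce, (1, 1), padding='same', activation='relu')(x)
    conv_5x5 = Conv2D(filters_5x5, (5, 5), padding='same', activation='relu')(conv_5x5)
    # Branch 4: 3 x 3 max pooling followed by 1 x 1 projection
    pool_proj = MaxPool2D((3, 3), strides=(1, 1), padding='same')(x)
    pool_proj = Conv2D(filters_pool_proj, (1, 1), padding='same',
                       activation='relu')(pool_proj)
    # Concatenate all four branches along the channel axis
    return concatenate([conv_1x1, conv_3x3, conv_5x5, pool_proj], axis=3, name=name)
```

Note that the output depth is simply the sum of the four branch depths; for module 3a that is 64 + 128 + 32 + 32 = 256 channels.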
Now, let's create modules 5a and 5b:

x = inception_module(x, filters_1x1=256, filters_3x3_reduce=160, filters_3x3=320,
                     filters_5x5_reduce=32, filters_5x5=128, filters_pool_proj=128,
                     name='inception_5a')

x = inception_module(x, filters_1x1=384, filters_3x3_reduce=192, filters_3x3=384,
                     filters_5x5_reduce=48, filters_5x5=128, filters_pool_proj=128,
                     name='inception_5b')

PART C: BUILDING THE CLASSIFIER PART
In their experiments, Szegedy et al. found that adding a 7 × 7 average pooling layer improved the top-1 accuracy by about 0.6%. They then added a dropout layer with 40% probability to reduce overfitting:

x = AveragePooling2D(pool_size=(7, 7), strides=1, padding='valid')(x)
x = Dropout(0.4)(x)
x = Dense(10, activation='softmax', name='output')(x)

5.5.6 Learning hyperparameters
The team used an SGD optimizer with 0.9 momentum. They also implemented a fixed learning rate decay schedule: a 4% drop every 8 epochs. Here is an example of how to implement training specifications similar to the paper's:

epochs = 25
initial_lrate = 0.01

# Implements the learning rate decay function
def decay(epoch, steps=100):
    initial_lrate = 0.01
    drop = 0.96
    epochs_drop = 8
    lrate = initial_lrate * math.pow(drop, math.floor((1 + epoch) / epochs_drop))
    return lrate

lr_schedule = LearningRateScheduler(decay, verbose=1)

sgd = SGD(lr=initial_lrate, momentum=0.9, nesterov=False)

model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

model.fit(X_train, y_train, batch_size=256, epochs=epochs,
          validation_data=(X_test, y_test), callbacks=[lr_schedule], verbose=2,
          shuffle=True)

5.5.7 Inception performance on the CIFAR dataset
GoogLeNet was the winner of the ILSVRC 2014 competition. It achieved a top-5 error rate of 6.67%, which was very close to human-level performance and much better than previous CNNs like AlexNet and VGGNet.
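To see how the decay schedule from section 5.5.6 behaves, the decay function can be evaluated for a few epochs (a quick standalone check, not part of the book's training code):

```python
import math

def decay(epoch):
    # Same schedule as in section 5.5.6: a 4% drop (factor 0.96) every 8 epochs
    initial_lrate = 0.01
    drop = 0.96
    epochs_drop = 8
    return initial_lrate * math.pow(drop, math.floor((1 + epoch) / epochs_drop))

# Epochs 0-6 keep the initial rate of 0.01; the first 4% drop (to 0.0096)
# happens at epoch 7, the second (to 0.009216) at epoch 15, and so on.
rates = [decay(e) for e in range(16)]
```

This confirms the "4% every 8 epochs" description: the rate stays flat between drops because of the floor in the exponent.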
5.6 ResNet
The Residual Neural Network (ResNet) was developed in 2015 by a group from the Microsoft Research team (Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, "Deep Residual Learning for Image Recognition," 2015, http://arxiv.org/abs/1512.03385). They introduced a novel residual module architecture with skip connections. The network also features heavy batch normalization for the hidden layers. This technique allowed the team to train very deep neural networks with 50, 101, and 152 weight layers while still having lower complexity than smaller networks like VGGNet (19 layers). ResNet achieved a top-5 error rate of 3.57% in the ILSVRC 2015 competition, which beat the performance of all prior ConvNets.

5.6.1 Novel features of ResNet
Looking at how neural network architectures evolved from LeNet, AlexNet, VGGNet, and Inception, you might have noticed that the deeper the network, the larger its learning capacity, and the better it extracts features from images. This mainly happens because very deep networks are able to represent very complex functions, which allows the network to learn features at many different levels of abstraction, from edges (at the lower layers) to very complex features (at the deeper layers).

Earlier in this chapter, we saw deep neural networks like VGGNet-19 (19 layers) and GoogLeNet (22 layers). Both performed very well in the ImageNet challenge. But can we build even deeper networks? We learned from chapter 4 that one downside of adding too many layers is that doing so makes the network more prone to overfit the training data. That is not a fundamental obstacle, because we can use regularization techniques like dropout, L2 regularization, and batch normalization to avoid overfitting. So, if we can take care of the overfitting problem, wouldn't we want to build networks that are 50, 100, or even 150 layers deep? The answer is yes. We definitely should try to build very deep neural networks. We just need to fix one other problem to unlock the ability to build super-deep networks: a phenomenon called vanishing gradients.

Vanishing and exploding gradients
The problem with very deep networks is that the signal required to change the weights becomes very small at the earlier layers. To understand why, consider the gradient descent process explained in chapter 2. As the network backpropagates the gradient of the error from the final layer back to the first layer, it is multiplied by the weight matrix at each step; thus the gradient can decrease exponentially quickly to zero, leading to a vanishing gradient phenomenon that prevents the earlier layers from learning. As a result, the network's performance gets saturated or even starts to degrade rapidly.

In other cases, the gradient grows exponentially quickly and "explodes" to take very large values. This phenomenon is called exploding gradients.
To solve the vanishing gradient problem, He et al. created a shortcut that allows the gradient to be directly backpropagated to earlier layers. These shortcuts are called skip connections: they flow information from earlier layers in the network to later layers, creating an alternate shortcut path for the gradient to flow through. Another important benefit of skip connections is that they allow the model to learn an identity function, which ensures that a layer will perform at least as well as the previous layer (figure 5.19).

[Figure 5.19 Traditional network without skip connections (left); network with a skip connection (right).]

At left in figure 5.19 is the traditional stacking of convolutional layers one after the other. On the right, we still stack convolutional layers as before, but we also add the original input to the output of the convolutional block. This is a skip connection. We then add both signals: skip connection + main path.

Note that the shortcut arrow points to the end of the second convolutional layer, not after it. The reason is that we add both paths before we apply the ReLU activation function of this layer. As you can see in figure 5.20, the x signal is passed along the shortcut path and then added to the main path, f(x). Then we apply the ReLU activation to f(x) + x to produce the output signal: relu(f(x) + x).

[Figure 5.20 Adding the two paths (shortcut path = x, main path = f(x)) and applying the ReLU activation, relu(f(x) + x), to solve the vanishing gradient problem that usually comes with very deep networks.]
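The residual output relu(f(x) + x) is easy to sketch numerically (a toy NumPy check, not the book's code). In particular, if the main path collapses to f(x) = 0, the block simply passes a non-negative input through unchanged, which is exactly the identity behavior described above:

```python
import numpy as np

def relu(z):
    # Element-wise ReLU activation
    return np.maximum(z, 0)

def residual_output(f_x, x):
    # Add the main path f(x) and the shortcut path x, then apply ReLU
    return relu(f_x + x)

x = np.array([1.0, 2.0, 3.0])    # non-negative activations entering the block
f_x = np.zeros_like(x)           # main path learned f(x) = 0
out = residual_output(f_x, x)    # identical to x: the block acts as an identity
```

This is why a residual layer can do no worse than its input: the worst the main path can do is contribute nothing.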
The code implementation of the skip connection is straightforward (note that 'same' padding is needed so that the two paths keep the same spatial dimensions and can be added):

X_shortcut = X    # stores the value of the shortcut, equal to the input x

# Main path operations: CONV + ReLU + CONV
X = Conv2D(filters=F1, kernel_size=(3, 3), strides=(1, 1), padding='same')(X)
X = Activation('relu')(X)
X = Conv2D(filters=F1, kernel_size=(3, 3), strides=(1, 1), padding='same')(X)

X = Add()([X, X_shortcut])    # adds both paths together
X = Activation('relu')(X)     # applies the ReLU activation function

This combination of the skip connection and convolutional layers is called a residual block. Similar to the Inception network, ResNet is composed of a series of these residual building blocks that are stacked on top of each other (figure 5.21).

[Figure 5.21 Classical CNN architecture (left). The Inception network consists of a set of inception modules (middle). The residual network consists of a set of residual blocks (right).]

From the figure, you can observe the following:

- Feature extractors —To build the feature extractor part of ResNet, we start with a convolutional layer and a pooling layer and then stack residual blocks on top of each other to build the network. When designing a ResNet network, we can add as many residual blocks as we want to build even deeper networks.
- Classifiers —The classification part is still the same as we learned for other networks: fully connected layers followed by a softmax.

Now that you know what a skip connection is and you are familiar with the high-level architecture of ResNet, let's unpack residual blocks to understand how they work.

5.6.2 Residual blocks
A residual module consists of two branches:

- Shortcut path (figure 5.22) —Connects the input to the addition with the second branch.
- Main path —A series of convolutions and activations. The main path consists of three convolutional layers with ReLU activations. We also add batch normalization to each convolutional layer to reduce overfitting and speed up training. The main path architecture looks like this: [CONV ⇒ BN ⇒ ReLU] × 3.

Similar to what we explained earlier, the shortcut path is added to the main path right before the activation function of the last convolutional layer. Then we apply the ReLU function after adding the two paths.

Notice that there are no pooling layers in the residual block. Instead, He et al. decided to do dimension downsampling using bottleneck 1 × 1 convolutional layers, similar to the Inception network. So, each residual block starts with a 1 × 1 convolutional layer to downsample the input volume depth, followed by a 3 × 3 convolutional layer and another 1 × 1 convolutional layer to restore the output depth. This is a good technique to keep control of the volume dimensions across many layers. This configuration is called a bottleneck residual block.

When we stack residual blocks on top of each other, the volume dimensions change from one block to another. And as you might recall from the matrices introduction in chapter 2, to be able to perform matrix addition, the matrices must have the same dimensions. To fix this problem, we need to downsample the shortcut path as well, before merging both paths.
We do that by adding a bottleneck layer (1 × 1 convolutional layer + batch normalization) to the shortcut path, as shown in figure 5.23. This is called the reduce shortcut.

[Figure 5.22 The residual block. Main path f(x): CONV2D ⇒ BN ⇒ ReLU ⇒ CONV2D ⇒ BN ⇒ ReLU ⇒ CONV2D ⇒ BN; shortcut path: x. The output of the main path is added to the input value through the shortcut before they are fed to the ReLU function.]

[Figure 5.23 Bottleneck residual block with reduce shortcut. Main path f(x): 1 × 1 conv ⇒ BN ⇒ ReLU ⇒ 3 × 3 conv ⇒ BN ⇒ ReLU ⇒ 1 × 1 conv ⇒ BN; shortcut path: x ⇒ 1 × 1 conv ⇒ BN. To reduce the input dimensionality, we add a bottleneck layer (1 × 1 convolutional layer + batch normalization) to the shortcut path.]

Before we jump into the code implementation, let's recap the discussion of residual blocks:

- Residual blocks contain two paths: the shortcut path and the main path.
- The main path consists of three convolutional layers, each followed by a batch normalization layer:
  - 1 × 1 convolutional layer
  - 3 × 3 convolutional layer
  - 1 × 1 convolutional layer
- There are two ways to implement the shortcut path:
  - Regular shortcut —Add the input directly to the main path.
  - Reduce shortcut —Add a convolutional layer in the shortcut path before merging with the main path.

When we implement the ResNet network, we will use both regular and reduce shortcuts. This will be clearer when you see the full implementation. For now, we will implement a bottleneck_residual_block function that takes a reduce Boolean argument. When reduce is True, the function uses the reduce shortcut; otherwise, it implements the regular shortcut. The bottleneck_residual_block function takes the following arguments:

- X —Input tensor of shape (number of samples, height, width, channel)
- kernel_size —Integer specifying the size of the middle convolutional layer's window in the main path
- filters —Python list of integers defining the number of filters in the convolutional layers of the main path
- reduce —Boolean: True selects the reduce shortcut
- s —Integer (strides)

The function returns X: the output of the residual block, which is a tensor of shape (height, width, channel).

The function is as follows:

def bottleneck_residual_block(X, kernel_size, filters, reduce=False, s=2):
    # Unpacks the list to retrieve the filters of each convolutional layer
    F1, F2, F3 = filters

    # Saves the input value to add back to the main path later
    X_shortcut = X

    if reduce:
        # To reduce the spatial size, applies a 1 x 1 convolutional layer to
        # the shortcut path; both paths must use the same strides
        X_shortcut = Conv2D(filters=F3, kernel_size=(1, 1),
                            strides=(s, s))(X_shortcut)
        X_shortcut = BatchNormalization(axis=3)(X_shortcut)

        # Sets the strides of the first convolutional layer to match
        # the shortcut strides
        X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(s, s),
                   padding='valid')(X)
        X = BatchNormalization(axis=3)(X)
        X = Activation('relu')(X)
    else:
        # First component of main path
        X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(1, 1),
                   padding='valid')(X)
        X = BatchNormalization(axis=3)(X)
        X = Activation('relu')(X)

    # Second component of main path
    X = Conv2D(filters=F2, kernel_size=kernel_size, strides=(1, 1),
               padding='same')(X)
    X = BatchNormalization(axis=3)(X)
    X = Activation('relu')(X)

    # Third component of main path
    X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1),
               padding='valid')(X)
    X = BatchNormalization(axis=3)(X)

    # Final step: adds the shortcut value to the main path and passes it
    # through a ReLU activation
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)

    return X

5.6.3 ResNet implementation in Keras
You've learned a lot about residual blocks so far. Let's add these blocks on top of each other to build the full ResNet architecture. Here, we will implement ResNet50: a version of the ResNet architecture that contains 50 weight layers (hence the name). You
can use the same approach to develop ResNet with 18, 34, 101, and 152 layers by following the architecture in figure 5.24 from the original paper.

[Figure 5.24 Architecture of several ResNet variations (18-, 34-, 50-, 101-, and 152-layer) from the original paper. For the 50-layer version: conv1 is a 7 × 7, 64 convolution with stride 2 (output 112 × 112), followed by 3 × 3 max pooling with stride 2; conv2_x stacks [1 × 1, 64; 3 × 3, 64; 1 × 1, 256] × 3; conv3_x stacks [1 × 1, 128; 3 × 3, 128; 1 × 1, 512] × 4; conv4_x stacks [1 × 1, 256; 3 × 3, 256; 1 × 1, 1024] × 6; conv5_x stacks [1 × 1, 512; 3 × 3, 512; 1 × 1, 2048] × 3; the network ends with average pooling, a 1000-d fully connected layer, and softmax, at 3.8 × 10^9 FLOPs.]

We know from the previous section that each residual block contains three convolutional layers, so we can compute the total number of weight layers inside the ResNet50 network as follows:

- Stage 1: 7 × 7 convolutional layer
- Stage 2: 3 residual blocks, each containing [1 × 1 convolutional layer + 3 × 3 convolutional layer + 1 × 1 convolutional layer] = 9 convolutional layers
- Stage 3: 4 residual blocks = total of 12 convolutional layers
- Stage 4: 6 residual blocks = total of 18 convolutional layers
- Stage 5: 3 residual blocks = total of 9 convolutional layers
- Fully connected softmax layer

When we sum all these layers together, we get a total of 50 weight layers that describe the architecture of ResNet50. Similarly, you can compute the number of weight layers in the other ResNet versions.

NOTE In the following implementation, we use the residual block with reduce shortcut at the beginning of each stage to reduce the spatial size of the output
from the previous layer. Then we use the regular shortcut for the remaining layers of that stage. Recall from our implementation of the bottleneck_residual_block function that we set the argument reduce to True to apply the reduce shortcut.

Now let's follow the 50-layer architecture from figure 5.24 to build the ResNet50 network. We build a ResNet50 function that takes input_shape and classes as arguments and outputs the model:

def ResNet50(input_shape, classes):
    # Defines the input as a tensor with shape input_shape
    X_input = Input(input_shape)

    # Stage 1
    X = Conv2D(64, (7, 7), strides=(2, 2), name='conv1')(X_input)
    X = BatchNormalization(axis=3, name='bn_conv1')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((3, 3), strides=(2, 2))(X)

    # Stage 2
    X = bottleneck_residual_block(X, 3, [64, 64, 256], reduce=True, s=1)
    X = bottleneck_residual_block(X, 3, [64, 64, 256])
    X = bottleneck_residual_block(X, 3, [64, 64, 256])

    # Stage 3
    X = bottleneck_residual_block(X, 3, [128, 128, 512], reduce=True, s=2)
    X = bottleneck_residual_block(X, 3, [128, 128, 512])
    X = bottleneck_residual_block(X, 3, [128, 128, 512])
    X = bottleneck_residual_block(X, 3, [128, 128, 512])

    # Stage 4
    X = bottleneck_residual_block(X, 3, [256, 256, 1024], reduce=True, s=2)
    X = bottleneck_residual_block(X, 3, [256, 256, 1024])
    X = bottleneck_residual_block(X, 3, [256, 256, 1024])
    X = bottleneck_residual_block(X, 3, [256, 256, 1024])
    X = bottleneck_residual_block(X, 3, [256, 256, 1024])
    X = bottleneck_residual_block(X, 3, [256, 256, 1024])

    # Stage 5
    X = bottleneck_residual_block(X, 3, [512, 512, 2048], reduce=True, s=2)
    X = bottleneck_residual_block(X, 3, [512, 512, 2048])
    X = bottleneck_residual_block(X, 3, [512, 512, 2048])

    # AVGPOOL
    X = AveragePooling2D((1, 1))(X)

    # Output layer
    X = Flatten()(X)
    X = Dense(classes, activation='softmax', name='fc' + str(classes))(X)

    # Creates the model
    model = Model(inputs=X_input, outputs=X, name='ResNet50')

    return model
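The stage structure of the ResNet50 function can be checked against the 50-layer count computed earlier with a line of arithmetic (a quick sanity check, not from the book):

```python
# One 7 x 7 conv (stage 1), then 3 + 4 + 6 + 3 bottleneck blocks (stages 2-5)
# of 3 convolutional layers each, then one fully connected layer
conv1 = 1
blocks_per_stage = [3, 4, 6, 3]          # stages 2, 3, 4, 5 of ResNet50
conv_layers = sum(3 * n for n in blocks_per_stage)
fc = 1
total = conv1 + conv_layers + fc         # 50 weight layers in all
```

The same arithmetic with the block counts from figure 5.24 reproduces the other variants' layer totals.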
5.6.4 Learning hyperparameters
He et al. followed a training procedure similar to that of AlexNet: the training is carried out using mini-batch GD with momentum of 0.9. The team set the learning rate to start at 0.1 and then decreased it by a factor of 10 when the validation error stopped improving. They also used L2 regularization with a weight decay of 0.0001 (not implemented in this chapter, for simplicity). As you saw in the earlier implementation, they used batch normalization right after each convolutional layer and before the activation to speed up training:

from keras.callbacks import ReduceLROnPlateau

# Sets the training parameters
epochs = 200
batch_size = 256

# min_lr is the lower bound on the learning rate, and factor is the factor
# by which the learning rate will be reduced
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=np.sqrt(0.1),
                              patience=5, min_lr=0.5e-6)

# Compiles the model
model.compile(loss='categorical_crossentropy',
              optimizer=SGD(lr=0.1, momentum=0.9),
              metrics=['accuracy'])

# Trains the model, passing the reduce_lr callback to the training method
model.fit(X_train, Y_train, batch_size=batch_size,
          validation_data=(X_test, Y_test),
          epochs=epochs, callbacks=[reduce_lr])

5.6.5 ResNet performance on the CIFAR dataset
Similar to the other networks explained in this chapter, the performance of ResNet models is benchmarked based on their results in the ILSVRC competition. ResNet-152 won first place in the 2015 classification competition with a top-5 error rate of 4.49% with a single model and 3.57% using an ensemble of models. This was much better than all the other networks, such as GoogLeNet (Inception), which achieved a top-5 error rate of 6.67%. ResNet also won first place in many object detection and image localization challenges, as we will see in chapter 7. More importantly, the residual block concept in ResNet opened the door to new possibilities for efficiently training super-deep neural networks with hundreds of layers.

Using open source implementations
Now that you have learned some of the most popular CNN architectures, I want to share some practical advice on how to use them. It turns out that many of these neural networks are difficult or finicky to replicate, because details of hyperparameter tuning such as learning rate decay make a real difference to performance. DL researchers can even have a hard time replicating someone else's polished work based on reading their paper.
Fortunately, many DL researchers routinely open source their work on the internet. A simple search for the network implementation on GitHub will point you toward implementations in several DL libraries that you can clone and train. If you can locate the author's own implementation, you can usually get going much faster than by trying to re-implement the network from scratch, although sometimes re-implementing from scratch can be a good exercise, like what we did earlier.

Summary
- Classical CNN architectures share the same basic design of stacking convolutional and pooling layers on top of each other, with different configurations for their layers.
- LeNet consists of five weight layers: three convolutional and two fully connected layers, with a pooling layer after the first and second convolutional layers.
- AlexNet is deeper than LeNet and contains eight weight layers: five convolutional and three fully connected layers.
- VGGNet solved the problem of setting up the hyperparameters of the convolutional and pooling layers by creating a uniform configuration for them to be used across the entire network.
- Inception tried to solve the same problem as VGGNet: instead of having to decide which filter size to use and where to add the pooling layer, Inception says, "Let's use them all."
- ResNet followed the same approach as Inception and created residual blocks that, when stacked on top of each other, form the network architecture. ResNet attempted to solve the vanishing gradient problem that made learning plateau or degrade when training very deep neural networks. The ResNet team introduced skip connections that allow information to flow from earlier layers in the network to later layers, creating an alternate shortcut path for the gradient to flow through. The fundamental breakthrough with ResNet was that it allowed us to train extremely deep neural networks with hundreds of layers.
Transfer learning

This chapter covers
- Understanding the transfer learning technique
- Using a pretrained network to solve your problem
- Understanding network fine-tuning
- Exploring open source image datasets for training a model
- Building two end-to-end transfer learning projects

Transfer learning is one of the most important techniques of deep learning. When building a vision system to solve a specific problem, you usually need to collect and label a huge amount of data to train your network. You can build convnets, as you learned in chapter 3, and start the training from scratch; that is an acceptable approach. But what if you could download an existing neural network that someone else has already tuned and trained, and use it as a starting point for your new task? Transfer learning allows you to do just that. You can download an open source model that someone else has already trained and tuned, and use its optimized parameters (weights) as a starting point to train your model on a smaller dataset for a given task. This way, you can train your network a lot faster and achieve better results.
DL researchers and practitioners have posted many research papers and open source projects of trained algorithms that they have worked on for weeks and months and trained on GPUs to get state-of-the-art results on an array of problems. Often, the fact that someone else has done this work and gone through the painful high-performance research process means you can download an open source architecture and its weights and use them as a good start for your own neural network. This is transfer learning: the transfer of knowledge from a network pretrained in one domain to your own problem in a different domain.

In this chapter, I will explain transfer learning and outline the reasons why using it is important. I will also detail different transfer learning scenarios and how to use them. Finally, we will see examples of using transfer learning to solve real-world problems. Ready? Let's get started!

6.1 What problems does transfer learning solve?
As the name implies, transfer learning means transferring what a neural network has learned from being trained on a specific dataset to another related problem (figure 6.1). Transfer learning is currently very popular in the field of DL because it enables you to train deep neural networks with comparatively little data in a short training time. The importance of transfer learning comes from the fact that in most real-world problems, we typically do not have millions of labeled images to train such complex models.

The idea is pretty straightforward. First, we train a deep neural network on a very large amount of data. During the training process, the network extracts a large number of useful features that can be used to detect objects in this dataset. We then transfer these extracted features (feature maps) to a new network and train this new network on our new dataset to solve a different problem.
Transfer learning is a great way to short-\ncut the process of collecting and training huge amounts of data simply by reusing theKnowledge\n(extracted features)\nFigure 6.1 Transfer learning is the transfer of \nthe knowledge that the network has acquired \nfrom one task to a new task. In the context of \nneural networks, the acquired knowledge is the \nextracted features.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 261} |
259 | page_content='242 CHAPTER 6Transfer learning\nmodel weights from pretrained models that were developed for standard CV bench-\nmark datasets, such as the ImageNet image-recognition tasks. Top-performing models\ncan be downloaded and used directly, or integrated into a new model for your own\nCV problems.\n The question is, why would we want to use transfer learning? Why don’t we just\ntrain a neural network directly on our new dataset to solve our problem? To answer\nthis question, we first need to know the main problems that transfer learning solves.\nWe’ll discuss those now; then I’ll go into the details of how transfer learning works\nand the different approaches to apply it.\n Deep neural networks are immensely data-hungry and rely on huge amounts of\nlabeled data to achieve high performance. In practice, very few people train an\nentire convolutional network from scratch. This is due to two main problems:\n\uf0a1Data problem —Training a network from scratch requires a lot of data in\norder to get decent results, which is not feasible in most cases. It is relatively\nrare to have a dataset of sufficient size to solve your problem. It is also very\nexpensive to acquire and label data: this is mostly a manual process done by\nhumans capturing images and labeling them one by one, which makes it a\nnontrivial task.\n\uf0a1Computation problem —Even if you are able to acquire hundreds of thousands\nof images for your problem, it is computationally very expensive to train a\ndeep neural network on millions of images because doing so usually requires\nweeks of training on multiple GPUs. Also keep in mind that training a neural\nnetwork is an iterative process. 
So, even if you happen to have the computing power required to train a complex neural network, spending weeks experimenting with different hyperparameters in each training iteration until you finally reach satisfactory results will make the project very costly.

Additionally, an important benefit of using transfer learning is that it helps the model generalize its learnings and avoid overfitting. When you deploy a DL model in the wild, it is faced with countless conditions it may never have seen before and does not know how to deal with; each client has its own preferences and generates data that is different from the data used for training. The model is asked to perform well on many tasks that are related to, but not exactly the same as, the task it was trained for.

For example, when you deploy a car classifier model to production, people usually have different camera types, each with its own image quality and resolution. Images can also be taken in different weather conditions. These image nuances vary from one user to another. To train the model on all these different cases, you either have to account for every case and acquire a lot of images to train the network on, or try to build a more robust model that is better at generalizing to new use cases. This is what transfer learning does. Since it is not realistic to account for all the cases the model may face in the wild, transfer learning can help us deal with novel scenarios. It is necessary for production-scale use of DL that goes beyond tasks
260 | page_content='243 What is transfer learning?\nand domains where labeled data is plentiful. Transferring features extracted from\nanother network that has seen millions of images will make our model less prone to\noverfit and help it generalize better when faced with novel scenarios. You will be\nable to fully grasp this concept when we explain how transfer learning works in the\nfollowing sections.\n6.2 What is transfer learning?\nArmed with the understanding of the problems that transfer learning solves, let’s look\nat its formal definition. Transfer learning is the transfer of the knowledge (feature\nmaps) that the network has acquired from one task, where we have a large amount of\ndata, to a new task where data is not abundantly available. It is generally used where a\nneural network model is first trained on a problem similar to the problem that is\nbeing solved. One or more layers from the trained model are then used in a new\nmodel trained on the problem of interest.\n As we discussed earlier, to train an image classifier that will achieve image\nclassification accuracy near to or above the human level, we’ll need massive amounts\nof data, large compute power, and lots of time on our hands. I’m sure most of us\ndon’t have all these things. Knowing that this would be a problem for people with\nlittle-to-no resources, researchers built state-of-the-art models that were trained on\nlarge image datasets like ImageNet, MS COCO, Open Images, and so on, and then\nshared their models with the general public for reuse. This means you should never\nhave to train an image classifier from scratch again, unless you have an exception-\nally large dataset and a very large computation budget to train everything from\nscratch by yourself. Even if that is the case, you might be better off using transfer\nlearning to fine-tune the pretrained network on your large dataset. 
Later in this\nchapter, we will discuss the different transfer learning approaches, and you will\nunderstand what fine-tuning means and why it is better to use transfer learning even\nwhen you have a large dataset. We will also talk briefly about some of the popular\ndatasets mentioned here.\nNOTE When we talk about training a model from scratch, we mean that the\nmodel starts with zero knowledge of the world, and the model’s structure and\nparameters begin as random guesses. Practically speaking, this means the\nweights of the model are randomly initialized, and they need to go through a\ntraining process to be optimized.\nThe intuition behind transfer learning is that if a model is trained on a large and gen-\neral enough dataset, this model will effectively serve as a generic representation of the\nvisual world. We can then use the feature maps it has learned, without having to train\non a large dataset, by transferring what it learned to our model and using that as a\nbase starting model for our own task.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 263} |
261 | page_content='244 CHAPTER 6 Transfer learning
In transfer learning, we first train a base network on a base dataset and task, and then we repurpose the learned features, or transfer them to a second target network to be trained on a target dataset and task. This process will tend to work if the features are general, meaning suitable to both base and target tasks, instead of specific to the base task.
 —Jason Yosinski et al.1
Let’s jump directly to an example to get a better intuition for how to use transfer learning. Suppose we want to train a model that classifies dog and cat images, and we have only two classes in our problem: dog and cat. We need to collect hundreds of thousands of images for each class, label them, and train our network from scratch. Another option is to transfer knowledge from another pretrained network.
 First, we need to find a dataset that has similar features to our problem at hand. This involves spending some time exploring different open source datasets to find the one closest to our problem. For the sake of this example, let’s use ImageNet, since we are already familiar with it from the previous chapter and it has a lot of dog and cat images. So the pretrained network is familiar with dog and cat features and will require minimum training. (Later in this chapter, we will explore other datasets.) Next, we need to choose a network that has been trained on ImageNet and achieved good results. In chapter 5, we learned about state-of-the-art architectures like VGGNet, GoogLeNet, and ResNet. Any of them would work fine. For this example, we will go with a VGG16 network that has been trained on the ImageNet dataset.
 To adapt the VGG16 network to our problem, we are going to download it with the pretrained weights, remove the classifier part, add our own classifier, and then retrain the new network (figure 6.2). This is called using a pretrained network as a feature extractor. 
We will discuss the different types of transfer learning later in this chapter.
DEFINITION A pretrained model is a network that has been previously trained on a large dataset, typically on a large-scale image classification task. We can either use the pretrained model directly as is to run our predictions, or use the pretrained feature extraction part of the network and add our own classifier. The classifier here could be one or more dense layers or even traditional ML algorithms like support vector machines (SVMs). 
1Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson, “How Transferable Are Features in Deep Neural Networks?” Advances in Neural Information Processing Systems 27 (Dec. 2014): 3320–3328, https://arxiv.org/abs/1411.1792.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 264} |
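The definition above notes that the classifier on top of frozen features can even be a traditional ML algorithm like an SVM. A hedged sketch of that pipeline follows. Everything data-related here is a placeholder: the images and labels are random, `weights=None` merely keeps the sketch runnable offline (in practice you would load `weights="imagenet"`), and scikit-learn's `SVC` stands in for the SVM the definition mentions:

```python
import numpy as np
from keras.applications.vgg16 import VGG16
from sklearn.svm import SVC

# Convolutional base only; pooling='avg' collapses each image's final
# feature maps into a single 512-dimensional feature vector.
base = VGG16(weights=None, include_top=False, pooling='avg',
             input_shape=(64, 64, 3))

# Placeholder data: 8 random "images" with binary dog/cat labels.
rng = np.random.default_rng(0)
images = rng.random((8, 64, 64, 3)).astype('float32')
labels = np.array([0, 1, 0, 1, 0, 1, 0, 1])

# Frozen-feature pipeline: extract feature vectors with the CNN base,
# then fit a classical SVM on top of them.
features = base.predict(images)        # shape: (8, 512)
svm = SVC(kernel='linear').fit(features, labels)
print(features.shape, svm.predict(features).shape)
```

The design point is the split of responsibilities: the convolutional base produces fixed feature vectors, and any classifier that accepts a feature matrix, dense layers or an SVM alike, can be trained on top of them.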
262 | page_content='245 What is transfer learning?
To fully understand how to use transfer learning, let’s implement this example in Keras. (Luckily, Keras has a set of pretrained networks that are ready for us to download and use: the complete list of models is at https://keras.io/api/applications.) Here are the steps:
Figure 6.2 Example of applying transfer learning to a VGG16 network. We freeze the feature extraction part of the network and remove the classifier part. Then we add our new classifier softmax layer with two hidden units.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 265} |
263 | page_content='246 CHAPTER 6 Transfer learning
1 Download the open source code of the VGG16 network and its weights to create our base model, and remove the classification layers from the VGG network (FC_4096 > FC_4096 > Softmax_1000): 
from keras.applications.vgg16 import VGG16   # imports the VGG16 model from Keras

# Downloads the model’s pretrained weights and saves them in the variable
# base_model. We specify that Keras should download the ImageNet weights.
# include_top is False to ignore the fully connected classifier part on
# top of the model.
base_model = VGG16(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))
base_model.summary()
2 When you print a summary of the base model, you will notice that we downloaded the exact VGG16 architecture that we implemented in chapter 5. This is a fast approach to download popular networks that are supported by the DL library you are using. Alternatively, you can build the network yourself, as we did in chapter 5, and download the weights separately. I’ll show you how in the project at the end of this chapter. But for now, let’s look at the base_model summary that we just downloaded:
Layer (type)                 Output Shape              Param # 
=================================================================
input_1 (InputLayer)         (None, 224, 224, 3)       0 
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792 
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928 
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0 
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856 
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584 
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0 
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168 
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080 
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080 
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0 
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160 
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808 
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808 
_________________________________________________________________' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 266} |
264 | page_content="247 What is transfer learning?
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0 
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808 
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808 
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808 
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0 
=================================================================
Total params: 14,714,688
Trainable params: 14,714,688
Non-trainable params: 0
_________________________________________________________________
Notice that this downloaded architecture does not contain the classifier part (three fully connected layers) at the top of the network because we set the include_top argument to False. More importantly, notice the number of trainable and non-trainable parameters in the summary. The downloaded network as it is makes all the network parameters trainable. As you can see, our base_model has more than 14 million trainable parameters. Next, we want to freeze all the downloaded layers and add our own classifier. 
3 Freeze the feature extraction layers that have been trained on the ImageNet dataset. Freezing layers means freezing their trained weights to prevent them from being retrained when we run our training:
# Iterates through the layers and locks them to make them non-trainable
for layer in base_model.layers: 
    layer.trainable = False
base_model.summary()
The model summary is omitted in this case for brevity, as it is similar to the previous one. The difference is that all the weights have been frozen, the trainable parameters are now equal to zero, and all the parameters of the frozen layers are non-trainable: 
Total params: 14,714,688
Trainable params: 0
Non-trainable params: 14,714,688
4 Add our own classification dense layer. Here, we will add a softmax layer with two units because we have only two classes in our problem (see figure 6.3):
# Imports Keras modules
from keras.layers import Dense, Flatten 
from keras.models import Model

# Uses the get_layer method to save the last layer of the network,
# then saves that layer’s output to be the input of the next layer
last_layer = base_model.get_layer('block5_pool') 
last_output = last_layer.output" metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 267} |
265 | page_content="248 CHAPTER 6 Transfer learning
# Flattens the classifier input, which is the output of the last
# layer of the VGG16 model
x = Flatten()(last_output) 
# Adds our new softmax layer with two hidden units
x = Dense(2, activation='softmax', name='softmax')(x) 
5 Build a new_model that takes the input of the base model as its input and the output of the last softmax layer as an output. The new model is composed of all the feature extraction layers in VGGNet with the pretrained weights, plus our new, untrained, softmax layer. In other words, when we train the model, we are only going to train the softmax layer in this example to detect the specific features of our new problem (German Shepherd, Beagle, Neither): 
# Instantiates a new_model using Keras’s Model class,
# then prints the new_model summary
new_model = Model(inputs=base_model.input, outputs=x) 
new_model.summary() 
_________________________________________________________________
Layer (type)                 Output Shape              Param # 
=================================================================
input_1 (InputLayer)         (None, 224, 224, 3)       0 
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792 
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928 
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0 
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856 
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584 
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0 
_________________________________________________________________
Figure 6.3 Remove the classifier part of the network, and add a softmax layer with two hidden nodes." metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 268} |
266 | page_content='249 What is transfer learning?\nblock3_conv1 (Conv2D) (None, 56, 56, 256) 295168 \n_________________________________________________________________\nblock3_conv2 (Conv2D) (None, 56, 56, 256) 590080 \n_________________________________________________________________\nblock3_conv3 (Conv2D) (None, 56, 56, 256) 590080 \n_________________________________________________________________\nblock3_pool (MaxPooling2D) (None, 28, 28, 256) 0 \n_________________________________________________________________\nblock4_conv1 (Conv2D) (None, 28, 28, 512) 1180160 \n_________________________________________________________________\nblock4_conv2 (Conv2D) (None, 28, 28, 512) 2359808 \n_________________________________________________________________\nblock4_conv3 (Conv2D) (None, 28, 28, 512) 2359808 \n_________________________________________________________________\nblock4_pool (MaxPooling2D) (None, 14, 14, 512) 0 \n_________________________________________________________________\nblock5_conv1 (Conv2D) (None, 14, 14, 512) 2359808 \n_________________________________________________________________\nblock5_conv2 (Conv2D) (None, 14, 14, 512) 2359808 \n_________________________________________________________________\nblock5_conv3 (Conv2D) (None, 14, 14, 512) 2359808 \n_________________________________________________________________\nblock5_pool (MaxPooling2D) (None, 7, 7, 512) 0 \n_________________________________________________________________\nflatten_layer (Flatten) (None, 25088) 0 \n_________________________________________________________________\nsoftmax (Dense) (None, 2) 50178 \n===================================================\nTotal params: 14,789,955\nTrainable params: 50,178\nNon-trainable params: 14,714,688\n_________________________________________________________________\nTraining the new model is a lot faster than training the network from scratch. 
To ver-\nify that, look at the number of trainable params in this model (~50,000) compared\nto the number of non-trainable params in the network (~14 million). These “non-\ntrainable” parameters are already trained on a large dataset, and we froze them to\nuse the extracted features in our problem. With this new model, we don’t have to\ntrain the entire VGGNet from scratch because we only have to deal with the newly\nadded softmax layer.\n Additionally, we get much better performance with transfer learning because the\nnew model has been trained on millions of images (ImageNet dataset + our small\ndataset). This allows the network to understand the finer details of object nuances,\nwhich in turn makes it generalize better on new, previously unseen images.\n Note that in this example, we only explored the part where we build the model, to\nshow how transfer learning is used. At the end of this chapter, I’ll walk you through\ntwo end-to-end projects to demonstrate how to train the new network on your small\ndataset. But now, let’s see how transfer learning works.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 269} |
267 | page_content='250 CHAPTER 6 Transfer learning
6.3 How transfer learning works
So far, we learned what the transfer learning technique is and the main problems it solves. We also saw an example of how to take a pretrained network that was trained on ImageNet and transfer its learnings to our specific task. Now, let’s see why transfer learning works, what is really being transferred from one problem to another, and how a network that is trained on one dataset can perform well on a different, possibly unrelated, dataset.
 The following quick questions are reminders from previous chapters to get us to the core of what is happening in transfer learning:
1 What is really being learned by the network during training? The short answer is: feature maps. 
2 How are these features learned? During the backpropagation process, the weights are updated until we get to the optimized weights that minimize the error function. 
3 What is the relationship between features and weights? A feature map is the result of passing the weights filter on the input image during the convolution process (figure 6.4).
4 What is really being transferred from one network to another? To transfer features, we download the optimized weights of the pretrained network. These weights are then reused as the starting point for the training process and retrained to adapt to the new problem. 
Okay, let’s dive into the details to understand what we mean when we say pretrained network. When we’re training a convolutional neural network, the network extracts features from an image in the form of feature maps: outputs of each layer in a neural network after applying the weights filter. 
Figure 6.4 Example of generating a feature map by applying a convolution kernel with optimized weights to the input image, producing the convolved image (feature map)
They are representations of the features that' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 270} |
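The kernel-sliding operation pictured in figure 6.4 can be sketched in a few lines of NumPy. This is an illustration rather than the book's code: the loop computes a valid cross-correlation (which is what DL libraries call convolution), and the tiny image and vertical-edge kernel values are made up for the example:

```python
import numpy as np

def feature_map(image, kernel):
    """Valid cross-correlation of a 2D image with a 2D kernel."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Sum of the elementwise product of the kernel and the
            # image patch it currently covers.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image with one vertical edge (dark on the left, bright on
# the right) and a classic vertical-edge kernel.
image = np.array([[0, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1]], dtype=float)
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)

fm = feature_map(image, kernel)
print(fm)   # each row is [0, 3, 3]: the map responds near the edge
```

The output is largest where the feature the kernel is looking for, here a vertical edge, appears in the image, which is exactly the sense in which a feature map "maps" a feature.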
268 | page_content='251 How transfer learning works
exist in the training set. They are called feature maps because they map where a certain kind of feature is found in the image. CNNs look for features such as straight lines, edges, and even objects. Whenever they spot these features, they report them to the feature map. Each weight filter is looking for something different that is reflected in the feature maps: one filter could be looking for straight lines, another for curves, and so on (figure 6.5).
Now, recall that neural networks iteratively update their weights during the training cycle of feedforward and backpropagation. We say the network has been trained when we go through a series of training iterations and hyperparameter tuning until the network yields satisfactory results. When training is complete, we output two main items: the network architecture and the trained weights. So, when we say that we are going to use a pretrained network, we mean that we will download the network architecture together with the weights. 
 During training, the model learns only the features that exist in this training dataset. But when we download large models (like Inception) that have been trained on huge datasets (like ImageNet), all the features that have already been extracted from these large datasets are now available for us to use. I find that really exciting because these pretrained models have spotted other features that weren’t in our dataset and will help us build better convolutional networks.
 In vision problems, there’s a huge amount of stuff for neural networks to learn about the training dataset. There are low-level features like edges, corners, round shapes, curvy shapes, and blobs; and then there are mid- and higher-level features like eyes, circles, squares, and wheels. 
There are many details in the images that CNNs can pick up on—but if we have only 1,000 images or even 25,000 images in our training dataset, this may not be enough data for the model to learn all those things. By using a pretrained network, we can basically download all this knowledge into our neural network to give it a huge and much faster start with even higher performance levels.
Figure 6.5 The network extracts features from an image in the form of feature maps (feature map 1 through feature map 5). They are representations of the features that exist in the training set after applying the weight filters.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 271} |
269 | page_content='252 CHAPTER 6 Transfer learning
6.3.1 How do neural networks learn features?
A neural network learns the features in a dataset step by step in increasing levels of complexity, one layer after another. These are called feature maps. The deeper you go through the network layers, the more image-specific features are learned. In figure 6.6, the first layer detects low-level features such as edges and curves. The output of the first layer becomes input to the second layer, which produces higher-level features like semicircles and squares. The next layer assembles the output of the previous layer into parts of familiar objects, and a subsequent layer detects the objects. As we go through more layers, the network yields an activation map that represents more complex features. As we go deeper into the network, the filters begin to be more responsive to a larger region of the pixel space. Higher-level layers amplify aspects of the received inputs that are important for discrimination and suppress irrelevant variations.
Figure 6.6 An example of how CNNs detect low-level generic features (edges, blobs, etc.) at the early layers of the network; mid-level features (combinations of edges and other features that are more specific to the training dataset) in the middle layers; and high-level features that are very specific to the training dataset, mapped to labels (Jane, Alice, John, Max), at the later layers. The deeper you go through the network layers, the more image-specific features are learned.
Consider the example in figure 6.6. Suppose we are building a model that detects human faces. We notice that the network learns low-level features like lines, edges, and blobs in the first layer. These low-level features appear not to be specific to a particular dataset or task; they are general features that are applicable to many datasets and tasks. The mid-level layers assemble those lines to be able to recognize shapes, corners, and circles. Notice that the extracted features start to get a little' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 272} |
270 | page_content='253 How transfer learning works\nmore specific to our task (human faces): mid-level features contain combinations of\nshapes that form objects in the human face like eyes and noses. As we go deeper\nthrough the network, we notice that features eventually transition from general to\nspecific and, by the last layer of the network, form high-level features that are very\nspecific to our task. We start seeing parts of human faces that distinguish one person\nfrom another. \n Now, let’s take this example and compare the feature maps extracted from four\nmodels that are trained to classify faces, cars, elephants, and chairs (see figure 6.7).\nNotice that the earlier layers’ features are very similar for all the models. They repre-\nsent low-level features like edges, lines, and blobs. This means models that are trained\non one task capture similar relations in the data types in the earlier layers of the net-\nwork and can easily be reused for different problems in other domains. The deeper\nwe go into the network, the more specific the features, until the network overfits its\ntraining data and it becomes harder to generalize to different tasks. The lower-level\nfeatures are almost always transferable from one task to another because they contain\ngeneric information like the structure and nature of how images look. Transferring\ninformation like lines, dots, curves, and small parts of objects is very valuable for the\nnetwork to learn faster and with less data on the new task.\nFaces Cars Elephants Chairs\nFigure 6.7 Feature maps extracted from four models that are trained to classify faces, cars, elephants, and \nchairs' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 273} |
271 | page_content='254 CHAPTER 6 Transfer learning
6.3.2 Transferability of features extracted at later layers
The transferability of features that are extracted at later layers depends on the similarity of the original and new datasets. The idea is that all images must have shapes and edges, so the early layers are usually transferable between different domains. We can only identify differences between objects when we start extracting higher-level features: say, the nose on a face or the tires on a car. Only then can we say, “Okay, this is a person, because it has a nose. And this is a car, because it has tires.” Based on the similarity of the source and target domains, we can decide whether to transfer only the low-level features from the source domain, or the high-level features, or somewhere in between. This is motivated by the observation that the later layers of the network become progressively more specific to the details of the classes contained in the original dataset, as we are going to discuss in the next section. 
DEFINITIONS The source domain is the original dataset that the pretrained network is trained on. The target domain is the new dataset that we want to train the network on.
6.4 Transfer learning approaches
There are three major transfer learning approaches: pretrained network as a classifier, pretrained network as a feature extractor, and fine-tuning. Each approach can be effective and save significant time in developing and training a deep CNN model. It may not be clear which use of a pretrained model may yield the best results on your new CV task, so some experimentation may be required. In this section, we will explain these three scenarios and give examples of how to implement them.
6.4.1 Using a pretrained network as a classifier 
Using a pretrained network as a classifier doesn’t involve freezing any layers or doing extra model training. 
Instead, we just take a network that was trained on a similar\nproblem and deploy it directly to our task. The pretrained model is used directly to\nclassify new images with no changes applied to it and no extra training. All we do is\ndownload the network architecture and its pretrained weights and then run the pre-\ndictions directly on our new data. In this case, we are saying that the domain of our\nnew problem is very similar to the one that the pretrained network was trained on,\nand it is ready to be deployed.\n In the dog breed example, we could have used a VGG16 network that was trained\non an ImageNet dataset directly to run predictions. ImageNet already contains a lot\nof dog images, so a significant portion of the representational power of the pre-\ntrained network may be devoted to features that are specific to differentiating\nbetween dog breeds.\n Let’s see how to use a pretrained network as a classifier. In this example, we will use\na VGG16 network that was pretrained on the ImageNet dataset to classify the image of\nthe German Shepherd dog in figure 6.8.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 274} |
272 | page_content='255 Transfer learning approaches
The steps are as follows:
1 Import the necessary libraries:
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
from keras.applications.vgg16 import preprocess_input
from keras.applications.vgg16 import decode_predictions
from keras.applications.vgg16 import VGG16
2 Download the pretrained model of VGG16 and its ImageNet weights. We set include_top to True because we want to use the entire network as a classifier:
model = VGG16(weights="imagenet", include_top=True, input_shape=(224, 224, 3))
3 Load and preprocess the input image: 
image = load_img(\'path/to/image.jpg\', target_size=(224, 224))   # loads an image from a file
image = img_to_array(image)   # converts the image pixels to a NumPy array
image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))   # reshapes the data for the model
image = preprocess_input(image)   # prepares the image for the VGG model
Figure 6.8 A sample image of a German Shepherd that we will use to run predictions' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 275} |
273 | page_content="256 CHAPTER 6Transfer learning\n4Now our input image is ready for us to run predictions:\nyhat = model.predict(image) \nlabel = decode_predictions(yhat) \nlabel = label[0][0] \nprint('%s (%.2f%%)' % (label[1], label[2]*100)) \nWhen you run this code, you will get the following output:\n>> German_shepherd (99.72%)\nYou can see that the model was already trained to predict the correct dog breed with a\nhigh confidence score (99.72%). This is because the ImageNet dataset has more than\n20,000 labeled dog images classified into 120 classes. Go to the book’s website to play\nwith the code yourself with your own images: www.manning.com/books/deep-learning-\nfor-vision-systems or www.computervisionbook.com . Feel free to explore the classes\navailable in ImageNet and run this experiment on your own images.\n6.4.2 Using a pretrained network as a feature extractor\nThis approach is similar to the dog breed example that we implemented earlier in this\nchapter: we take a pretrained CNN on ImageNet, freeze its feature extraction part,\nremove the classifier part, and add our own new, dense classifier layers. In figure 6.9,\nwe use a pretrained VGG16 network, freeze the weights in all 13 convolutional layers,\nand replace the old classifier with a new one to be trained from scratch.\n We usually go with this scenario when our new task is similar to the original data-\nset that the pretrained network was trained on. Since the ImageNet dataset has a lot\nof dog and cat examples, the feature maps that the network has learned contain a\nlot of dog and cat features that are very applicable to our new task. This means we\ncan use the high-level features that were extracted from the ImageNet dataset in this\nnew task. \n To do that, we freeze all the layers from the pretrained network and only train the\nclassifier part that we just added on the new dataset. 
This approach is called using a pretrained network as a feature extractor because we freeze the feature extractor part to transfer all the learned feature maps to our new problem. We only add a new classifier, which will be trained from scratch, on top of the pretrained model so that we can repurpose the previously learned feature maps for our dataset.

We remove the classification part of the pretrained network because it is often very specific to the original classification task, and subsequently it is specific to the set of classes on which the model was trained. For example, ImageNet has 1,000 classes, and the classifier part has been trained to fit the training data into those 1,000 classes. But in our new problem, let's say cats versus dogs, we have only two classes. So it is a lot more effective to train a new classifier from scratch for these two classes.

[Figure: VGG16 convolutional stack with the weights in the feature extraction layers frozen; the old classifier (FC 4096, FC 4096, Softmax 1000) is removed and replaced by FC 4096, FC 4096, and a softmax layer with 2 units.]
Figure 6.9 Load a pretrained VGG16 network, remove the classifier, and add your own classifier.

6.4.3 Fine-tuning

So far, we've seen two basic approaches to using a pretrained network in transfer learning: using a pretrained network as a classifier or as a feature extractor. We generally use these approaches when the target domain is somewhat similar to the source domain. But what if the target domain is different from the source domain? What if it is very different? Can we still use transfer learning? Yes. Transfer learning works well even when the domains are very different. We just need to extract the correct feature maps from the source domain and fine-tune them to fit the target domain.

In figure 6.10, we show the different approaches to transferring knowledge from a pretrained network. If you download the entire network with no changes and just run predictions, you are using the network as a classifier. If you freeze the convolutional layers only, you are using the pretrained network as a feature extractor and transferring all of its high-level feature maps to your domain. The formal definition of fine-tuning is freezing a few of the network layers that are used for feature extraction, and jointly training both the non-frozen layers and the newly added classifier layers of the pretrained model. It is called fine-tuning because when we retrain the feature extraction layers, we fine-tune the higher-order feature representations to make them more relevant for the new task dataset.
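The freeze-then-train bookkeeping can be sketched framework-agnostically; the layer names below are hypothetical placeholders, not from the book's code:

```python
# Minimal sketch of fine-tuning bookkeeping: freeze the first k
# feature-extraction layers, leave the rest (plus a newly added
# classifier) trainable.

def fine_tune_plan(layers, freeze_up_to):
    """Return {layer_name: trainable?} for a fine-tuning run."""
    return {name: i >= freeze_up_to for i, name in enumerate(layers)}

backbone = ["feature_maps_1", "feature_maps_2",
            "feature_maps_3", "feature_maps_4"]
model = backbone + ["new_classifier"]        # classifier added from scratch

plan = fine_tune_plan(model, freeze_up_to=2) # freeze feature maps 1 and 2
print(plan)
# Feature maps 3 and 4 and the new classifier are trained jointly;
# the frozen layers keep their pretrained weights.
```

In a real framework the same idea is expressed by setting each layer's trainable flag before compiling the model.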
In more practical terms, if we freeze feature maps 1 and 2 in figure 6.10, the new network will take feature maps 2 as its input and will start learning from this point, adapting the features of the later layers to the new dataset. This saves the network the time it would have spent learning feature maps 1 and 2.

Figure 6.10 The network learns features through its layers. In transfer learning, we decide to freeze specific layers of a pretrained network to preserve the learned features. For example, if we freeze the network at the feature maps of layer 3, we preserve what it has learned in layers 1, 2, and 3.
As we discussed earlier, feature maps extracted early in the network are generic, and they get progressively more specific as we go deeper into the network. This means feature maps 4 in figure 6.10 are very specific to the source domain. Based on the similarity of the two domains, we can decide to freeze the network at the appropriate level of feature maps:

- If the domains are similar, we might want to freeze the network up to the last feature map level (feature maps 4, in the example).
- If the domains are very different, we might decide to freeze the pretrained network after feature maps 1 and retrain all the remaining layers.

Between these two possibilities is a range of fine-tuning options we can apply. We can retrain the entire network, or freeze the pretrained network at the level of feature maps 1, 2, 3, or 4 and retrain the remainder of the network. We typically find the appropriate level of fine-tuning by trial and error, but there are guidelines we can follow to decide intuitively. The decision is a function of two factors: the amount of data we have and the level of similarity between the source and target domains. We will explain these factors and the four possible scenarios for choosing the appropriate level of fine-tuning in section 6.5.

WHY IS FINE-TUNING BETTER THAN TRAINING FROM SCRATCH?
When we train a network from scratch, we usually randomly initialize the weights and apply a gradient descent optimizer to find the set of weights that minimizes our error function (as discussed in chapter 2). Since these weights start with random values, there is no guarantee that they will begin close to the desired optimal values. And if the initialized values are far from the optimum, the optimizer will take a long time to converge.
This is when fine-tuning can be very useful. The pretrained network's weights have already been optimized to learn from its dataset. Thus, when we use this network on our problem, we start with the weight values it ended with, and the network converges much faster than it would from randomly initialized weights. We are basically fine-tuning the already-optimized weights to fit our new problem instead of training the entire network from scratch with random weights. Even if we decide to retrain the entire pretrained network, starting with the trained weights will converge faster than training from scratch with randomly initialized weights.

USING A SMALLER LEARNING RATE WHEN FINE-TUNING
It's common to use a smaller learning rate for ConvNet weights that are being fine-tuned than for the (randomly initialized) weights of the new linear classifier that computes the class scores for the new dataset. This is because we expect the ConvNet weights to be relatively good already, so we don't want to distort them too quickly or too much (especially while the new classifier above them is being trained from random initialization).
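A toy gradient step makes this concrete (a sketch with made-up numbers, not the book's code): with a 100x smaller learning rate, the pretrained weights move 100x less per update, so they are not distorted while the new head is still training from noise.

```python
import numpy as np

# One SGD step on a toy quadratic loss L(w) = 0.5 * ||w - w_opt||^2,
# whose gradient is simply (w - w_opt). Illustrative values only.
w_opt = np.array([1.0, -2.0])
w_pretrained = np.array([1.1, -1.9])     # already close to the optimum

def sgd_step(w, lr):
    grad = w - w_opt                     # gradient of the toy loss
    return w - lr * grad

w_head_lr = sgd_step(w_pretrained, lr=0.1)    # head-sized learning rate
w_fine_lr = sgd_step(w_pretrained, lr=0.001)  # fine-tuning learning rate

print(np.linalg.norm(w_head_lr - w_pretrained))  # larger move
print(np.linalg.norm(w_fine_lr - w_pretrained))  # ~100x smaller move
```

The step size is proportional to the learning rate, which is exactly why a small rate preserves good pretrained weights while still letting them adapt slowly.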
6.5 Choosing the appropriate level of transfer learning

Recall that early convolutional layers extract generic features and become more specific to the training data the deeper we go into the network. With that said, we can choose the level of detail for feature extraction from an existing pretrained model. For example, if a new task is quite different from the source domain of the pretrained network (for example, different from ImageNet), then perhaps the output of the pretrained model after the first few layers would be appropriate. If a new task is similar to the source domain, then perhaps the output from layers much deeper in the model can be used, or even the output of the fully connected layer prior to the softmax layer.

As mentioned earlier, choosing the appropriate level for transfer learning is a function of two important factors:

- Size of the target dataset (small or large): When we have a small dataset, the network probably won't learn much from training more layers, so it will tend to overfit the new data. In this case, we most likely want to do less fine-tuning and rely more on the source dataset.
- Domain similarity of the source and target datasets: How similar is our new problem to the domain of the original dataset? For example, if your problem is to classify cars and boats, ImageNet could be a good option because it contains a lot of images with similar features. On the other hand, if your problem is to classify lung cancer on X-ray images, this is a completely different domain that will likely require a lot of fine-tuning.
These two factors lead to four major scenarios:

1 The target dataset is small and similar to the source dataset.
2 The target dataset is large and similar to the source dataset.
3 The target dataset is small and very different from the source dataset.
4 The target dataset is large and very different from the source dataset.

Let's discuss these scenarios one by one to learn the common rules of thumb for navigating our options.

6.5.1 Scenario 1: Target dataset is small and similar to the source dataset

Since the original dataset is similar to our new dataset, we can expect the higher-level features in the pretrained ConvNet to be relevant to our dataset as well. It might then be best to freeze the feature extraction part of the network and retrain only the classifier.

Another reason it might not be a good idea to fine-tune the network is that our new dataset is small. If we fine-tune the feature extraction layers on a small dataset, we force the network to overfit our data. This is not good because, by definition, a small dataset doesn't have enough information to cover all possible features of its objects, so the resulting model fails to generalize to new, previously unseen data. In this case, the more fine-tuning we do, the more the network is prone to overfitting the new data.

For example, suppose all the images in our new dataset contain dogs in a specific weather environment, such as snow. If we fine-tuned on this dataset, we would force the new network to pick up features like snow and a white background as dog-specific features, making it fail to classify dogs in other weather conditions. Thus the general rule of thumb is: if you have a small amount of data, be careful of overfitting when you fine-tune your pretrained network.

6.5.2 Scenario 2: Target dataset is large and similar to the source dataset

Since both domains are similar, we can freeze the feature extraction part and retrain the classifier, similar to what we did in scenario 1. But since we have more data in the new domain, we can get a performance boost by fine-tuning through all or part of the pretrained network, with more confidence that we won't overfit. Fine-tuning through the entire network is not really needed, because the higher-level features are related (since the datasets are similar). A good start is to freeze approximately 60-80% of the pretrained network and retrain the rest on the new data.

6.5.3 Scenario 3: Target dataset is small and different from the source dataset

Since the dataset is different, it might not be best to freeze the higher-level features of the pretrained network, because they contain more dataset-specific features. Instead, it works better to retrain layers from somewhere earlier in the network, or to not freeze any layers and fine-tune the entire network. However, since you have a small dataset, fine-tuning the entire network would make it prone to overfitting. A midway solution will work better in this case.
A good start is to freeze approximately the first third or half of the pretrained network. After all, the early layers contain very generic feature maps that will be useful for your dataset even if it is very different.

6.5.4 Scenario 4: Target dataset is large and different from the source dataset

Since the new dataset is large, you might be tempted to train the entire network from scratch and not use transfer learning at all. However, in practice, it is often still very beneficial to initialize the weights from a pretrained model, as we discussed earlier. Doing so makes the model converge faster. In this case, we have a large dataset, which gives us the confidence to fine-tune through the entire network without having to worry about overfitting.
6.5.5 Recap of the transfer learning scenarios

We've explored the two main factors that help us decide which transfer learning approach to use (the size of our data and the similarity between the source and target datasets). These two factors give us the four major scenarios defined in table 6.1. Figure 6.11 summarizes the guidelines for the appropriate fine-tuning level to use in each scenario.

Table 6.1 Transfer learning scenarios

Scenario | Size of the target data | Similarity of the original and new datasets | Approach
1        | Small                   | Similar                                     | Pretrained network as a feature extractor
2        | Large                   | Similar                                     | Fine-tune through the full network
3        | Small                   | Very different                              | Fine-tune from activations earlier in the network
4        | Large                   | Very different                              | Fine-tune through the entire network

Figure 6.11 Guidelines for the appropriate fine-tuning level to use in each of the four scenarios

6.6 Open source datasets

The CV research community has been pretty good about posting datasets on the internet. So when you hear names like ImageNet, MS COCO, Open Images, MNIST, CIFAR, and many others, these are datasets that people have posted online and that many computer vision researchers have used as benchmarks to train their algorithms and get state-of-the-art results.
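Table 6.1 can be encoded as a small lookup, which makes the decision rule explicit (the key and value strings below are mine, not an API):

```python
# Table 6.1 as a lookup: (target dataset size, domain similarity) -> approach.
APPROACH = {
    ("small", "similar"):   "use pretrained network as a feature extractor",
    ("large", "similar"):   "fine-tune through the full network",
    ("small", "different"): "fine-tune from activations earlier in the network",
    ("large", "different"): "fine-tune through the entire network",
}

def choose_approach(target_size, similarity):
    """Pick the rule-of-thumb approach for a transfer learning scenario."""
    return APPROACH[(target_size, similarity)]

print(choose_approach("small", "similar"))
# use pretrained network as a feature extractor
```

Remember these are rules of thumb; the book notes the final freeze level is usually settled by trial and error.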
In this section, we will review some of the popular open source datasets to help guide your search for the most suitable dataset for your problem. Keep in mind that the ones listed in this chapter are the most popular datasets in the CV research community at the time of writing; we do not intend to provide a comprehensive list of all the open source datasets out there. A great many image datasets are available, and the number is growing every day. Before starting your project, I encourage you to do your own research to explore the available datasets.

6.6.1 MNIST

MNIST (http://yann.lecun.com/exdb/mnist) stands for Modified National Institute of Standards and Technology. It contains labeled handwritten images of digits from 0 to 9. The goal of this dataset is to classify handwritten digits. MNIST has been popular with the research community for benchmarking classification algorithms; in fact, it is considered the "hello, world!" of image datasets. Nowadays, however, the MNIST dataset is comparatively simple, and a basic CNN can achieve more than 99% accuracy, so MNIST is no longer considered a benchmark for CNN performance. We implemented a CNN classification project using the MNIST dataset in chapter 3; feel free to go back and review it.

MNIST consists of 60,000 training images and 10,000 test images. All are grayscale (one channel), and each image is 28 pixels high and 28 pixels wide. Figure 6.12 shows some sample images from the MNIST dataset.

Figure 6.12 Samples from the MNIST dataset
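MNIST's fixed shapes make the standard CNN preprocessing easy to sketch; random arrays stand in for the real download here, and the batch size is deliberately small:

```python
import numpy as np

# Stand-in for a slice of MNIST: grayscale 28x28 images with pixel
# values 0-255 (random data used instead of the real download).
rng = np.random.default_rng(0)
x_train = rng.integers(0, 256, size=(512, 28, 28), dtype=np.uint8)

# Typical CNN prep: add the single channel axis and scale pixels to [0, 1].
x_train = x_train.reshape(-1, 28, 28, 1).astype("float32") / 255.0

print(x_train.shape)   # (512, 28, 28, 1)
```

With the real dataset the same two lines apply to the full (60000, 28, 28) training array.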
6.6.2 Fashion-MNIST

Fashion-MNIST was created with the intention of replacing the original MNIST dataset, which has become too simple for modern convolutional networks. The data is stored in the same format as MNIST, but instead of handwritten digits, it contains 60,000 training images and 10,000 test images of 10 fashion clothing classes: t-shirt/top, trouser, pullover, dress, coat, sandal, shirt, sneaker, bag, and ankle boot. Visit https://github.com/zalandoresearch/fashion-mnist to explore and download the dataset. Figure 6.13 shows a sample of the represented classes.

Figure 6.13 Sample images from the Fashion-MNIST dataset

6.6.3 CIFAR

CIFAR-10 (www.cs.toronto.edu/~kriz/cifar.html) is considered another benchmark dataset for image classification in the CV and ML literature. CIFAR images are more complex than those in MNIST in the sense that MNIST images are all grayscale with
perfectly centered objects, whereas CIFAR images are color (three channels) with dramatic variation in how the objects appear. The CIFAR-10 dataset consists of 32×32 color images in 10 classes, with 6,000 images per class: 50,000 training images and 10,000 test images. Figure 6.14 shows the classes in the dataset.

Figure 6.14 Sample images from the CIFAR-10 dataset (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck)

CIFAR-100 is the bigger brother of CIFAR-10: it contains 100 classes with 600 images each. These 100 classes are grouped into 20 superclasses. Each image comes with a fine label (the class to which it belongs) and a coarse label (the superclass to which it belongs).

6.6.4 ImageNet

We've discussed the ImageNet dataset several times in previous chapters and used it extensively in chapter 5 and this chapter, but for completeness of this list, we discuss it here as well. At the time of writing, ImageNet is considered the current benchmark and is widely used by CV researchers to evaluate their classification algorithms.

ImageNet is a large visual database designed for use in visual object recognition software research. It aims to label and categorize images into almost 22,000 categories based on a defined set of words and phrases. The images were collected from the web and labeled by humans via Amazon's Mechanical Turk crowdsourcing
tool. At the time of this writing, there are over 14 million images in the ImageNet project. To organize such a massive amount of data, the creators of ImageNet followed the WordNet hierarchy: each meaningful word/phrase in WordNet is called a synonym set (synset for short). Within the ImageNet project, images are organized according to these synsets, with the goal being to have 1,000+ images per synset. Figure 6.15 shows a collage of ImageNet examples put together by Stanford University.

Figure 6.15 A collage of ImageNet examples compiled by Stanford University

The CV community usually refers to the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) when talking about ImageNet. In this challenge, software programs compete to correctly classify and detect objects and scenes. We will be using the ILSVRC challenge as a benchmark to compare the performance of different networks.

6.6.5 MS COCO

MS COCO (http://cocodataset.org) is short for Microsoft Common Objects in Context. It is an open source database that aims to enable future research in object detection, instance segmentation, image captioning, and localizing person keypoints. It
contains 328,000 images. More than 200,000 of them are labeled, and they include 1.5 million object instances and 80 object categories that would be easily recognizable by a 4-year-old. The original research paper by the creators of the dataset describes the motivation for and content of this dataset.² Figure 6.16 shows a sample of the dataset provided on the MS COCO website.

6.6.6 Google Open Images

Open Images (https://storage.googleapis.com/openimages/web/index.html) is an open source image database created by Google. It contains more than 9 million images as of this writing. What makes it stand out is that these images are mostly of complex scenes that span thousands of classes of objects. Additionally, more than 2 million of these images are hand-annotated with bounding boxes, making Open Images by far the largest existing dataset with object-location annotations (see figure 6.17). In this subset of images, there are ~15.4 million bounding boxes for 600 classes of objects. Similar to ImageNet and ILSVRC, Open Images has a challenge called the Open Images Challenge (http://mng.bz/aRQz).

6.6.7 Kaggle

In addition to the datasets listed in this section, Kaggle (www.kaggle.com) is another great source of datasets. Kaggle is a website that hosts ML and DL challenges where people from all around the world can participate and submit algorithms for evaluation.

You are strongly encouraged to explore these datasets and to search for the many other open source datasets that come up every day, to gain a better understanding of the classes and use cases they support.
We mostly use ImageNet in this chapter's projects; and throughout the book, we will be using MS COCO, especially in chapter 7.

2. Tsung-Yi Lin, Michael Maire, Serge Belongie, et al., "Microsoft COCO: Common Objects in Context" (February 2015), https://arxiv.org/pdf/1405.0312.pdf.

Figure 6.16 A sample of the MS COCO dataset (Image copyright © 2015, COCO Consortium, used by permission under Creative Commons Attribution 4.0 License.)
6.7 Project 1: A pretrained network as a feature extractor

In this project, we use a very small amount of data to train a classifier that detects images of dogs and cats. This is a pretty simple project, but the goal of the exercise is to see how to implement transfer learning when you have a very small amount of data and the target domain is similar to the source domain (scenario 1). As explained in this chapter, in this case we use the pretrained convolutional network as a feature extractor: we freeze the feature extractor part of the network, add our own classifier, and then retrain the network on our new small dataset.

One other important takeaway from this project is learning how to preprocess custom data and make it ready to train your neural network. In previous projects, we used the CIFAR and MNIST datasets: they are preprocessed by Keras, so all we had to do was download them from the Keras library and use them directly to train the network. This project provides a tutorial on how to structure your data repository and use the Keras library to get your data ready.

Visit the book's website at www.manning.com/books/deep-learning-for-vision-systems or www.computervisionbook.com to download the code notebook and the dataset used for this project. Since we are using transfer learning, the training does not require high computation power, so you can run this notebook on your personal computer; you don't need a GPU.

For this implementation, we'll be using VGG16. Although it didn't record the lowest error in the ILSVRC, I found that it worked well for the task and was quicker to train than other models. I got an accuracy of about 96%, but feel free to use GoogLeNet or ResNet to experiment and compare results.
Figure 6.17 Annotated images from the Open Images dataset, taken from the Google AI Blog (Vittorio Ferrari, "An Update to Open Images—Now with Bounding-Boxes," July 2017, http://mng.bz/yyVG). The annotations include labels such as Person, Clothing, Human head, Human face, Tree, Building, and Furniture.
The process of using a pretrained model as a feature extractor is well established:

1 Import the necessary libraries.
2 Preprocess the data to make it ready for the neural network.
3 Load pretrained weights from the VGG16 network trained on a large dataset.
4 Freeze all the weights in the convolutional layers (the feature extraction part). Remember, the layers to freeze are adjusted depending on the similarity of the new task to the original dataset. In our case, we observed that ImageNet has a lot of dog and cat images, so the network has already been trained to extract the detailed features of our target objects.
5 Replace the fully connected layers of the network with a custom classifier. You can add as many fully connected layers as you see fit, and each can have as many hidden units as you want. For a simple problem like this, we will just add one hidden layer with 64 units. You can observe the results and tune up if the model is underfitting, or down if the model is overfitting. For the softmax layer, the number of units must be set equal to the number of classes (two units, in our case).
6 Compile the network, and run the training process on the new data of cats and dogs to optimize the model for the smaller dataset.
7 Evaluate the model.

Now, let's go through these steps and implement this project:

1 Import the necessary libraries:

    from keras.preprocessing.image import ImageDataGenerator
    from keras.preprocessing import image
    from keras.applications import imagenet_utils
    from keras.applications import vgg16
    from keras.applications import mobilenet
    from keras.optimizers import Adam, SGD
    from keras.metrics import categorical_crossentropy
    from keras.layers import Dense, Flatten, Dropout, BatchNormalization
    from keras.models import Model
    from sklearn.metrics import confusion_matrix
    import itertools
    import matplotlib.pyplot as plt
    %matplotlib inline

2 Preprocess the data to make it ready for the neural network. Keras has an ImageDataGenerator class that allows us to easily perform image augmentation on the fly; you can read about it at https://keras.io/api/preprocessing/image. In this example, we use ImageDataGenerator to generate our image tensors, but for simplicity, we will not implement image augmentation.

The ImageDataGenerator class has a method called flow_from_directory() that is used to read images from folders containing images. This method expects your data directory to be structured as in figure 6.18.
I have structured the data in the book's code so it's ready for you to use flow_from_directory(). Now, load the data into train_path, valid_path, and test_path variables, and then generate the train, valid, and test batches:

    train_path = 'data/train'
    valid_path = 'data/valid'
    test_path = 'data/test'

    # ImageDataGenerator generates batches of tensor image data with
    # real-time data augmentation. The data will be looped over (in
    # batches). In this example, we won't be doing any image augmentation.
    train_batches = ImageDataGenerator().flow_from_directory(train_path,
                                                             target_size=(224,224),
                                                             batch_size=10)
    valid_batches = ImageDataGenerator().flow_from_directory(valid_path,
                                                             target_size=(224,224),
                                                             batch_size=30)
    test_batches = ImageDataGenerator().flow_from_directory(test_path,
                                                            target_size=(224,224),
                                                            batch_size=50,
                                                            shuffle=False)

3 Load pretrained weights from the VGG16 network trained on a large dataset. As in the examples earlier in this chapter, we download the VGG16 network from Keras with its weights pretrained on the ImageNet dataset. Because we want to remove the classifier part from this network, we set the parameter include_top=False:

    base_model = vgg16.VGG16(weights="imagenet", include_top=False,
                             input_shape=(224,224, 3))

4 Freeze all the weights in the convolutional layers (the feature extraction part). We freeze the convolutional layers from the base_model created in the previous

Figure 6.18 The required directory structure for your dataset to use the .flow_from_directory() method from Keras: a data folder containing train, valid, and test subfolders, each with one subfolder per class (class_a, class_b) holding that class's images.
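The directory layout figure 6.18 asks for can be sketched with the standard library; the class names below are placeholders for whatever classes your dataset has:

```python
import os
import tempfile

# Build the directory layout flow_from_directory() expects:
# one folder per split, one subfolder per class inside each split.
root = tempfile.mkdtemp()
for split in ("train", "valid", "test"):
    for cls in ("cats", "dogs"):          # placeholder class names
        os.makedirs(os.path.join(root, "data", split, cls), exist_ok=True)

print(sorted(os.listdir(os.path.join(root, "data", "train"))))
# ['cats', 'dogs']
```

flow_from_directory() infers the class labels from these subfolder names, which is why the structure matters.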
288 | page_content="271 Project 1: A pretrained network as a feature extractor\nstep and use that as a feature extractor, and then add a classifier on top of it in\nthe next step: \nfor layer in base_model.layers: \n layer.trainable = False\n5Add the new classifier, and build the new model. We add a few layers on top of\nthe base model. In this example, we add one fully connected layer with 64 hid-\nden units and a softmax with 2 hidden units. We also add batch norm and drop-\nout layers to avoid overfitting:\nlast_layer = base_model.get_layer( 'block5_pool' ) \nlast_output = last_layer.output\nx = Flatten()(last_output) \nx = Dense(64, activation= 'relu', name='FC_2')(x) \nx = BatchNormalization()(x) \nx = Dropout(0.5)(x) \nx = Dense(2, activation= 'softmax' , name='softmax' )(x) \nnew_model = Model(inputs=base_model.input, outputs=x) \nnew_model.summary()\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_1 (InputLayer) (None, 224, 224, 3) 0 \n_________________________________________________________________\nblock1_conv1 (Conv2D) (None, 224, 224, 64) 1792 \n_________________________________________________________________\nblock1_conv2 (Conv2D) (None, 224, 224, 64) 36928 \n_________________________________________________________________\nblock1_pool (MaxPooling2D) (None, 112, 112, 64) 0 \n_________________________________________________________________\nblock2_conv1 (Conv2D) (None, 112, 112, 128) 73856 \n_________________________________________________________________\nblock2_conv2 (Conv2D) (None, 112, 112, 128) 147584 \n_________________________________________________________________\nblock2_pool (MaxPooling2D) (None, 56, 56, 128) 0 \n_________________________________________________________________\nblock3_conv1 (Conv2D) (None, 56, 56, 256) 295168 \n_________________________________________________________________\nblock3_conv2 
(Conv2D) (None, 56, 56, 256) 590080 \n_________________________________________________________________\nblock3_conv3 (Conv2D) (None, 56, 56, 256) 590080 \n_________________________________________________________________\nblock3_pool (MaxPooling2D) (None, 28, 28, 256) 0 \n_________________________________________________________________" metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 291} |
289 | page_content="272 CHAPTER 6Transfer learning\nblock4_conv1 (Conv2D) (None, 28, 28, 512) 1180160 \n_________________________________________________________________\nblock4_conv2 (Conv2D) (None, 28, 28, 512) 2359808 \n_________________________________________________________________\nblock4_conv3 (Conv2D) (None, 28, 28, 512) 2359808 \n_________________________________________________________________\nblock4_pool (MaxPooling2D) (None, 14, 14, 512) 0 \n_________________________________________________________________\nblock5_conv1 (Conv2D) (None, 14, 14, 512) 2359808 \n_________________________________________________________________\nblock5_conv2 (Conv2D) (None, 14, 14, 512) 2359808 \n_________________________________________________________________\nblock5_conv3 (Conv2D) (None, 14, 14, 512) 2359808 \n_________________________________________________________________\nblock5_pool (MaxPooling2D) (None, 7, 7, 512) 0 \n_________________________________________________________________\nflatten_1 (Flatten) (None, 25088) 0 \n_________________________________________________________________\nFC_2 (Dense) (None, 64) 1605696 \n_________________________________________________________________\nbatch_normalization_1 (Batch (None, 64) 256 \n_________________________________________________________________\ndropout_1 (Dropout) (None, 64) 0 \n_________________________________________________________________\nsoftmax (Dense) (None, 2) 130 \n=================================================================\nTotal params: 16,320,770\nTrainable params: 1,605,954\nNon-trainable params: 14,714,816\n_________________________________________________________________\n6Compile the model and run the training process: \nnew_model.compile(Adam(lr=0.0001), loss= 'categorical_crossentropy' , \n metrics=[ 'accuracy' ])\nnew_model.fit_generator(train_batches, steps_per_epoch=4,\n validation_data=valid_batches, validation_steps=2,\n epochs=20, verbose=2)\nWhen you run the previous code 
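The parameter counts in this summary can be verified by hand. A quick sketch of the arithmetic behind the new classifier head, using the layer sizes printed above:

```python
# block5_pool outputs (7, 7, 512); Flatten turns that into a 25088-vector.
flatten_units = 7 * 7 * 512          # 25088

# Dense(64): one weight per input per unit, plus one bias per unit.
fc2 = flatten_units * 64 + 64        # 1,605,696

# BatchNormalization tracks gamma, beta, moving mean, and moving variance
# per unit (4 * 64 = 256), but only gamma and beta (2 * 64) are trainable.
bn_total = 4 * 64                    # 256

# Dense(2) softmax: 64 weights per output unit plus a bias.
softmax = 64 * 2 + 2                 # 130

trainable = fc2 + 2 * 64 + softmax
print(fc2, bn_total, softmax, trainable)  # 1605696 256 130 1605954
```

The result matches the summary's "Trainable params: 1,605,954": all of VGG16's 14,714,816 convolutional parameters are frozen, so only the new head trains.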
snippet, the verbose training is printed after\neach epoch as follows:\nEpoch 1/20\n - 28s - loss: 1.0070 - acc: 0.6083 - val_loss: 0.5944 - val_acc: 0.6833\nEpoch 2/20\n - 25s - loss: 0.4728 - acc: 0.7754 - val_loss: 0.3313 - val_acc: 0.8605\nEpoch 3/20\n - 30s - loss: 0.1177 - acc: 0.9750 - val_loss: 0.2449 - val_acc: 0.8167\nEpoch 4/20\n - 25s - loss: 0.1640 - acc: 0.9444 - val_loss: 0.3354 - val_acc: 0.8372\nEpoch 5/20\n - 29s - loss: 0.0545 - acc: 1.0000 - val_loss: 0.2392 - val_acc: 0.8333\nEpoch 6/20" metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 292} |
290 | page_content="273 Project 1: A pretrained network as a feature extractor\n - 25s - loss: 0.0941 - acc: 0.9505 - val_loss: 0.2019 - val_acc: 0.9070\nEpoch 7/20\n - 28s - loss: 0.0269 - acc: 1.0000 - val_loss: 0.1707 - val_acc: 0.9000\nEpoch 8/20\n - 26s - loss: 0.0349 - acc: 0.9917 - val_loss: 0.2489 - val_acc: 0.8140\nEpoch 9/20\n - 28s - loss: 0.0435 - acc: 0.9891 - val_loss: 0.1634 - val_acc: 0.9000\nEpoch 10/20\n - 26s - loss: 0.0349 - acc: 0.9833 - val_loss: 0.2375 - val_acc: 0.8140\nEpoch 11/20\n - 28s - loss: 0.0288 - acc: 1.0000 - val_loss: 0.1859 - val_acc: 0.9000\nEpoch 12/20\n - 29s - loss: 0.0234 - acc: 0.9917 - val_loss: 0.1879 - val_acc: 0.8372\nEpoch 13/20\n - 32s - loss: 0.0241 - acc: 1.0000 - val_loss: 0.2513 - val_acc: 0.8500\nEpoch 14/20\n - 29s - loss: 0.0120 - acc: 1.0000 - val_loss: 0.0900 - val_acc: 0.9302\nEpoch 15/20\n - 36s - loss: 0.0189 - acc: 1.0000 - val_loss: 0.1888 - val_acc: 0.9000\nEpoch 16/20\n - 30s - loss: 0.0142 - acc: 1.0000 - val_loss: 0.1672 - val_acc: 0.8605\nEpoch 17/20\n - 29s - loss: 0.0160 - acc: 0.9917 - val_loss: 0.1752 - val_acc: 0.8667\nEpoch 18/20\n - 25s - loss: 0.0126 - acc: 1.0000 - val_loss: 0.1823 - val_acc: 0.9070\nEpoch 19/20\n - 29s - loss: 0.0165 - acc: 1.0000 - val_loss: 0.1789 - val_acc: 0.8833\nEpoch 20/20\n - 25s - loss: 0.0112 - acc: 1.0000 - val_loss: 0.1743 - val_acc: 0.8837\nNotice that the model was trained very quickly using regular CPU computing\npower. Each epoch took approximately 25 to 29 seconds, which means the\nmodel took less than 10 minutes to train for 20 epochs. \n7Evaluate the model. 
First, let’s define the load_dataset() method that we will\nuse to convert our dataset into tensors:\nfrom sklearn.datasets import load_files\nfrom keras.utils import np_utils\nimport numpy as np\ndef load_dataset(path):\n data = load_files(path)\n paths = np.array(data[ 'filenames' ])\n targets = np_utils.to_categorical(np.array(data[ 'target' ]))\n return paths, targets\ntest_files, test_targets = load_dataset( 'small_data/test' )\nThen, we create test_tensors to evaluate the model on them:\nfrom keras.preprocessing import image \nfrom keras.applications.vgg16 import preprocess_input\nfrom tqdm import tqdm" metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 293} |
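The np_utils.to_categorical() call inside load_dataset() turns integer class indices into one-hot rows; its effect can be reproduced with plain NumPy on toy labels (the labels below are made up for illustration):

```python
import numpy as np

# Toy integer labels for a two-class problem (illustrative, not the dataset).
targets = np.array([0, 1, 1, 0])

# One-hot encoding: row i selects row targets[i] of the identity matrix,
# which is exactly what to_categorical produces for these labels.
one_hot = np.eye(2)[targets]
print(one_hot.shape)  # (4, 2)
```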
291 | page_content="274 CHAPTER 6 Transfer learning\ndef path_to_tensor(img_path):\n    img = image.load_img(img_path, target_size=(224, 224))  # loads an RGB image as PIL.Image.Image type\n    x = image.img_to_array(img)        # converts it to a 3D tensor with shape (224, 224, 3)\n    return np.expand_dims(x, axis=0)   # converts the 3D tensor to a 4D tensor with shape (1, 224, 224, 3)\ndef paths_to_tensor(img_paths):\n    list_of_tensors = [path_to_tensor(img_path) for img_path in tqdm(img_paths)]\n    return np.vstack(list_of_tensors)\ntest_tensors = preprocess_input(paths_to_tensor(test_files))\nNow we can run Keras’s evaluate() method to calculate the model accuracy:\nprint('\\nTesting loss: {:.4f}\\n Testing accuracy: \n{:.4f}'.format(*new_model.evaluate(test_tensors, test_targets)))\nTesting loss: 0.1042\nTesting accuracy: 0.9579\nThe model achieved an accuracy of 95.79% in less than 10 minutes of training. This is very good, given our very small dataset. \n6.8 Project 2: Fine-tuning\nIn this project, we are going to explore scenario 3, discussed earlier in this chapter, where the target dataset is small and very different from the source dataset. The goal of this project is to build a sign language classifier that distinguishes 10 classes: the sign language digits from 0 to 9. Figure 6.19 shows a sample of our dataset.\nFigure 6.19 A sample from the sign language dataset" metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 294} |
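The reshaping done by path_to_tensor() and paths_to_tensor() is pure array manipulation, so it can be checked with zero arrays standing in for decoded images:

```python
import numpy as np

# Two fake "images" with the (224, 224, 3) shape that
# load_img/img_to_array would produce.
fake_images = [np.zeros((224, 224, 3)), np.zeros((224, 224, 3))]

# expand_dims adds the batch axis: (224, 224, 3) -> (1, 224, 224, 3).
tensors = [np.expand_dims(img, axis=0) for img in fake_images]

# vstack concatenates along the batch axis: (2, 224, 224, 3).
batch = np.vstack(tensors)
print(batch.shape)  # (2, 224, 224, 3)
```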
292 | page_content="275 Project 2: Fine-tuning\nFollowing are the details of our dataset:\n\uf0a1Number of classes = 10 (digits 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9)\n\uf0a1Image size = 100 × 100\n\uf0a1Color space = RGB\n\uf0a11,712 images in the training set\n\uf0a1300 images in the validation set\n\uf0a150 images in the test set\nIt is very noticeable how small our dataset is. If you try to train a network from scratch\non this very small dataset, you will not achieve good results. On the other hand, we\nwere able to achieve an accuracy higher than 98% by using transfer learning, even\nthough the source and target domains were very different. \nNOTE Please take this evaluation with a grain of salt, because the network\nhasn't been thoroughly tested with a lot of data. We only have 50 test images\nin this dataset. Transfer learning is expected to achieve good results anyway,\nbut I wanted to highlight this fact.\nVisit the book’s website at www.manning.com/books/deep-learning-for-vision-systems\nor www.computervisionbook.com to download the source code notebook and the\ndataset used for this project. Similar to project 1, the training does not require high\ncomputation power, so you can run this notebook on your personal computer; you\ndon’t need a GPU. \n For ease of comparison with the previous project, we will use the VGG16 network\ntrained on the ImageNet dataset. 
The process to fine-tune a pretrained network is\nas follows:\n1Import the necessary libraries.\n2Preprocess the data to make it ready for the neural network.\n3Load in pretrained weights from the VGG16 network trained on a large dataset\n(ImageNet).\n4Freeze part of the feature extractor part.\n5Add the new classifier layers.\n6Compile the network, and run the training process to optimize the model for\nthe smaller dataset.\n7Evaluate the model.\nNow let’s implement this project:\n1Import the necessary libraries:\nfrom keras.preprocessing.image import ImageDataGenerator\nfrom keras.preprocessing import image\nfrom keras.applications import imagenet_utils\nfrom keras.applications import vgg16\nfrom keras.optimizers import Adam, SGD\nfrom keras.metrics import categorical_crossentropy" metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 295} |
293 | page_content='276 CHAPTER 6Transfer learning\nfrom keras.layers import Dense, Flatten, Dropout, BatchNormalization\nfrom keras.models import Model\nfrom sklearn.metrics import confusion_matrix\nimport itertools\nimport matplotlib.pyplot as plt\n%matplotlib inline\n2Preprocess the data to make it ready for the neural network. Similar to proj-\nect 1, we use the ImageDataGenerator class from Keras and the flow_from_\ndirectory() method to preprocess our data. The data is already structured for\nyou to directly create your tensors:\ntrain_path = \'dataset/train\'\nvalid_path = \'dataset/valid\'\ntest_path = \'dataset/test\'\ntrain_batches = ImageDataGenerator().flow_from_directory(train_path, \n target_size=(224,224),\n batch_size=10)\nvalid_batches = ImageDataGenerator().flow_from_directory(valid_path,\n target_size=(224,224),\n batch_size=30)\ntest_batches = ImageDataGenerator().flow_from_directory(test_path, \n target_size=(224,224), \n batch_size=50, \n shuffle= False)\nFound 1712 images belonging to 10 classes.\nFound 300 images belonging to 10 classes.\nFound 50 images belonging to 10 classes.\n3Load in pretrained weights from the VGG16 network trained on a large data-\nset (ImageNet). We download the VGG16 architecture from the Keras library\nwith ImageNet weights. Note that we use the parameter pooling=\'avg\' here:\nthis basically means global average pooling will be applied to the output of\nthe last convolutional layer, and thus the output of the model will be a 2D ten-\nsor. We use this as an alternative to the Flatten layer before adding the fully\nconnected layers:\nbase_model = vgg16.VGG16(weights = "imagenet" , include_top= False, \n input_shape = (224,224, 3), pooling= \'avg\')\n4Freeze some of the feature extractor part, and fine-tune the rest on our new\ntraining data. 
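What pooling='avg' does can be shown with NumPy: the (7, 7, 512) output of block5_pool is averaged over its two spatial axes, leaving one value per channel, which is why no Flatten layer is needed afterward:

```python
import numpy as np

# A stand-in for VGG16's last pooled conv output:
# (batch, height, width, channels).
features = np.random.rand(1, 7, 7, 512)

# Global average pooling: average over the spatial axes (1 and 2).
gap = features.mean(axis=(1, 2))
print(gap.shape)  # (1, 512)
```

Compared with Flatten (which would give a 25,088-vector), this yields a much smaller 512-vector, so the classifier on top has far fewer parameters.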
The level of fine-tuning is usually determined by trial and error.\nVGG16 has 13 convolutional layers: you can freeze them all or freeze a few of\nthem, depending on how similar your data is to the source data. In the sign' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 296} |
294 | page_content='277 Project 2: Fine-tuning\nlanguage case, the new domain is very different from our domain, so we will\nstart with fine-tuning only the last five layers; if we don’t get satisfying results,\nwe can fine-tune more. It turns out that after we trained the new model, we\ngot 98% accuracy, so this was a good level of fine-tuning. But in other cases, if\nyou find that your network doesn’t converge, try fine-tuning more layers.\nfor layer in base_model.layers[:-5]: \n layer.trainable = False\nbase_model.summary()\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_1 (InputLayer) (None, 224, 224, 3) 0 \n_________________________________________________________________\nblock1_conv1 (Conv2D) (None, 224, 224, 64) 1792 \n_________________________________________________________________\nblock1_conv2 (Conv2D) (None, 224, 224, 64) 36928 \n_________________________________________________________________\nblock1_pool (MaxPooling2D) (None, 112, 112, 64) 0 \n_________________________________________________________________\nblock2_conv1 (Conv2D) (None, 112, 112, 128) 73856 \n_________________________________________________________________\nblock2_conv2 (Conv2D) (None, 112, 112, 128) 147584 \n_________________________________________________________________\nblock2_pool (MaxPooling2D) (None, 56, 56, 128) 0 \n_________________________________________________________________\nblock3_conv1 (Conv2D) (None, 56, 56, 256) 295168 \n_________________________________________________________________\nblock3_conv2 (Conv2D) (None, 56, 56, 256) 590080 \n_________________________________________________________________\nblock3_conv3 (Conv2D) (None, 56, 56, 256) 590080 \n_________________________________________________________________\nblock3_pool (MaxPooling2D) (None, 28, 28, 256) 0 
\n_________________________________________________________________\nblock4_conv1 (Conv2D) (None, 28, 28, 512) 1180160 \n_________________________________________________________________\nblock4_conv2 (Conv2D) (None, 28, 28, 512) 2359808 \n_________________________________________________________________\nblock4_conv3 (Conv2D) (None, 28, 28, 512) 2359808 \n_________________________________________________________________\nblock4_pool (MaxPooling2D) (None, 14, 14, 512) 0 \n_________________________________________________________________\nblock5_conv1 (Conv2D) (None, 14, 14, 512) 2359808 \n_________________________________________________________________\nblock5_conv2 (Conv2D) (None, 14, 14, 512) 2359808 \n_________________________________________________________________\nblock5_conv3 (Conv2D) (None, 14, 14, 512) 2359808 \n_________________________________________________________________\nblock5_pool (MaxPooling2D) (None, 7, 7, 512) 0 \n_________________________________________________________________\nglobal_average_pooling2d_1 ( (None, 512) 0 \n=================================================================' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 297} |
295 | page_content="278 CHAPTER 6Transfer learning\nTotal params: 14,714,688\nTrainable params: 7,079,424\nNon-trainable params: 7,635,264\n_________________________________________________________________\n5Add the new classifier layers, and build the new model:\nlast_output = base_model.output \nx = Dense(10, activation= 'softmax' , name='softmax' )(last_output) \nnew_model = Model(inputs=base_model.input, outputs=x) \nnew_model.summary() \nLayer (type) Output Shape Param # \n=================================================================\ninput_1 (InputLayer) (None, 224, 224, 3) 0 \n_________________________________________________________________\nblock1_conv1 (Conv2D) (None, 224, 224, 64) 1792 \n_________________________________________________________________\nblock1_conv2 (Conv2D) (None, 224, 224, 64) 36928 \n_________________________________________________________________\nblock1_pool (MaxPooling2D) (None, 112, 112, 64) 0 \n_________________________________________________________________\nblock2_conv1 (Conv2D) (None, 112, 112, 128) 73856 \n_________________________________________________________________\nblock2_conv2 (Conv2D) (None, 112, 112, 128) 147584 \n_________________________________________________________________\nblock2_pool (MaxPooling2D) (None, 56, 56, 128) 0 \n_________________________________________________________________\nblock3_conv1 (Conv2D) (None, 56, 56, 256) 295168 \n_________________________________________________________________\nblock3_conv2 (Conv2D) (None, 56, 56, 256) 590080 \n_________________________________________________________________\nblock3_conv3 (Conv2D) (None, 56, 56, 256) 590080 \n_________________________________________________________________\nblock3_pool (MaxPooling2D) (None, 28, 28, 256) 0 \n_________________________________________________________________\nblock4_conv1 (Conv2D) (None, 28, 28, 512) 1180160 \n_________________________________________________________________\nblock4_conv2 (Conv2D) (None, 28, 
28, 512) 2359808 \n_________________________________________________________________\nblock4_conv3 (Conv2D) (None, 28, 28, 512) 2359808 \n_________________________________________________________________\nblock4_pool (MaxPooling2D) (None, 14, 14, 512) 0 \n_________________________________________________________________\nblock5_conv1 (Conv2D) (None, 14, 14, 512) 2359808 \n_________________________________________________________________\nblock5_conv2 (Conv2D) (None, 14, 14, 512) 2359808 \n_________________________________________________________________" metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 298} |
296 | page_content="279 Project 2: Fine-tuning\nblock5_conv3 (Conv2D) (None, 14, 14, 512) 2359808 \n_________________________________________________________________\nblock5_pool (MaxPooling2D) (None, 7, 7, 512) 0 \n_________________________________________________________________\nglobal_average_pooling2d_1 ( (None, 512) 0 \n_________________________________________________________________\nsoftmax (Dense) (None, 10) 5130 \n=================================================================\nTotal params: 14,719,818\nTrainable params: 7,084,554\nNon-trainable params: 7,635,264\n6Compile the network, and run the training process to optimize the model for\nthe smaller dataset:\nnew_model.compile(Adam(lr=0.0001), loss= 'categorical_crossentropy' , \n metrics=[ 'accuracy' ])\nfrom keras.callbacks import ModelCheckpoint\ncheckpointer = ModelCheckpoint(filepath= 'signlanguage.model.hdf5' , \n save_best_only= True)\nhistory = new_model.fit_generator(train_batches, steps_per_epoch=18,\n validation_data=valid_batches, validation_steps=3, \n epochs=20, verbose=1, callbacks=[checkpointer])\nEpoch 1/150\n18/18 [==============================] - 40s 2s/step - loss: 3.2263 - acc: \n0.1833 - val_loss: 2.0674 - val_acc: 0.1667\nEpoch 2/150\n18/18 [==============================] - 41s 2s/step - loss: 2.0311 - acc: \n0.1833 - val_loss: 1.7330 - val_acc: 0.3000\nEpoch 3/150\n18/18 [==============================] - 42s 2s/step - loss: 1.5741 - acc: \n0.4500 - val_loss: 1.5577 - val_acc: 0.4000\nEpoch 4/150\n18/18 [==============================] - 42s 2s/step - loss: 1.3068 - acc: \n0.5111 - val_loss: 0.9856 - val_acc: 0.7333\nEpoch 5/150\n18/18 [==============================] - 43s 2s/step - loss: 1.1563 - acc: \n0.6389 - val_loss: 0.7637 - val_acc: 0.7333\nEpoch 6/150\n18/18 [==============================] - 41s 2s/step - loss: 0.8414 - acc: \n0.6722 - val_loss: 0.7550 - val_acc: 0.8000\nEpoch 7/150\n18/18 [==============================] - 41s 2s/step - loss: 0.5982 - acc: 
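The totals in this summary can again be checked by hand, using the layer sizes printed above: the new softmax layer contributes 512 × 10 + 10 parameters, and the trainable total is the three unfrozen block5 convolutional layers plus that softmax (the two unfrozen pooling layers have no parameters):

```python
# Dense(10) on the 512-dimensional global-average-pooled features.
softmax = 512 * 10 + 10
print(softmax)      # 5130

# block5_conv1..3 each hold 2,359,808 parameters and are left trainable.
trainable = 3 * 2359808 + softmax
print(trainable)    # 7084554

# Everything below block5 stays frozen; VGG16's conv base has
# 14,714,688 parameters in total.
total = 14714688 + softmax
print(total)        # 14719818
```

The non-trainable remainder, total - trainable, works out to 7,635,264, matching the summary.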
\n0.8444 - val_loss: 0.7910 - val_acc: 0.6667\nEpoch 8/150\n18/18 [==============================] - 41s 2s/step - loss: 0.3804 - acc: \n0.8722 - val_loss: 0.7376 - val_acc: 0.8667\nEpoch 9/150\n18/18 [==============================] - 41s 2s/step - loss: 0.5048 - acc: \n0.8222 - val_loss: 0.2677 - val_acc: 0.9000" metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 299} |
297 | page_content="280 CHAPTER 6Transfer learning\nEpoch 10/150\n18/18 [==============================] - 39s 2s/step - loss: 0.2383 - acc: \n0.9276 - val_loss: 0.2844 - val_acc: 0.9000\nEpoch 11/150\n18/18 [==============================] - 41s 2s/step - loss: 0.1163 - acc: \n0.9778 - val_loss: 0.0775 - val_acc: 1.0000\nEpoch 12/150\n18/18 [==============================] - 41s 2s/step - loss: 0.1377 - acc: \n0.9667 - val_loss: 0.5140 - val_acc: 0.9333\nEpoch 13/150\n18/18 [==============================] - 41s 2s/step - loss: 0.0955 - acc: \n0.9556 - val_loss: 0.1783 - val_acc: 0.9333\nEpoch 14/150\n18/18 [==============================] - 41s 2s/step - loss: 0.1785 - acc: \n0.9611 - val_loss: 0.0704 - val_acc: 0.9333\nEpoch 15/150\n18/18 [==============================] - 41s 2s/step - loss: 0.0533 - acc: \n0.9778 - val_loss: 0.4692 - val_acc: 0.8667\nEpoch 16/150\n18/18 [==============================] - 41s 2s/step - loss: 0.0809 - acc: \n0.9778 - val_loss: 0.0447 - val_acc: 1.0000\nEpoch 17/150\n18/18 [==============================] - 41s 2s/step - loss: 0.0834 - acc: \n0.9722 - val_loss: 0.0284 - val_acc: 1.0000\nEpoch 18/150\n18/18 [==============================] - 41s 2s/step - loss: 0.1022 - acc: \n0.9611 - val_loss: 0.0177 - val_acc: 1.0000\nEpoch 19/150\n18/18 [==============================] - 41s 2s/step - loss: 0.1134 - acc: \n0.9667 - val_loss: 0.0595 - val_acc: 1.0000\nEpoch 20/150\n18/18 [==============================] - 39s 2s/step - loss: 0.0676 - acc: \n0.9777 - val_loss: 0.0862 - val_acc: 0.9667\nNotice the training time of each epoch from the verbose output. The model\nwas trained very quickly using regular CPU computing power. Each epoch took\napproximately 40 seconds, which means it took the model less than 15 minutes\nto train for 20 epochs. \n7Evaluate the accuracy of the model. 
Similar to the previous project, we create a\nload_dataset() method to create test_targets and test_tensors and then\nuse the evaluate() method from Keras to run inferences on the test images\nand get the model accuracy:\nprint('\\nTesting loss: {:.4f}\\n Testing accuracy: \n{:.4f}'.format(*new_model.evaluate(test_tensors, test_targets)))\nTesting loss: 0.0574\nTesting accuracy: 0.9800\nA deeper level of evaluating your model involves creating a confusion matrix.\nWe explained the confusion matrix in chapter 4: it is a table that is often used\nto describe the performance of a classification model, to provide a deeper" metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 300} |
298 | page_content="281 Project 2: Fine-tuning\nunderstanding of how the model performed on the test dataset. See chapter 4\nfor details on the different model evaluation metrics. Now, let’s build the confusion matrix for our model (see figure 6.20):\nfrom sklearn.metrics import confusion_matrix\nimport numpy as np\ncm_labels = ['0','1','2','3','4','5','6','7','8','9']\ncm = confusion_matrix(np.argmax(test_targets, axis=1),\n    np.argmax(new_model.predict(test_tensors), axis=1))\nplt.imshow(cm, cmap=plt.cm.Blues)\nplt.colorbar()\nindexes = np.arange(len(cm_labels))\nfor i in indexes:\n    for j in indexes:\n        plt.text(j, i, cm[i, j])\nplt.xticks(indexes, cm_labels, rotation=90)\nplt.xlabel('Predicted label')\nplt.yticks(indexes, cm_labels)\nplt.ylabel('True label')\nplt.title('Confusion matrix')\nplt.show()\nFigure 6.20 Confusion matrix for the sign language classifier: a 10 × 10 grid with 5 on the diagonal for every digit except 8, where 4 images sit on the diagonal and 1 falls in the column for digit 7\nTo read this confusion matrix, look at the number on the Predicted Label axis and check whether it was correctly classified on the True Label axis. For example, look at number 0 on the Predicted Label axis: all five images were classified as 0, and no images were mistakenly classified as any other number. Similarly," metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 301} |
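What confusion_matrix() computes can be reproduced in a few lines of NumPy on toy labels (three classes, made-up predictions): rows index the true class, columns the predicted class.

```python
import numpy as np

# Toy ground truth and predictions for a three-class problem.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 1, 2, 2, 2])

# cm[t, p] counts samples whose true class is t and predicted class is p.
cm = np.zeros((3, 3), dtype=int)
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1
print(cm)
# [[2 0 0]
#  [0 1 1]
#  [0 0 2]]
```

The diagonal holds the correct predictions (5 of 6 here); any off-diagonal entry, like the 1 at row 1, column 2, is a misclassification.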
299 | page_content='282 CHAPTER 6Transfer learning\ngo through the rest of the numbers on the Predicted Label axis. You will notice\nthat the model successfully made the correct predictions for all the test images\nexcept the image with true label = 8. In that case, the model mistakenly classi-\nfied an image of number 8 as number = 7. \nSummary\n\uf0a1Transfer learning is usually the go-to approach when starting a classification and\nobject detection project, especially when you don’t have a lot of training data.\n\uf0a1Transfer learning migrates the knowledge learned from the source dataset to\nthe target dataset, to save training time and computational cost. \n\uf0a1The neural network learns the features in your dataset step by step in increasing\nlevels of complexity. The deeper you go through the network layers, the more\nimage-specific the features that are learned.\n\uf0a1Early layers in the network learn low-level features like lines, blobs, and edges.\nThe output of the first layer becomes input to the second layer, which produces\nhigher-level features. The next layer assembles the output of the previous layer\ninto parts of familiar objects, and a subsequent layer detects the objects. \n\uf0a1The three main transfer learning approaches are using a pretrained network as\na classifier, using a pretrained network as a feature extractor, and fine-tuning.\n\uf0a1Using a pretrained network as a classifier means using the network directly to\nclassify new images without freezing layers or applying model training.\n\uf0a1Using a pretrained network as a feature extractor means freezing the classifier\npart of the network and retraining the new classifier.\n\uf0a1Fine-tuning means freezing a few of the network layers that are used for feature\nextraction, and jointly training both the non-frozen layers and the newly added\nclassifier layers of the pretrained model. 
\n\uf0a1The transferability of features from one network to another is a function of\nthe size of the target data and the domain similarity between the source and\ntarget data.\n\uf0a1Generally, fine-tuning parameters use a smaller learning rate, while training the\noutput layer from scratch can use a larger learning rate.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 302} |
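The last point can be made concrete with a toy gradient-descent step in plain Python (not Keras, and the values are purely illustrative): with the same gradient, a small learning rate only nudges a pretrained weight away from the knowledge it already encodes, while a larger rate lets a freshly initialized head weight move quickly.

```python
# One gradient-descent step on two parameters sharing the same gradient.
w_pretrained = 1.0   # weight carrying knowledge from the source dataset
w_head = 1.0         # freshly initialized classifier weight
grad = 0.5           # illustrative gradient value

lr_finetune = 1e-4   # small: preserve the learned features
lr_head = 1e-2       # larger: the head has everything to learn

w_pretrained -= lr_finetune * grad
w_head -= lr_head * grad
print(w_pretrained, w_head)  # 0.99995 0.995
```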
300 | page_content='283Object detection with\nR-CNN, SSD, and YOLO\nIn the previous chapters, we explained how we can use deep neural networks for\nimage classification tasks. In image classification, we assume that there is only one\nmain target object in the image, and the model’s sole focus is to identify the target\ncategory. However, in many situations, we are interested in multiple targets in the\nimage. We want to not only classify them, but also obtain their specific positions in\nthe image. In computer vision, we refer to such tasks as object detection . Figure 7.1\nexplains the difference between image classification and object detection tasks.\n Object detection is a CV task that involves both main tasks: localizing one or\nmore objects within an image and classifying each object in the image (see table 7.1).\nThis is done by drawing a bounding box around the identified object with its pre-\ndicted class. This means the system doesn’t just predict the class of the image, as in\nimage classification tasks; it also predicts the coordinates of the bounding box thatThis chapter covers\n\uf0a1Understanding image classification vs. object \ndetection\n\uf0a1Understanding the general framework of object \ndetection projects\n\uf0a1Using object detection algorithms like R-CNN, \nSSD, and YOLO' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 303} |