CHAPTER 7  Object detection with R-CNN, SSD, and YOLO

fits the detected object. This is a challenging CV task because it requires both successful object localization, in order to locate and draw a bounding box around each object in an image, and object classification to predict the correct class of the object that was localized.

Object detection is widely used in many fields. For example, in self-driving technology, we need to plan routes by identifying the locations of vehicles, pedestrians, roads, and obstacles in a captured video image. Robots often perform this type of task to detect targets of interest.

Table 7.1  Image classification vs. object detection

Image classification:
- The goal is to predict the type or class of an object in an image.
- Input: an image with a single object
- Output: a class label (cat, dog, etc.)
- Example output: class probability (for example, 84% cat)

Object detection:
- The goal is to predict the locations of objects in an image via bounding boxes and the classes of the located objects.
- Input: an image with one or more objects
- Output: one or more bounding boxes (each defined by coordinates) and a class label for each bounding box
- Example output for an image with two objects:
  - box1 coordinates (x, y, w, h) and class probability
  - box2 coordinates and class probability

Note that the bounding-box coordinates (x, y, w, h) are as follows: x and y are the coordinates of the bounding-box center point, and w and h are the width and height of the box.

Figure 7.1  Image classification vs. object detection tasks. In classification tasks, the classifier outputs the class probability (cat), whereas in object detection tasks, the detector outputs the bounding-box coordinates that localize the detected objects (four boxes in this example) and their predicted classes (two cats, one duck, and one dog).
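The (x, y, w, h) convention above stores a box by its center point; detection code often needs the equivalent corner form (x_min, y_min, x_max, y_max), for example to compute overlaps. A minimal conversion sketch (the function names are my own, not from the book):

```python
def center_to_corners(x, y, w, h):
    """Convert a (center-x, center-y, width, height) box to
    (x_min, y_min, x_max, y_max) corner coordinates."""
    return (x - w / 2, y - h / 2, x + w / 2, y + h / 2)

def corners_to_center(x_min, y_min, x_max, y_max):
    """Inverse conversion: corner coordinates back to center format."""
    return ((x_min + x_max) / 2, (y_min + y_max) / 2,
            x_max - x_min, y_max - y_min)
```

For example, a 20 x 10 box centered at (50, 50) has corners (40, 45) and (60, 55).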
And systems in the security field need to detect abnormal targets, such as intruders or bombs.

This chapter's layout is as follows:

1 We will explore the general framework of object detection algorithms.
2 We will dive deep into three of the most popular detection algorithms: the R-CNN family of networks, SSD, and the YOLO family of networks.
3 We will use what we've learned in a real-world project to train an end-to-end object detector.

By the end of this chapter, we will have gained an understanding of how DL is applied to object detection, and how the different object detection models inspire and diverge from one another. Let's get started!

7.1 General object detection framework

Before we jump into object detection systems like R-CNN, SSD, and YOLO, let's discuss the general framework of these systems to understand the high-level workflow that DL-based systems follow to detect objects and the metrics they use to evaluate their detection performance. Don't worry about the code implementation details of object detectors yet. The goal of this section is to give you an overview of how different object detection systems approach this task, introduce you to a new way of thinking about this problem, and set out the concepts you'll need to understand the DL architectures that we will explain in sections 7.2, 7.3, and 7.4.

Typically, an object detection framework has four components:

1 Region proposal: An algorithm or a DL model generates regions of interest (RoIs) to be further processed by the system. These are regions that the network believes might contain an object; the output is a large number of bounding boxes, each with an objectness score. Boxes with large objectness scores are then passed along the network layers for further processing.
2 Feature extraction and network predictions: Visual features are extracted for each of the bounding boxes. They are evaluated, and the system determines whether and which objects are present in the proposals based on these features (for example, via an object classification component).
3 Non-maximum suppression (NMS): At this stage, the model has likely found multiple bounding boxes for the same object. NMS helps avoid repeated detection of the same instance by consolidating overlapping boxes into a single bounding box for each object.
4 Evaluation metrics: Similar to the accuracy, precision, and recall metrics in image classification tasks (see chapter 4), object detection systems have their own metrics to evaluate detection performance. In this section, we will explain the most popular, including mean average precision (mAP), the precision-recall curve (PR curve), and intersection over union (IoU).
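To make the division of labor concrete, the four components can be sketched as a skeleton pipeline. This is a hypothetical scaffold, not code from any real library: the first three stages are passed in as callables, and the evaluation component is omitted because it runs offline over a labeled test set.

```python
def detect(image, propose_regions, extract_and_predict, nms):
    """Skeleton of the four-component detection pipeline described above.
    The three callables stand in for (1) region proposal,
    (2) feature extraction + prediction, and (3) non-maximum suppression;
    (4) evaluation is an offline step and is not part of inference."""
    rois = propose_regions(image)                                # stage 1
    detections = [extract_and_predict(image, roi) for roi in rois]  # stage 2
    return nms(detections)                                       # stage 3
```

Any concrete detector in this chapter can be seen as a particular choice of these three stages.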
Now, let's dive one level deeper into each of these components to build an intuition about their goals.

7.1.1 Region proposals

In this step, the system looks at the image and proposes RoIs for further analysis. RoIs are regions that the system believes have a high likelihood of containing an object; this likelihood is quantified by the objectness score (figure 7.2). Regions with high objectness scores are passed to the next steps; regions with low scores are abandoned.

There are several approaches to generating region proposals. Originally, the selective search algorithm was used to generate object proposals; we will talk more about this algorithm when we discuss the R-CNN network. Other approaches use more complex visual features extracted from the image by a deep neural network to generate regions (for example, based on the features from a DL model).

We will talk in more detail about how different object detection systems approach this task. The important thing to note is that this step produces a large number (thousands) of bounding boxes to be further analyzed and classified by the network. During this step, the network analyzes these regions and classifies each one as foreground (object) or background (no object) based on its objectness score. If the objectness score is above a certain threshold, the region is considered foreground and pushed forward in the network. Note that this threshold is configurable for your problem. If the threshold is too low, your network will exhaustively generate all possible proposals, and you will have a better chance of detecting all objects in the image. On the flip side, this is very computationally expensive and will slow down detection.

Figure 7.2  Regions of interest (RoIs) proposed by the system. Regions with a high objectness score represent areas with a high likelihood of containing objects (foreground), and the ones with a low objectness score are ignored because they have a low likelihood of containing objects (background).
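The foreground/background thresholding described above amounts to a one-line filter over scored proposals. A small sketch, where the data layout is my own assumption: each proposal is a (box, objectness_score) pair.

```python
def filter_proposals(proposals, objectness_threshold=0.5):
    """Keep only region proposals whose objectness score clears the
    (configurable) threshold. Each proposal is a (box, score) pair."""
    return [(box, score) for box, score in proposals
            if score >= objectness_threshold]
```

Lowering `objectness_threshold` keeps more proposals (better recall, more computation); raising it keeps fewer (faster, but objects may be missed).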
So, the trade-off in generating region proposals is the number of regions versus computational complexity, and the right approach is to use problem-specific information to reduce the number of RoIs.

7.1.2 Network predictions

This component includes the pretrained CNN that is used for feature extraction: it extracts features from the input image that are representative of the task at hand and uses these features to determine the class of the image. In object detection frameworks, people typically use pretrained image classification models to extract visual features, as these tend to generalize fairly well. For example, a model trained on the MS COCO or ImageNet dataset is able to extract fairly generic features.

In this step, the network analyzes all the regions identified as having a high likelihood of containing an object and makes two predictions for each region:

- Bounding-box prediction: The coordinates that locate the box surrounding the object, represented as the tuple (x, y, w, h), where x and y are the coordinates of the center point of the bounding box and w and h are its width and height.
- Class prediction: The classic softmax function that predicts the class probability for each object.

Since thousands of regions are proposed, each object will always have multiple bounding boxes surrounding it with the correct classification. For example, take a look at the image of the dog in figure 7.3. The network was clearly able to find the object (dog) and successfully classify it.

Figure 7.3  The bounding-box detector produces more than one bounding box for an object. We want to consolidate these boxes into the one bounding box that best fits the object.
But the detection fired a total of five times because the dog was present in five of the RoIs produced in the previous step: hence the five bounding boxes around the dog in the figure. Although the detector was able to successfully locate the dog in the image and classify it correctly, this is not exactly what we need. For most problems, we want just one bounding box per object: the one that fits the object best. What if we are building a system to count dogs in an image? Our current system will count five dogs. We don't want that. This is where the non-maximum suppression technique comes in handy.

7.1.3 Non-maximum suppression (NMS)

As you can see in figure 7.4, one of the problems of an object detection algorithm is that it may find multiple detections of the same object. So, instead of creating a single bounding box around the object, it draws multiple boxes for the same object. NMS is a technique that makes sure the detection algorithm detects each object only once. As the name implies, NMS looks at all the boxes surrounding an object to find the box that has the maximum prediction probability, and it suppresses (eliminates) the other boxes (hence the name).

The general idea of NMS is to reduce the number of candidate boxes to only one bounding box per object. For example, if the object in the frame is fairly large and more than 2,000 object proposals have been generated, it is quite likely that some of them will have significant overlap with each other and with the object.

Figure 7.4  Multiple regions are proposed for the same object. After NMS, only the box that fits the object best remains; the rest are ignored, as they have large overlaps with the selected box.
Let's look at the steps of the NMS algorithm:

1 Discard all bounding boxes with prediction probabilities below a certain threshold, called the confidence threshold. This threshold is tunable: a box is suppressed if its prediction probability is less than the set threshold.
2 Look at all the remaining boxes, and select the bounding box with the highest probability.
3 Calculate the overlap of the remaining boxes that have the same class prediction. Bounding boxes that have high overlap with each other and that predict the same class are averaged together. This overlap metric is called intersection over union (IoU); IoU is explained in detail in the next section.
4 Suppress any box whose IoU with the selected box is greater than a certain threshold (called the NMS threshold). Usually the NMS threshold is set to 0.5, but it is tunable as well if you want to output fewer or more bounding boxes.

NMS techniques are typically standard across the different detection frameworks, but NMS is an important step that may require tuning hyperparameters, such as the confidence threshold and the NMS threshold, based on the scenario.

7.1.4 Object-detector evaluation metrics

When evaluating the performance of an object detector, we use two main evaluation metrics: frames per second and mean average precision.

FRAMES PER SECOND (FPS) TO MEASURE DETECTION SPEED

The most common metric used to measure detection speed is the number of frames per second (FPS). For example, Faster R-CNN operates at only 7 FPS, whereas SSD operates at 59 FPS. In benchmarking experiments, you will see the authors of a paper state their network results as "network X achieves mAP of Y% at Z FPS," where X is the network name, Y is the mAP percentage, and Z is the FPS.
MEAN AVERAGE PRECISION (mAP) TO MEASURE NETWORK PRECISION

The most common evaluation metric used in object recognition tasks is mean average precision (mAP). It is a percentage from 0 to 100, and higher values are typically better, but its value is different from the accuracy metric used in classification.

To understand how mAP is calculated, you first need to understand intersection over union (IoU) and the precision-recall curve (PR curve). Let's explain IoU and the PR curve and then come back to mAP.

INTERSECTION OVER UNION (IoU)

This measure evaluates the overlap between two bounding boxes: the ground truth bounding box (B_ground truth) and the predicted bounding box (B_predicted). By applying IoU, we can tell whether a detection is valid (True Positive) or not (False Positive). Figure 7.5 illustrates the IoU between a ground truth bounding box and a predicted bounding box.
The IoU value ranges from 0 (no overlap at all) to 1 (the two bounding boxes overlap 100%). The higher the overlap between the two bounding boxes (the IoU value), the better (figure 7.6).

Figure 7.5  The IoU score is the overlap between the ground truth bounding box and the predicted bounding box.

Figure 7.6  IoU scores range from 0 (no overlap) to 1 (100% overlap). The higher the overlap (IoU) between the two bounding boxes, the better. In the figure's examples, an IoU of 0.4034 is poor, 0.7330 is good, and 0.9264 is excellent.

To calculate the IoU of a prediction, we need the following:

- The ground truth bounding box (B_ground truth): the hand-labeled bounding box created during the labeling process
- The predicted bounding box (B_predicted) from our model

We calculate IoU by dividing the area of overlap by the area of the union, as in the following equation:

    IoU = area(B_ground truth ∩ B_predicted) / area(B_ground truth ∪ B_predicted)
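The equation above translates directly into code. A minimal sketch, assuming boxes are given in (x_min, y_min, x_max, y_max) corner format:

```python
def iou(box_a, box_b):
    """IoU = area of overlap / area of union for two boxes
    given as (x_min, y_min, x_max, y_max)."""
    # Width and height of the intersection (zero if the boxes are disjoint).
    ix = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    intersection = ix * iy
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - intersection
    return intersection / union if union > 0 else 0.0
```

Two identical boxes give 1.0 and disjoint boxes give 0.0; a 10 x 10 box shifted halfway down over an equal-sized box gives 1/3 (overlap 50, union 150).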
IoU is used to define a correct prediction, meaning a prediction (True Positive) with an IoU greater than some threshold. This threshold is tunable depending on the challenge, but 0.5 is a standard value. For example, some challenges, like Microsoft COCO, use mAP@0.5 (an IoU threshold of 0.5) or mAP@0.75 (an IoU threshold of 0.75). If the IoU value is above this threshold, the prediction is considered a True Positive (TP); if it is below the threshold, it is considered a False Positive (FP).

PRECISION-RECALL CURVE (PR CURVE)

With TP and FP defined, we can now calculate the precision and recall of our detector for a given class across the testing dataset. As explained in chapter 4, we calculate precision and recall as follows (recall that FN stands for False Negative):

    Recall = TP / (TP + FN)

    Precision = TP / (TP + FP)

After calculating the precision and recall for all classes, the PR curve is plotted as shown in figure 7.7.

Figure 7.7  A precision-recall curve is used to evaluate the performance of an object detector.

The PR curve is a good way to evaluate the performance of an object detector as the confidence threshold is varied, with one curve plotted per object class. A detector is considered good if its precision stays high as recall increases: if you vary the confidence threshold, precision and recall will both remain high.
On the other hand, a poor detector needs to increase the number of FPs (lowering precision) in order to achieve high recall. That's why the PR curve usually starts with high precision values that decrease as recall increases.

Now that we have the PR curve, we can calculate the average precision (AP) by computing the area under the curve (AUC). Finally, the mAP for object detection is the average of the APs calculated for all the classes. It is also important to note that some research papers use AP and mAP interchangeably.

RECAP

To recap, mAP is calculated as follows:

1 Get each bounding box's associated objectness score (the probability of the box containing an object).
2 Calculate precision and recall.
3 Compute the PR curve for each class by varying the score threshold.
4 Calculate the AP: the area under the PR curve. In this step, the AP is computed per class.
5 Calculate the mAP: the average AP over all the classes.

The last thing to note about mAP is that it is more complicated to calculate than traditional metrics like accuracy. The good news is that you don't need to compute mAP values yourself: most DL object detection implementations handle computing the mAP for you, as you will see later in this chapter.

Now that we understand the general framework of object detection algorithms, let's dive deeper into three of the most popular. In this chapter, we will discuss the R-CNN family of networks, SSD, and the YOLO networks in detail to see how object detectors have evolved over time. We will also examine the pros and cons of each network so you can choose the most appropriate algorithm for your problem.
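The five-step recipe above can be sketched for a single class. This is a simplified illustration, not a benchmark implementation: real evaluators first match detections to ground truth via the IoU threshold, and they interpolate the PR curve in benchmark-specific ways (for example, PASCAL VOC's 11-point scheme). Here each detection is an assumed (score, is_true_positive) pair, and the area under the curve is taken with the plain trapezoidal rule.

```python
def average_precision(detections, num_ground_truths):
    """AP for one class: sort detections by score, accumulate TP/FP counts
    to trace out a precision-recall curve, then take the area under it."""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    points = [(0.0, 1.0)]                       # (recall, precision) pairs
    for _, is_tp in detections:                 # sweep the score threshold
        tp += is_tp
        fp += not is_tp
        recall = tp / num_ground_truths
        precision = tp / (tp + fp)
        points.append((recall, precision))
    # Area under the PR curve by the trapezoidal rule.
    return sum((r2 - r1) * (p1 + p2) / 2
               for (r1, p1), (r2, p2) in zip(points, points[1:]))

def mean_average_precision(ap_per_class):
    """mAP: the average AP over all classes."""
    return sum(ap_per_class) / len(ap_per_class)
```

A detector that finds both ground-truth objects with no false positives scores an AP of 1.0 for that class.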
7.2 Region-based convolutional neural networks (R-CNNs)

The R-CNN family of object detection techniques, usually referred to as R-CNNs (short for region-based convolutional neural networks), was developed by Ross Girshick et al. in 2014.¹ The R-CNN family expanded to include Fast R-CNN² and Faster R-CNN³ in 2015 and 2016, respectively. In this section, I'll quickly walk you through the evolution of the family from R-CNN to Fast R-CNN to Faster R-CNN, and then we will dive deeper into the Faster R-CNN architecture and code implementation.

1. Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik, "Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation," 2014, http://arxiv.org/abs/1311.2524.
2. Ross Girshick, "Fast R-CNN," 2015, http://arxiv.org/abs/1504.08083.
3. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun, "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks," 2016, http://arxiv.org/abs/1506.01497.
7.2.1 R-CNN

R-CNN is the least sophisticated region-based architecture in its family, but it is the basis for understanding how the whole family of object-recognition algorithms works. It was one of the first large, successful applications of convolutional neural networks to the problem of object detection and localization, and it paved the way for the more advanced detection algorithms. The approach was demonstrated on benchmark datasets, achieving then-state-of-the-art results on the PASCAL VOC 2012 dataset and the ILSVRC 2013 object detection challenge. Figure 7.8 shows a summary of the R-CNN model architecture.

The R-CNN model consists of four components:

- Extract regions of interest: Also known as extracting region proposals, these are regions with a high probability of containing an object. An algorithm called selective search scans the input image looking for regions that contain blobs and proposes them as RoIs to be processed by the next modules in the pipeline. The proposed RoIs usually vary in size, so they are then warped to a fixed size: as we learned in previous chapters, CNNs require a fixed input image size.
- Feature extraction module: We run a pretrained convolutional network on top of the region proposals to extract features from each candidate region. This is the typical CNN feature extractor that we learned about in previous chapters.
- Classification module: We train a classifier such as a support vector machine (SVM), a traditional machine learning algorithm, to classify candidate detections based on the features extracted in the previous step.
- Localization module: Also known as a bounding-box regressor. Let's take a step back to understand regression. ML problems are categorized as classification or regression problems: classification algorithms output discrete, predefined classes (dog, cat, elephant), whereas regression algorithms output continuous value predictions. In this module, we want to predict the location and size of the bounding box that surrounds the object.

Figure 7.8  Summary of the R-CNN model architecture: regions of interest are extracted with the selective search algorithm, warped, passed through a pretrained CNN to extract features, and fed to a classifier and bounding-box regressor. (Modified from Girshick et al., "Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation.")
The bounding box is represented by four values: the x and y coordinates of the box's center point (x, y), and the width and height of the box (w, h). Putting this together, the regressor predicts the four real-valued numbers that define the bounding box as the tuple (x, y, w, h).

Selective search

Selective search is a greedy search algorithm that provides region proposals that potentially contain objects. It tries to find areas that might contain an object by combining similar pixels and textures into rectangular boxes. Selective search combines the strengths of both the exhaustive search algorithm (which examines all possible locations in the image) and the bottom-up segmentation algorithm (which hierarchically groups similar regions) to capture all possible object locations.

The selective search algorithm works by applying a segmentation algorithm to find blobs in an image, in order to figure out what could be an object (see the image on the right in the following figure).

The selective search algorithm looks for blob-like areas in the image to extract regions. At right, the segmentation algorithm defines blobs that could be objects. Then the selective search algorithm selects these areas to be passed along for further investigation.

Bottom-up segmentation recursively combines these groups of regions into larger ones to create about 2,000 areas to be investigated, as follows:

1 The similarities between all neighboring regions are calculated.
2 The two most similar regions are grouped together, and new similarities are calculated between the resulting region and its neighbors.
3 This process is repeated until the entire object is covered in a single region.

Note that a review of the selective search algorithm and how it calculates region similarity is outside the scope of this book.
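The hierarchical grouping loop in the three steps above can be sketched as a toy merging procedure. Everything here is illustrative rather than the real selective search: regions are plain sets of pixel indices, and the similarity function (which the actual algorithm builds from color, texture, size, and fill measures) is supplied by the caller.

```python
def bottom_up_merge(initial_regions, similarity):
    """Repeatedly merge the two most similar regions until one region
    covers everything; every region produced along the way is kept
    as a proposal, mimicking the ~2,000 areas selective search emits."""
    regions = list(initial_regions)
    proposals = list(regions)
    while len(regions) > 1:
        # Steps 1-2: find the most similar pair and merge it.
        i, j = max(((a, b) for a in range(len(regions))
                    for b in range(a + 1, len(regions))),
                   key=lambda p: similarity(regions[p[0]], regions[p[1]]))
        merged = regions[i] | regions[j]     # union of the two pixel sets
        regions = [r for k, r in enumerate(regions) if k not in (i, j)]
        regions.append(merged)
        proposals.append(merged)             # step 3: repeat until covered
    return proposals
```

With three seed regions, the loop emits the two intermediate merges as extra proposals, ending with a single region that covers all pixels.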
If you are interested in learning more about this technique, you can refer to the original paper: J.R.R. Uijlings, K.E.A. van de Sande, T. Gevers, and A.W.M. Smeulders, "Selective Search for Object Recognition," 2012, www.huppelen.nl/publications/selectiveSearchDraft.pdf. For the purpose of understanding R-CNNs, you can treat the selective search algorithm as a black box that intelligently scans the image and proposes RoI locations for us to use.

An example of bottom-up segmentation using the selective search algorithm: it combines similar regions in every iteration until the entire object is covered in a single region. (The figure shows the input image, the proposed regions, and the segmentation after the first iteration and after a few iterations.)

Figure 7.9 illustrates the R-CNN architecture in an intuitive way. As you can see, the network first proposes RoIs, then extracts features, and then classifies those regions based on their features.

Figure 7.9  Illustration of the R-CNN architecture. (1) The selective search algorithm extracts RoIs from the input image. (2) The extracted regions are warped before being fed to the ConvNet. (3) Each region is forwarded through the pretrained ConvNet to extract features. (4) The network produces bounding-box and classification predictions. Each proposed RoI is passed through the CNN to extract features, followed by a bounding-box regressor and an SVM classifier to produce the network's output prediction.
In essence, we have turned object detection into an image classification problem.

TRAINING R-CNNS

We learned in the previous section that R-CNNs are composed of four modules: selective search region proposal, feature extractor, classifier, and bounding-box regressor. All of the R-CNN modules need to be trained except the selective search algorithm. So, in order to train an R-CNN, we need to do the following:

1 Train the feature extractor CNN. This is a typical CNN training process: we either train a network from scratch, which rarely happens, or fine-tune a pretrained network, as we learned to do in chapter 6.
2 Train the SVM classifier. The SVM algorithm is not covered in this book, but it is a traditional ML classifier that is no different from DL classifiers in the sense that it needs to be trained on labeled data.
3 Train the bounding-box regressors. This model outputs four real-valued numbers for each of the K object classes to tighten the region bounding boxes.

Looking through these training steps, you can easily see that training an R-CNN model is expensive and slow. The training process involves training three separate modules without much shared computation. This multistage pipeline training is one of the disadvantages of R-CNNs, as we will see next.

DISADVANTAGES OF R-CNN

R-CNN is very simple to understand, and it achieved state-of-the-art results when it first came out, especially when using deep ConvNets to extract features. However, it is not actually a single end-to-end system that learns to localize via a deep neural network. Rather, it is a combination of standalone algorithms added together to perform object detection. As a result, it has the following notable drawbacks:

- Object detection is very slow. For each image, the selective search algorithm proposes about 2,000 RoIs to be examined by the entire pipeline (CNN feature extractor and classifier). This is very computationally expensive because a ConvNet forward pass is performed for each object proposal without sharing computation, which makes R-CNN incredibly slow. This high computational cost means R-CNN is not a good fit for applications that require fast inference, such as self-driving cars.
- Training is a multistage pipeline. As discussed earlier, R-CNNs require training three modules: the CNN feature extractor, the SVM classifier, and the bounding-box regressors. Thus the training process is very complex and not end-to-end.
- Training is expensive in terms of space and time. When training the SVM classifier and bounding-box regressor, features are extracted from each object proposal in each image and written to disk. With very deep networks, such as VGG16, the training process for a few thousand images takes days using GPUs.
The training process is expensive in space as well, because the extracted features require hundreds of gigabytes of storage.

What we need is an end-to-end DL system that fixes the disadvantages of R-CNN while improving its speed and accuracy.

7.2.2 Fast R-CNN

Fast R-CNN was an immediate descendant of R-CNN, developed in 2015 by Ross Girshick. Fast R-CNN resembles the R-CNN technique in many ways but improves on its detection speed, while also increasing detection accuracy, through two main changes:

- Instead of starting with the region proposal module and then running the feature extraction module, as R-CNN does, Fast R-CNN applies the CNN feature extractor to the entire input image first and then proposes regions. This way, we run only one ConvNet over the entire image instead of 2,000 ConvNets over 2,000 overlapping regions.
- It extends the ConvNet's job to do the classification part as well, replacing the traditional SVM machine learning algorithm with a softmax layer. This way, a single model performs both tasks: feature extraction and object classification.

FAST R-CNN ARCHITECTURE

As shown in figure 7.10, Fast R-CNN generates region proposals based on the last feature map of the network, not from the original image as R-CNN does. As a result, we can train just one ConvNet for the entire image. In addition, instead of training many different SVMs to classify each object class, a single softmax layer outputs the class probabilities directly. Now we have only one neural network to train, as opposed to one neural network plus many SVMs.

The architecture of Fast R-CNN consists of the following modules:

1 Feature extractor module: The network starts with a ConvNet that extracts features from the full image.
2 RoI extractor: The selective search algorithm proposes about 2,000 region candidates per image.
3 RoI pooling layer: This new component extracts a fixed-size window from the feature map before feeding the RoIs to the fully connected layers. It uses max pooling to convert the features inside any valid RoI into a small feature map with a fixed spatial extent of height x width (H x W). The RoI pooling layer is explained in more detail in the Faster R-CNN section; for now, understand that it is applied to the last feature map extracted by the CNN, and its goal is to produce fixed-size RoIs to feed to the fully connected layers and then the output layers.
4 Two-head output layer: The model branches into two heads:
   - A softmax classifier layer that outputs a discrete probability distribution per RoI
   - A bounding-box regressor layer that predicts offsets relative to the original RoI
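The RoI pooling idea in step 3 can be sketched with plain lists: divide the RoI window into an H x W grid and take the max inside each cell, so that any RoI size produces the same output size. This is a toy sketch; the real layer operates on multi-channel feature maps and handles fractional cell boundaries.

```python
def roi_pool(feature_map, roi, out_h=2, out_w=2):
    """Max-pool the RoI window of a 2-D feature map into an
    out_h x out_w grid. roi is (x_min, y_min, x_max, y_max),
    given in whole feature-map cells."""
    x0, y0, x1, y1 = roi
    h, w = y1 - y0, x1 - x0
    pooled = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Integer cell boundaries within the RoI window.
            y_start, y_end = y0 + i * h // out_h, y0 + (i + 1) * h // out_h
            x_start, x_end = x0 + j * w // out_w, x0 + (j + 1) * w // out_w
            row.append(max(feature_map[y][x]
                           for y in range(y_start, max(y_end, y_start + 1))
                           for x in range(x_start, max(x_end, x_start + 1))))
        pooled.append(row)
    return pooled
```

Pooling a full 4 x 4 map into a 2 x 2 grid keeps the maximum of each quadrant; a smaller 3 x 3 RoI over the same map still yields a 2 x 2 output, which is the whole point of the layer.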
MULTI-TASK LOSS FUNCTION IN FAST R-CNN
Since Fast R-CNN is an end-to-end learning architecture that learns the class of an object as well as the associated bounding box position and size, the loss is a multi-task loss. With multi-task loss, the output has both the softmax classifier and the bounding-box regressor, as shown in figure 7.10.

In any optimization problem, we need to define a loss function that our optimizer algorithm tries to minimize. (Chapter 2 gives more details about optimization and loss functions.) In object detection problems, our goal is to optimize for two goals: object classification and object localization. Therefore, we have two loss functions in this problem: L_cls for the classification loss and L_loc for the bounding box prediction defining the object location.

A Fast R-CNN network has two sibling output layers with two loss functions:

- Classification: the first outputs a discrete probability distribution (per RoI) over K + 1 categories (we add one class for the background). The probability p is computed by a softmax over the K + 1 outputs of a fully connected layer. The classification loss function is a log loss for the true class u:

  L_cls(p, u) = –log p_u

  where u is the true label, u ∈ {0, 1, 2, . . ., K}, with u = 0 being the background, and p is the discrete probability distribution per RoI over the K + 1 classes.

[Figure 7.10: The Fast R-CNN architecture consists of a feature extractor ConvNet, an RoI extractor (selective search), an RoI pooling layer, fully connected layers, and a two-head output layer (softmax classifier and bounding-box regressor). Proposed RoIs have different sizes; they become fixed-size after the RoI pooling layer. Note that, unlike R-CNN, Fast R-CNN applies the feature extractor to the entire input image before applying the region proposal module.]
- Regression: the second sibling layer outputs bounding-box regression offsets t^u = (t_x^u, t_y^u, t_w^u, t_h^u) for each of the K object classes. The loss function is the bounding-box loss for the true class u:

  L_loc(t^u, v) = Σ_{i ∈ {x, y, w, h}} smooth_L1(t_i^u – v_i)

  where:
  - v is the true bounding box, v = (x, y, w, h).
  - t^u is the predicted bounding-box correction, t^u = (t_x^u, t_y^u, t_w^u, t_h^u).
  - smooth_L1 is the bounding-box loss that measures the difference between t_i^u and v_i using the smooth L1 loss function. It is a robust function and is claimed to be less sensitive to outliers than other regression losses like L2.

The overall loss function is

  L = L_cls + L_loc
  L(p, u, t^u, v) = L_cls(p, u) + [u ≥ 1] L_loc(t^u, v)

Note that the indicator [u ≥ 1] is placed before the regression loss so that the term becomes 0 when the region being inspected contains no object, only background. It is a way of ignoring the bounding-box regression when the classifier labels the region as background. The indicator function [u ≥ 1] is defined as

  [u ≥ 1] = 1 if u ≥ 1; 0 otherwise

DISADVANTAGES OF FAST R-CNN
Fast R-CNN is much faster in terms of testing time, because we don't have to feed 2,000 region proposals to the convolutional neural network for every image. Instead, a convolution operation is done only once per image, and a feature map is generated from it. Training is also faster because all the components are in one CNN network: feature extractor, object classifier, and bounding-box regressor. However, one big bottleneck remains: the selective search algorithm for generating region proposals is very slow and runs separately as another model. The last step needed to achieve a complete end-to-end object detection system using DL is to find a way to fold the region proposal algorithm into our end-to-end DL network. This is what Faster R-CNN does, as we will see next.
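Before moving on, the multi-task loss just defined can be made concrete with a minimal NumPy sketch of the per-RoI computation. This is illustrative only (real implementations batch over RoIs); the class probabilities and boxes below are made-up values.

```python
import numpy as np

def smooth_l1(x):
    """Smooth L1: quadratic near zero, linear for large errors."""
    x = np.abs(x)
    return np.where(x < 1, 0.5 * x ** 2, x - 0.5)

def fast_rcnn_loss(p, u, t_u, v):
    """Multi-task loss for one RoI.

    p: predicted class probabilities over K + 1 classes (index 0 = background)
    u: true class label
    t_u: predicted box (x, y, w, h) for class u
    v: ground-truth box
    """
    l_cls = -np.log(p[u])                                    # log loss
    l_loc = smooth_l1(np.asarray(t_u) - np.asarray(v)).sum() # box loss
    # The indicator [u >= 1] drops the box term for background RoIs.
    return l_cls + (1 if u >= 1 else 0) * l_loc

# Foreground RoI (true class 1): classification error plus a small box correction.
p = np.array([0.1, 0.8, 0.1])
loss_fg = fast_rcnn_loss(p, u=1, t_u=[0.5, 0.5, 2.0, 2.0], v=[0.6, 0.5, 2.0, 2.0])

# Background RoI (u = 0): only the classification term survives.
loss_bg = fast_rcnn_loss(np.array([0.9, 0.05, 0.05]), u=0,
                         t_u=[0.0, 0.0, 1.0, 1.0], v=[0.0, 0.0, 1.0, 1.0])
```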
7.2.3 Faster R-CNN
Faster R-CNN is the third iteration of the R-CNN family, developed in 2016 by Shaoqing Ren et al. As in Fast R-CNN, the image is provided as input to a convolutional network that produces a convolutional feature map. But instead of using a selective search algorithm on the feature map to identify the region proposals, a region proposal network (RPN) predicts the region proposals as part of the training process. The predicted region proposals are then reshaped by an RoI pooling layer and used to classify the image within each proposed region and predict the offset values for the bounding boxes. These improvements both reduce the number of region proposals and accelerate the test-time operation of the model to near real-time, with then-state-of-the-art performance.

FASTER R-CNN ARCHITECTURE
The architecture of Faster R-CNN can be described in terms of two main networks:

- Region proposal network (RPN): selective search is replaced by a ConvNet that proposes RoIs, drawn from the last feature maps of the feature extractor, to be considered for investigation. The RPN has two outputs: the objectness score (object or no object) and the box location.
- Fast R-CNN: this consists of the typical components of Fast R-CNN:
  - Base network for the feature extractor: a typical pretrained CNN model to extract features from the input image
  - RoI pooling layer to extract fixed-size RoIs
  - Output layer that contains two fully connected layers: a softmax classifier to output the class probability, and a bounding-box regression layer for the bounding box predictions

As you can see in figure 7.11, the input image is presented to the network, and its features are extracted via a pretrained CNN. These features are sent, in parallel, to two different components of the Faster R-CNN architecture:

- The RPN, to determine where in the image a potential object could be. At this point, we do not know what the object is, just that there is potentially an object at a certain location in the image.
- RoI pooling, to extract fixed-size windows of features.

The output is then passed to two fully connected layers: one for the object classifier and one for the bounding-box coordinate predictions, to obtain our final localizations. This architecture achieves an end-to-end trainable, complete object detection pipeline where all of the required components are inside the network:

- Base network feature extractor
- Region proposal
- RoI pooling
- Object classification
- Bounding-box regressor
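A shape-level walk-through can make the data flow above concrete. The specific numbers here are illustrative assumptions, not values fixed by the text: a VGG16-like backbone with total stride 16 and 512 feature channels, k = 9 anchors per location, 200 kept proposals, and a 7 × 7 RoI size.

```python
def pipeline_shapes(img_h, img_w, k=9, num_proposals=200, roi_size=7):
    """Tensor shapes at each stage of a Faster R-CNN-style pipeline.

    Assumed (illustrative) values: stride-16 backbone with 512 channels,
    k anchors per feature-map location, roi_size x roi_size pooled RoIs.
    """
    fh, fw = img_h // 16, img_w // 16  # shared convolutional feature map
    return {
        'feature_map': (fh, fw, 512),
        'rpn_objectness': (fh, fw, 2 * k),  # object / no-object per anchor
        'rpn_boxes': (fh, fw, 4 * k),       # (x, y, w, h) per anchor
        'pooled_rois': (num_proposals, roi_size, roi_size, 512),
    }

shapes = pipeline_shapes(600, 800)
print(shapes['feature_map'])  # (37, 50, 512)
```

Note how both branches read from the same shared feature map; only the RoI pooling output depends on how many proposals the RPN keeps.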
BASE NETWORK TO EXTRACT FEATURES
As in Fast R-CNN, the first step is to take a pretrained CNN and slice off its classification part. The base network is used to extract features from the input image. We covered how this works in detail in chapter 6. In this component, you can use any of the popular CNN architectures, based on the problem you are trying to solve. The original Faster R-CNN paper used the ZF [4] and VGG [5] networks pretrained on ImageNet, but since then, many different networks with varying numbers of weights have appeared. For example, MobileNet [6], a smaller, efficient network architecture optimized for speed, has approximately 3.3 million parameters, whereas ResNet-152 (152 layers), once the state of the art in the ImageNet classification competition, has around 60 million. Most recently, new architectures like DenseNet [7] are both improving results and reducing the number of parameters.

4. Matthew D. Zeiler and Rob Fergus, "Visualizing and Understanding Convolutional Networks," 2013, http://arxiv.org/abs/1311.2901.
5. Karen Simonyan and Andrew Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," 2014, http://arxiv.org/abs/1409.1556.
6. Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam, "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications," 2017, http://arxiv.org/abs/1704.04861.
7. Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger, "Densely Connected Convolutional Networks," 2016, http://arxiv.org/abs/1608.06993.

[Figure 7.11: The Faster R-CNN architecture has two main components: an RPN that identifies regions that may contain objects of interest and their approximate location (object/no object plus bounding box coordinates (x, y, w, h)), and a Fast R-CNN network that classifies objects (class A, class B, . . .) and refines their location, defined using bounding boxes. The two components share the convolutional layers of the pretrained VGG16.]
As we learned in earlier chapters, each convolutional layer creates abstractions based on the previous information. The first layer usually learns edges, the second finds patterns in edges to activate for more complex shapes, and so forth. Eventually we end up with a convolutional feature map that can be fed to the RPN to extract regions that contain objects.

REGION PROPOSAL NETWORK (RPN)
The RPN identifies regions that could potentially contain objects of interest, based on the last feature map of the pretrained convolutional neural network. An RPN is also known as an attention network, because it guides the network's attention to interesting regions in the image. Faster R-CNN uses an RPN to bake the region proposal directly into the R-CNN architecture instead of running a selective search algorithm to extract RoIs.

The architecture of the RPN is composed of two layers (figure 7.12):

- A 3 × 3 fully convolutional layer with 512 channels
- Two parallel 1 × 1 convolutional layers: a classification layer used to predict whether the region contains an object (the score of it being background or foreground), and a regression layer for bounding-box prediction

VGGNet vs. ResNet
Nowadays, ResNet architectures have mostly replaced VGG as a base network for extracting features. The obvious advantage of ResNet over VGG is that it has many more layers (is deeper), giving it more capacity to learn very complex features. This is true for the classification task and should be equally true in the case of object detection. In addition, ResNet makes it easy to train deep models through residual connections and batch normalization, which had not been invented when VGG was first released. Please revisit chapter 5 for a more detailed review of the different CNN architectures.

[Figure 7.12: Convolutional implementation of an RPN architecture: a 3 × 3 CONV layer (pad 1, 512 output channels) followed by two parallel 1 × 1 CONV layers with 2k and 4k output channels, where k is the number of anchors.]
The 3 × 3 convolutional layer is applied to the last feature map of the base network: a sliding window of size 3 × 3 is passed over the feature map. The output is then passed to two 1 × 1 convolutional layers: a classifier and a bounding-box regressor. Note that the classifier and the regressor of the RPN are not trying to predict the class of the object and its bounding box; that comes later, after the RPN. Remember, the goal of the RPN is to determine whether the region contains an object to be investigated afterward by the fully connected layers. In the RPN, we use a binary classifier to predict the objectness score of the region: the probability of the region being foreground (contains an object) or background (doesn't contain an object). It basically looks at the region and asks, "Does this region contain an object?" If the answer is yes, the region is passed along for further investigation by RoI pooling and the final output layers (see figure 7.13).

How does the regressor predict the bounding box?
To answer this question, let's first define the bounding box. It is the box that surrounds the object and is identified by the tuple (x, y, w, h), where x and y are the coordinates of the center of the bounding box and w and h are its width and height. Researchers have found that directly predicting the (x, y) coordinates of the center point can be challenging, because we have to enforce rules to make sure the network predicts values inside the boundaries of the image. Instead, we can create reference boxes in the image, called anchor boxes, and make the regression layer predict offsets from these boxes, called

Fully convolutional networks (FCNs)
One important aspect of object detection networks is that they should be fully convolutional. A fully convolutional neural network is one that does not contain any fully connected layers, which are typically found at the end of a network prior to making output predictions. In the context of image classification, removing the fully connected layers is normally accomplished by applying average pooling across the entire volume prior to using a single dense softmax classifier to output the final predictions. An FCN has two main benefits:

- It is faster, because it contains only convolution operations and no fully connected layers.
- It can accept images of any spatial resolution (width and height), provided the image and network can fit into the available memory.

Being an FCN makes the network invariant to the size of the input image. However, in practice, we might want to stick to a constant input size due to issues that only become apparent when implementing the algorithm. A significant such problem is that if we want to process images in batches (because batched images can be processed in parallel by the GPU, leading to speed boosts), all of the images must have a fixed height and width.
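The sidebar's size-invariance point can be sketched with a 1 × 1 convolution, which is just a per-position matrix multiply and therefore runs on feature maps of any spatial size. The 512 input channels and 2k = 18 output scores match the RPN's classification head from figure 7.12; the random data is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1x1 conv weights: 512 input channels -> 2k = 18 objectness scores (k = 9).
W = rng.normal(size=(512, 18))

def conv1x1(fmap, weights):
    """A 1x1 convolution is a per-position matrix multiply over channels,
    so the same weights apply to feature maps of any height and width."""
    return np.einsum('hwc,cd->hwd', fmap, weights)

# The same layer handles two different spatial resolutions unchanged.
out_a = conv1x1(rng.normal(size=(38, 38, 512)), W)
out_b = conv1x1(rng.normal(size=(25, 60, 512)), W)
print(out_a.shape, out_b.shape)  # (38, 38, 18) (25, 60, 18)
```

This is exactly the property that makes a fully convolutional RPN indifferent to input resolution: only the channel dimension is fixed by the weights.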
deltas (Δx, Δy, Δw, Δh), to adjust the anchor boxes to better fit the object and obtain the final proposals (figure 7.14).

[Figure 7.13: The RPN classifier predicts the objectness score, which is the probability of a region containing an object (foreground, high objectness score) or only background (low objectness score).]

[Figure 7.14: Illustration of predicting the delta shift (Δx, Δy, Δw, Δh) from an anchor box to the predicted bounding box, giving a new center (x, y), new width, and new height.]

Anchor boxes
Using a sliding-window approach, the RPN generates k regions for each location in the feature map. These regions are represented as anchor boxes. The anchors are centered in the middle of their corresponding sliding window and differ in scale and aspect ratio, to cover a wide variety of objects. They are fixed bounding boxes that are placed throughout the image and used as references when first predicting object
locations. In their paper, Ren et al. generated nine anchor boxes, all with the same center but with three different aspect ratios and three different scales.

Figure 7.15 shows an example of how anchor boxes are applied. Anchors sit at the center of the sliding windows; each window has k anchor boxes centered on its anchor.

[Figure 7.15: Anchors are placed at the center of each sliding window, and each anchor has anchor boxes of varying sizes. The IoU is calculated to choose the bounding box that overlaps the most with the ground-truth bounding box.]

Training the RPN
The RPN is trained to classify an anchor box, outputting an objectness score, and to approximate the four coordinates of the object (location parameters). It is trained using bounding boxes labeled by human annotators; a labeled box is called the ground truth.

For each anchor box, the overlap value (p) is computed, which indicates how much the anchor overlaps with the ground-truth bounding boxes:

  p = 1 if IoU > 0.7; –1 if IoU < 0.3; 0 otherwise

If an anchor has high overlap with a ground-truth bounding box, it is likely that the anchor box includes an object of interest, and it is labeled as positive with respect to the object-versus-no-object classification task. Similarly, if an anchor has small overlap with a ground-truth bounding box, it is labeled as negative. During the training process,
the positive and negative anchors are passed as input to two fully connected layers, corresponding to the classification of anchors (object or no object) and to the regression of the location parameters (four coordinates), respectively. Corresponding to the k anchors per location, the RPN outputs 2k scores and 4k coordinates. Thus, for example, if the number of anchors per sliding window (k) is 9, the RPN outputs 18 objectness scores and 36 location coordinates (figure 7.16).

FULLY CONNECTED LAYER
The output fully connected layer takes two inputs: the feature maps coming from the base ConvNet and the RoIs coming from the RPN. It then classifies the selected regions and outputs their predicted classes and bounding-box parameters. The object classification layer in Faster R-CNN uses softmax activation, while the location

RPN as a standalone application
An RPN can be used as a standalone application. For example, in problems with a single class of objects, the objectness probability can be used as the final class probability. This is because in such a case, foreground means the single class, and background means not the single class. The reason to use an RPN for cases like single-class detection is the gain in speed in both training and prediction. Since the RPN is a very simple network that uses only convolutional layers, prediction can be faster than with the classification base network.

[Figure 7.16: Region proposal network. A sliding window over the CONV feature map feeds a 256-d intermediate layer, which branches into a cls layer (2k scores) and a reg layer (4k coordinates) for the k anchor boxes.]
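The anchor machinery from the last two pages (nine anchors per location, delta offsets, and the IoU-based labeling rule) can be sketched in NumPy. This is an illustrative sketch, not the paper's code; the square-root scaling that preserves box area across aspect ratios is a common convention.

```python
import numpy as np

def make_anchors(cx, cy, scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Nine anchor boxes (3 scales x 3 aspect ratios) sharing one center,
    as (x1, y1, x2, y2). sqrt scaling keeps the area fixed per scale."""
    boxes = []
    for s in scales:
        for r in ratios:
            w, h = s * np.sqrt(r), s / np.sqrt(r)
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return boxes

def apply_deltas(anchor, deltas):
    """Adjust an anchor (x, y, w, h) by predicted offsets, using the common
    R-CNN parameterization: center shifts by dx*w, dy*h; size scales by exp."""
    x, y, w, h = anchor
    dx, dy, dw, dh = deltas
    return (x + dx * w, y + dy * h, w * np.exp(dw), h * np.exp(dh))

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    area = lambda t: (t[2] - t[0]) * (t[3] - t[1])
    return inter / (area(a) + area(b) - inter)

def anchor_label(overlap):
    """Training label p for one anchor: 1 = object, -1 = background, 0 = ignored."""
    if overlap > 0.7:
        return 1
    if overlap < 0.3:
        return -1
    return 0

# Label the nine anchors at one location against a 128 x 128 ground-truth box.
anchors = make_anchors(300, 300)
gt = (236.0, 236.0, 364.0, 364.0)
print([anchor_label(iou(a, gt)) for a in anchors])
# [0, 1, 0, -1, -1, -1, -1, -1, -1]
```

Only the matching-scale, matching-ratio anchor is labeled positive; the two same-scale anchors with the wrong aspect ratio fall in the ignored band, and the rest are negatives.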
regression layer uses linear regression over the coordinates that define the location as a bounding box. All of the network parameters are trained together using a multi-task loss.

MULTI-TASK LOSS FUNCTION
Like Fast R-CNN, Faster R-CNN is optimized for a multi-task loss function that combines the losses of classification and bounding-box regression:

  L = L_cls + L_loc
  L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i*) + (λ/N_loc) Σ_i p_i* · smooth_L1(t_i – t_i*)

The loss equation might look a little overwhelming at first, but it is simpler than it appears. Understanding it is not necessary to be able to run and train Faster R-CNNs, so feel free to skip this section. But I encourage you to power through this explanation, because it will add a lot of depth to your understanding of how the optimization process works under the hood. Let's go through the symbols first; see table 7.2.

Now that you know the definitions of the symbols, let's try to read the multi-task loss function again. To help understand this equation, just for a moment ignore the normalization terms and the (i) terms. Here is the simplified loss function for each instance (i):

  Loss = L_cls(p, p*) + p* · smooth_L1(t – t*)

Table 7.2 Multi-task loss function symbols

- p_i and p_i*: p_i is the predicted probability of anchor (i) being an object, and p_i* is the binary ground truth (0 or 1) of the anchor being an object.
- t_i and t_i*: t_i is the four predicted parameters that define the bounding box, and t_i* is the ground-truth parameters.
- N_cls: normalization term for the classification loss. Ren et al. set it to the mini-batch size, ~256.
- N_loc: normalization term for the bounding-box regression. Ren et al. set it to the number of anchor locations, ~2,400.
- L_cls(p_i, p_i*): the log loss function over two classes. We can easily translate a multi-class classification into a binary classification by predicting whether a sample is a target object:

  L_cls(p_i, p_i*) = –p_i* log p_i – (1 – p_i*) log (1 – p_i)

- smooth_L1: as described in section 7.2.2, the bounding-box loss measures the difference between the predicted and true location parameters (t_i, t_i*) using the smooth L1 loss function. It is a robust function and is claimed to be less sensitive to outliers than other regression losses like L2.
- λ: a balancing parameter, set to ~10 by Ren et al. (so the L_cls and L_loc terms are roughly equally weighted).
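Using the symbols from table 7.2, the full loss can be sketched in NumPy. The λ, N_cls ≈ 256, and N_loc ≈ 2,400 defaults come from the table; the two-anchor batch is an illustrative toy example, not real network output.

```python
import numpy as np

def smooth_l1(x):
    """Smooth L1: quadratic near zero, linear for large errors."""
    x = np.abs(x)
    return np.where(x < 1, 0.5 * x ** 2, x - 0.5)

def faster_rcnn_loss(p, p_star, t, t_star, lam=10.0, n_cls=256, n_loc=2400):
    """Multi-task loss over a batch of anchors, in table 7.2's notation.

    p, p_star: predicted / ground-truth objectness per anchor.
    t, t_star: (N, 4) predicted / true box parameters.
    """
    p, p_star = np.asarray(p, float), np.asarray(p_star, float)
    t, t_star = np.asarray(t, float), np.asarray(t_star, float)
    l_cls = -(p_star * np.log(p) + (1 - p_star) * np.log(1 - p)).sum() / n_cls
    # p* gates the box term: background anchors contribute no location loss.
    l_loc = lam * (p_star * smooth_l1(t - t_star).sum(axis=1)).sum() / n_loc
    return l_cls + l_loc

# One foreground anchor (p* = 1) and one background anchor (p* = 0): the
# background anchor's huge box error is multiplied by p* = 0 and vanishes.
loss = faster_rcnn_loss(p=[0.9, 0.2], p_star=[1, 0],
                        t=[[0.1, 0, 0, 0], [9, 9, 9, 9]],
                        t_star=np.zeros((2, 4)))
```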
This simplified function is the sum of two loss functions: the classification loss and the location (bounding-box) loss. Let's look at them one at a time:

- The idea of any loss function is that it subtracts the predicted value from the true value to find the amount of error. The classification loss is the cross-entropy function explained in chapter 2; nothing new here. It is a log loss function that calculates the error between the prediction probability (p) and the ground truth (p*):

  L_cls(p_i, p_i*) = –p_i* log p_i – (1 – p_i*) log (1 – p_i)

- The location loss is the difference between the predicted and true location parameters (t_i, t_i*), computed using the smooth L1 loss function. The difference is then multiplied by p*, the ground-truth probability of the region containing an object. If the region contains no object, p* is 0, eliminating the entire location loss for non-object regions.

Finally, we add the values of both losses to create the multi-task loss function:

  L = L_cls + L_loc

There you have it: the multi-task loss function for each instance (i). Put back the (i) and Σ symbols to calculate the sum of the losses over all instances.

7.2.4 Recap of the R-CNN family
Table 7.3 recaps the evolution of the R-CNN architecture:

- R-CNN: bounding boxes are proposed by the selective search algorithm. Each is warped, and features are extracted via a deep convolutional neural network such as AlexNet, before a final set of object classifications and bounding-box predictions is made with linear SVMs and linear regressors.
- Fast R-CNN: a simplified design with a single model. An RoI pooling layer is used after the CNN to consolidate regions. The model predicts both class labels and RoIs directly.
- Faster R-CNN: a fully end-to-end DL object detector. It replaces the selective search algorithm with a region proposal network that interprets features extracted from the deep CNN and learns to propose RoIs directly.
Table 7.3 The evolution of the R-CNN family of networks, from R-CNN to Fast R-CNN to Faster R-CNN

R-CNN
- mAP on the PASCAL Visual Object Classes Challenge 2007: 66.0%
- Features: (1) Applies selective search to extract RoIs (~2,000) from each image. (2) A ConvNet is used to extract features from each of the ~2,000 extracted regions. (3) Uses classification and bounding-box predictions.
- Limitations: high computation time, as each region is passed to the CNN separately. Also uses three different models for making predictions.
- Test time per image: 50 seconds (speed-up from R-CNN: 1x)

Fast R-CNN
- mAP on PASCAL VOC 2007: 66.9%
- Features: each image is passed only once to the CNN, and feature maps are extracted. (1) A ConvNet is used to extract feature maps from the input image. (2) Selective search is used on these maps to generate predictions. This way, we run only one ConvNet over the entire image instead of ~2,000 ConvNets over ~2,000 overlapping regions.
- Limitations: selective search is slow, so computation time is still high.
- Test time per image: 2 seconds (speed-up from R-CNN: 25x)

Faster R-CNN
- mAP on PASCAL VOC 2007: 66.9%
- Features: replaces the selective search method with a region proposal network, which makes the algorithm much faster. An end-to-end DL network.
- Limitations: object proposal takes time; and because different systems work one after the other, the performance of each system depends on how the previous system performed.
- Test time per image: 0.2 seconds (speed-up from R-CNN: 250x)

[Table diagrams: the R-CNN pipeline (1. The selective search algorithm extracts RoIs from the input image. 2. Extracted regions are warped before being fed to the ConvNet. 3. Each region is forwarded through the pretrained ConvNet to extract features. 4. The network produces bounding-box and classification predictions.), the Fast R-CNN pipeline (feature extractor, RoI extractor via selective search, RoI pooling layer, fully connected layers, two output layers), and the Faster R-CNN pipeline (conv layers, feature maps, region proposal network, RoI pooling, classifier).]
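The speed-up column in table 7.3 follows directly from the per-image test times; a quick check:

```python
# Per-image test times from table 7.3, in seconds.
test_time = {'R-CNN': 50.0, 'Fast R-CNN': 2.0, 'Faster R-CNN': 0.2}

# Speed-up relative to the original R-CNN.
speedup = {name: round(test_time['R-CNN'] / t) for name, t in test_time.items()}
print(speedup)  # {'R-CNN': 1, 'Fast R-CNN': 25, 'Faster R-CNN': 250}
```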
R-CNN LIMITATIONS
As you might have noticed, each paper proposes improvements to the seminal work done in R-CNN to develop a faster network, with the goal of achieving real-time object detection. The achievements displayed through this body of work are truly amazing, yet none of these architectures manages to create a real-time object detector. Without going into too much detail, the following problems have been identified with these networks:

- Training is unwieldy and takes too long.
- Training happens in multiple phases (for example, training the region proposal network versus the classifier).
- The network is too slow at inference time.

Fortunately, in the last few years, new architectures have been created to address the bottlenecks of R-CNN and its successors, enabling real-time object detection. The most famous are the single-shot detector (SSD) and you only look once (YOLO), which we explain in sections 7.3 and 7.4.

MULTI-STAGE VS. SINGLE-STAGE DETECTOR
Models in the R-CNN family are all region-based. Detection happens in two stages, and thus these models are called two-stage detectors:

1. The model proposes a set of RoIs using selective search or an RPN. The proposed regions are sparse, because the potential bounding-box candidates would otherwise be infinite.
2. A classifier processes only the region candidates.

One-stage detectors take a different approach. They skip the region proposal stage and run detection directly over a dense sampling of possible locations. This approach is faster and simpler but can drag down performance a bit. In the next two sections, we will examine the SSD and YOLO one-stage object detectors. In general, single-stage detectors tend to be less accurate than two-stage detectors but are significantly faster.

7.3 Single-shot detector (SSD)
The SSD paper was released in 2016 by Wei Liu et al. [8] The SSD network reached new records in performance and precision for object detection tasks, scoring over 74% mAP at 59 FPS on standard datasets such as PASCAL VOC and Microsoft COCO.

8. Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C. Berg, "SSD: Single Shot MultiBox Detector," 2016, http://arxiv.org/abs/1512.02325.
We learned earlier that the R-CNN family are multi-stage detectors: the network first predicts the objectness score of the bounding box and then passes the box through a classifier to predict the class probability. In single-stage detectors like SSD and YOLO (discussed in section 7.4), the convolutional layers make both predictions directly in one shot, hence the name single-shot detector. The image is passed once through the network, and the objectness score for each bounding box is predicted using logistic regression to indicate the level of overlap with the ground truth. If the bounding box overlaps 100% with the ground truth, the objectness score is 1; if there is no overlap, the objectness score is 0. We then set a threshold (0.5) that says, "If the objectness score is above 50%, this bounding box likely contains an object of interest, and we keep its predictions; if it is below 50%, we ignore them."

7.3.1 High-level SSD architecture
The SSD approach is based on a feed-forward convolutional network that produces a fixed-size collection of bounding boxes and scores for the presence of object-class instances in those boxes, followed by a non-maximum suppression (NMS) step to produce the final detections. The architecture of the SSD model is composed of three main parts:

- Base network to extract feature maps: a standard pretrained network used for high-quality image classification, truncated before any classification layers. In their paper, Liu et al. used a VGG16 network. Other networks like VGG19 and ResNet can be used and should produce good results.
- Multi-scale feature layers: a series of convolution filters added after the base network. These layers decrease in size progressively to allow predictions of detections at multiple scales.
- Non-maximum suppression: NMS is used to eliminate overlapping boxes and keep only one box for each object detected.

As you can see in figure 7.17, layers 4_3, 7, 8_2, 9_2, 10_2, and 11_2 make predictions that feed directly into the NMS layer. We will talk about why these layers progressively decrease in

Measuring detector speed (FPS: frames per second)
As discussed at the beginning of this chapter, the most common metric for measuring detection speed is the number of frames per second (FPS). For example, Faster R-CNN operates at only 7 FPS. There have been many attempts to build faster detectors by attacking each stage of the detection pipeline, but so far, significantly increased speed has come only at the cost of significantly decreased detection accuracy. In this section, you will see why single-stage networks like SSD can achieve faster detections that are more suitable for real-time detection. For benchmarking, SSD300 achieves 74.3% mAP at 59 FPS, while SSD512 achieves 76.8% mAP at 22 FPS, outperforming Faster R-CNN (73.2% mAP at 7 FPS). SSD300 refers to an input image of size 300 × 300, and SSD512 refers to an input image of size 512 × 512.
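The NMS step from the architecture list can be sketched as a greedy procedure: keep the highest-scoring box, drop every box that overlaps it too much, and repeat. This is a common formulation of NMS, not the SSD paper's exact code; the 0.5 IoU threshold is an illustrative default.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over (x1, y1, x2, y2) boxes.

    Returns the indices of the boxes that survive suppression."""
    boxes = np.asarray(boxes, dtype=float)
    order = np.argsort(scores)[::-1]  # highest score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        # IoU of the kept box against all remaining candidates.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area = lambda b: (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
        iou = inter / (area(boxes[i:i + 1])[0] + area(boxes[order[1:]]) - inter)
        order = order[1:][iou <= iou_thresh]  # drop heavy overlaps
    return keep

# Two near-duplicate detections of one object, plus a distant one:
# the lower-scoring duplicate is suppressed.
boxes = [[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]
```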
CHAPTER 7  Object detection with R-CNN, SSD, and YOLO

size in section 7.3.3. For now, let's follow along to understand the end-to-end flow of data in SSD.

You can see in figure 7.17 that the network makes a total of 8,732 detections per class, which are then fed to an NMS layer to reduce them down to one detection per object. Where did the number 8,732 come from?

To make detection more accurate, different layers of feature maps also go through a small 3 × 3 convolution for object detection. For example, conv4_3 is of size 38 × 38 × 512, and a 3 × 3 convolution is applied to it. There are four bounding boxes per location, each of which has (number of classes + 4 box values) outputs. Suppose there are 20 object classes plus 1 background class; then the number of output bounding boxes for this layer is 38 × 38 × 4 = 5,776. Similarly, we calculate the number of bounding boxes for the other convolutional layers:

- Conv7: 19 × 19 × 6 = 2,166 boxes (6 boxes for each location)
- Conv8_2: 10 × 10 × 6 = 600 boxes (6 boxes for each location)
- Conv9_2: 5 × 5 × 6 = 150 boxes (6 boxes for each location)
- Conv10_2: 3 × 3 × 4 = 36 boxes (4 boxes for each location)
- Conv11_2: 1 × 1 × 4 = 4 boxes (4 boxes for each location)

If we sum them up, we get 5,776 + 2,166 + 600 + 150 + 36 + 4 = 8,732 boxes produced. This is a huge number of boxes for our detector to output, which is why we apply NMS to reduce their number.
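The arithmetic above is easy to sanity-check in a few lines of Python. The feature-map sizes and default boxes per location are the ones quoted in the text; nothing here is part of the SSD code itself:

```python
import math

# (feature-map side, default boxes per location) for each SSD prediction
# layer, as listed above
layers = {
    "conv4_3":  (38, 4),
    "conv7":    (19, 6),
    "conv8_2":  (10, 6),
    "conv9_2":  (5, 6),
    "conv10_2": (3, 4),
    "conv11_2": (1, 4),
}

# conv4_3 sits after three stride-2 poolings with 'same' padding: 300 -> 38
assert math.ceil(300 / 2 / 2 / 2) == 38

counts = {name: side * side * boxes for name, (side, boxes) in layers.items()}
total = sum(counts.values())

print(counts["conv4_3"])  # 5776
print(total)              # 8732
```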
As you will see in section 7.4, in YOLO there are 7 × 7 locations at the end with two bounding boxes for each location: 7 × 7 × 2 = 98 boxes.

[Figure 7.17 The SSD architecture is composed of a base network (VGG16), extra convolutional layers for object detection, and a non-maximum suppression (NMS) layer for final detections. Note that convolution layers 7, 8, 9, 10, and 11 make predictions that are directly fed to the NMS layer. (Source: Liu et al., 2016.)]
Now, let's dive a little deeper into each component of the SSD architecture.

7.3.2 Base network

As you can see in figure 7.17, the SSD architecture builds on the VGG16 architecture after slicing off the fully connected classification layers (VGG16 is explained in detail in chapter 5). VGG16 was used as the base network because of its strong performance in high-quality image classification tasks and its popularity for problems where transfer learning helps to improve results. Instead of the original VGG fully connected layers, a set of supporting convolutional layers (from conv6 onward) was added to enable us to extract features at multiple scales and progressively decrease the size of the input to each subsequent layer.

Following is a simplified code implementation of the VGG16 network used in SSD using Keras. You will not need to implement this from scratch; my goal in including this code snippet is to show you that this is a typical VGG16 network like the one implemented in chapter 5:

input_image = Input(shape=(300, 300, 3))    # input layer (SSD300 takes 300 x 300 x 3 images)

conv1_1 = Conv2D(64, (3, 3), activation='relu', padding='same')(input_image)
conv1_2 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv1_1)
pool1 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same')(conv1_2)

What does the output prediction look like?

For each feature, the network predicts the following:

- 4 values that describe the bounding box (x, y, w, h)
- 1 value for the objectness score
- C values that represent the probability of each class

That's a total of 5 + C prediction values. Suppose there are four object classes in our problem. Then each prediction will be a vector that looks like this: [x, y, w, h, objectness score, C1, C2, C3, C4].
[Figure: An example visualization of the output prediction when we have four classes in our problem. Each prediction is a vector containing the bounding box coordinates (x, y, w, h), the objectness score, and four class probabilities: C1, C2, C3, and C4.]
conv2_1 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool1)
conv2_2 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv2_1)
pool2 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same')(conv2_2)

conv3_1 = Conv2D(256, (3, 3), activation='relu', padding='same')(pool2)
conv3_2 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv3_1)
conv3_3 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv3_2)
pool3 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same')(conv3_3)

conv4_1 = Conv2D(512, (3, 3), activation='relu', padding='same')(pool3)
conv4_2 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv4_1)
conv4_3 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv4_2)
pool4 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same')(conv4_3)

conv5_1 = Conv2D(512, (3, 3), activation='relu', padding='same')(pool4)
conv5_2 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv5_1)
conv5_3 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv5_2)
pool5 = MaxPooling2D(pool_size=(3, 3), strides=(1, 1), padding='same')(conv5_3)

You saw VGG16 implemented in Keras in chapter 5. The two main takeaways from adding it here are as follows:

- Layer conv4_3 will be used again to make direct predictions.
- Layer pool5 will be fed to the next layer (conv6), which is the first of the multi-scale feature layers.

HOW THE BASE NETWORK MAKES PREDICTIONS

Consider the following example. Suppose you have the image in figure 7.18, and the network's job is to draw bounding boxes around all the boats in the image. The process goes as follows:

1. Similar to the anchors concept in R-CNN, SSD overlays a grid of anchors around the image. For each anchor, the network creates a set of bounding boxes at its center.
   In SSD, anchors are called priors.
2. The base network looks at each bounding box as a separate image. For each bounding box, the network asks, "Is there a boat in this box?" Or in other words, "Did I extract any features of a boat in this box?"
3. When the network finds a bounding box that contains boat features, it sends its coordinate predictions and object classification to the NMS layer.
4. NMS eliminates all the boxes except the one that overlaps the most with the ground-truth bounding box.

NOTE Liu et al. used VGG16 because of its strong performance in complex image classification tasks. You can use other networks, like the deeper VGG19 or ResNet, for the base network, and they should perform as well if not better in accuracy, although a deeper network could be slower. MobileNet is a good choice if you want a balance between a high-performing deep network and speed.

Now, on to the next component of the SSD architecture: multi-scale feature layers.
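Before moving on, the "5 + C values" layout from the earlier sidebar can be made concrete with a tiny decoder for one raw prediction vector. The function name, the numbers, and the four class names here are invented for illustration; they are not part of the SSD code:

```python
def decode_prediction(vector, class_names):
    # Layout assumed from the text: [x, y, w, h, objectness, C1..Ck]
    x, y, w, h = vector[:4]
    objectness = vector[4]
    class_scores = dict(zip(class_names, vector[5:]))
    return {"box": (x, y, w, h), "objectness": objectness, "classes": class_scores}

# One made-up prediction with four classes (5 + C = 9 values)
pred = [0.5, 0.4, 0.2, 0.3, 0.9, 0.05, 0.84, 0.06, 0.05]
decoded = decode_prediction(pred, ["cat", "dog", "duck", "boat"])

print(decoded["objectness"])                                # 0.9
print(max(decoded["classes"], key=decoded["classes"].get))  # dog
```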
7.3.3 Multi-scale feature layers

These are convolutional feature layers that are added to the end of the truncated base network. They decrease in size progressively to allow predictions of detections at multiple scales.

MULTI-SCALE DETECTIONS

[Figure 7.18 The SSD base network looks at the anchor boxes to find features of a boat. Solid boxes indicate that the network has found boat features. Dotted boxes indicate no boat features.]

[Figure 7.19 Horses at different scales in an image. The horses that are far from the camera are easier to detect because they are small in size and can fit inside the priors (anchor boxes). The base network might fail to detect the horse closest to the camera because it needs a different scale of anchors to be able to create priors that cover more identifiable features.]

To understand the goal of the multi-scale feature layers and why they vary in size, let's look at the image of horses in figure 7.19. As you can see, the base network may be
able to detect the horse features in the background, but it may fail to detect the horse that is closest to the camera. To understand why, take a close look at the dotted bounding box and try to imagine this box alone, outside the context of the full image (see figure 7.20).

Can you see horse features in the bounding box in figure 7.20? No. To deal with objects of different scales in an image, some methods suggest preprocessing the image at different sizes and combining the results afterward (figure 7.21). However, by using feature maps from several convolutional layers of different sizes within a single network for prediction, we can mimic the same effect while also sharing parameters across all object scales. As the CNN gradually reduces the spatial dimensions, the resolution of the feature maps also decreases. SSD uses the lower-resolution layers to detect larger-scale objects. For example, 4 × 4 feature maps are used for larger-scale objects.

[Figure 7.20 An isolated horse feature]

[Figure 7.21 Lower-resolution feature maps detect larger-scale objects (right); higher-resolution feature maps detect smaller-scale objects (left).]

To visualize this, imagine that the network reduces the image dimensions to be able to fit all of the horses inside its bounding boxes (figure 7.22). The multi-scale feature layers resize the image dimensions while keeping the bounding-box sizes, so that they
can fit the larger horse. In reality, convolutional layers do not literally reduce the size of the image; this is just an illustration to help us understand the concept intuitively. The image is not simply resized: it goes through the convolutional process and thus won't look anything like itself anymore. It will look like a completely random image, but it will preserve its features. The convolutional process is explained in detail in chapter 3.

Using multi-scale feature maps improves network accuracy significantly. Liu et al. ran an experiment to measure the advantage gained by adding the multi-scale feature layers. Figure 7.23 shows the accuracy with different numbers of feature-map layers used for object detection. Notice that network accuracy drops from 74.3% mAP when the predictions come from all six layers to 62.4% when they come from only one source layer.

[Figure 7.22 Multi-scale feature layers reduce the spatial dimensions of the input image to detect objects with different scales. In this image, you can see that the new priors are somewhat zoomed out to cover more identifiable features of the horse close to the camera.]

[Figure 7.23 Effects of using multiple output layers, from the original paper. The detector's accuracy (mAP) increases as the authors add multi-scale feature sources, from 62.4% mAP with one source layer to 74.3% mAP with all six. (Source: Liu et al., 2016.)]

When using only the conv7 layer for
prediction, performance is the worst, reinforcing the message that it is critical to spread boxes of different scales over different layers.

ARCHITECTURE OF THE MULTI-SCALE LAYERS

Liu et al. decided to add six convolutional layers that decrease in size. They arrived at this configuration through a lot of tuning and trial and error until it produced the best results. As you saw in figure 7.17, convolutional layers 6 and 7 are pretty straightforward: conv6 has a kernel size of 3 × 3, and conv7 has a kernel size of 1 × 1. Layers 8 through 11, on the other hand, are treated more like blocks, where each block consists of two convolutional layers with kernel sizes of 1 × 1 and 3 × 3.

Here is the code implementation in Keras for layers 6 through 11 (you can see the full implementation in the book's downloadable code):

# conv6 and conv7
conv6 = Conv2D(1024, (3, 3), dilation_rate=(6, 6), activation='relu', padding='same')(pool5)
conv7 = Conv2D(1024, (1, 1), activation='relu', padding='same')(conv6)

# conv8 block
conv8_1 = Conv2D(256, (1, 1), activation='relu', padding='same')(conv7)
conv8_2 = Conv2D(512, (3, 3), strides=(2, 2), activation='relu', padding='valid')(conv8_1)

# conv9 block
conv9_1 = Conv2D(128, (1, 1), activation='relu', padding='same')(conv8_2)
conv9_2 = Conv2D(256, (3, 3), strides=(2, 2), activation='relu', padding='valid')(conv9_1)

# conv10 block
conv10_1 = Conv2D(128, (1, 1), activation='relu', padding='same')(conv9_2)
conv10_2 = Conv2D(256, (3, 3), strides=(1, 1), activation='relu', padding='valid')(conv10_1)

# conv11 block
conv11_1 = Conv2D(128, (1, 1), activation='relu', padding='same')(conv10_2)
conv11_2 = Conv2D(256, (3, 3), strides=(1, 1), activation='relu', padding='valid')(conv11_1)

As mentioned before, if you are not working in research or academia, you most probably won't need to implement object detection
architectures yourself. In most cases, you will download an open source implementation and build on it for your own problem. I added these code snippets only to help you internalize the information discussed about the different layer architectures.

Atrous (or dilated) convolutions

Dilated convolutions introduce another parameter to convolutional layers: the dilation rate. This defines the spacing between the values in a kernel. A 3 × 3 kernel with a dilation rate of 2 has the same field of view as a 5 × 5 kernel while using only nine parameters. Imagine taking a 5 × 5 kernel and deleting every second column and row.
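The "same field of view as a 5 × 5 kernel" claim follows from the standard formula for the effective size of a dilated kernel: k + (k − 1)(r − 1) for a k × k kernel with dilation rate r. A quick check:

```python
def effective_kernel_size(k, rate):
    # A k x k dilated kernel touches k taps spaced `rate` apart,
    # so it spans k + (k - 1) * (rate - 1) input positions.
    return k + (k - 1) * (rate - 1)

print(effective_kernel_size(3, 1))  # 3  (ordinary convolution)
print(effective_kernel_size(3, 2))  # 5  (matches the 5 x 5 claim above)
print(effective_kernel_size(3, 6))  # 13 (the rate used in SSD's conv6)
```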
This delivers a wider field of view at the same computational cost. Dilated convolutions are particularly popular in the field of real-time segmentation. Use them if you need a wide field of view and cannot afford multiple convolutions or larger kernels.

The following code builds a dilated 3 × 3 convolution layer with a dilation rate of 2 using Keras:

Conv2D(1024, (3, 3), dilation_rate=(2, 2), activation='relu', padding='same')

Next, we discuss the third and last component of the SSD architecture: NMS.

7.3.4 Non-maximum suppression

Given the large number of boxes generated by the detection layers per class during a forward pass of SSD at inference time, it is essential to prune most of the bounding boxes by applying the NMS technique (explained earlier in this chapter). Boxes with a confidence score and IoU below certain thresholds are discarded, and only the top N predictions are kept (figure 7.24). This ensures that only the most likely predictions are retained by the network, while the noisier ones are removed.

How does SSD use NMS to prune the bounding boxes? SSD sorts the predicted boxes by their confidence scores. Starting from the top confidence prediction, SSD evaluates whether any previously kept boxes of the same class overlap with the current box above a certain IoU threshold. (The IoU threshold value is tunable; Liu et al. chose 0.45 in their paper.) Boxes with IoU above the threshold are ignored because they overlap too much with a box that has a higher confidence score, so they are most likely detecting the same object. At most, we keep the top 200 predictions per image.
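The pruning procedure just described can be sketched in plain Python. This is a minimal greedy per-class NMS under the stated 0.45 IoU threshold, not the book's actual implementation:

```python
def iou(a, b):
    """IoU of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.45, top_n=200):
    """Greedy NMS: keep the highest-scoring box, drop boxes overlapping it."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep[:top_n]

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2] -- box 1 overlaps box 0 too much
```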
7.4 You only look once (YOLO)

Similar to the R-CNN family, YOLO is a family of object detection networks developed by Joseph Redmon et al. and improved over the years through the following versions:

- YOLOv1, published in 2016 [9]: Called "unified, real-time object detection" because it is a single-detection network that unifies the two components of a detector: object detector and class predictor.
- YOLOv2 (also known as YOLO9000), published later in 2016 [10]: Capable of detecting over 9,000 objects; hence the name. It was trained on the ImageNet and COCO datasets and achieved 16% mAP, which is not good, but it was very fast at test time.
- YOLOv3, published in 2018 [11]: Significantly larger than previous models; it achieved a mAP of 57.9%, the best result yet from the YOLO family of object detectors.

The YOLO family is a series of end-to-end DL models designed for fast object detection, and it was among the first attempts to build a fast real-time object detector. It is one of the fastest object detection algorithms out there. Although their accuracy is close to but not as good as the R-CNNs', YOLO models are popular for object detection because of their detection speed, often demonstrated on real-time video or camera feed input.

[9] Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi, "You Only Look Once: Unified, Real-Time Object Detection," 2016, http://arxiv.org/abs/1506.02640.
[10] Joseph Redmon and Ali Farhadi, "YOLO9000: Better, Faster, Stronger," 2016, http://arxiv.org/abs/1612.08242.
[11] Joseph Redmon and Ali Farhadi, "YOLOv3: An Incremental Improvement," 2018, http://arxiv.org/abs/1804.02767.

[Figure 7.24 Non-maximum suppression reduces the number of bounding boxes to only one box for each object.]
The creators of YOLO took a different approach than the previous networks. YOLO does not have a region proposal step like R-CNNs. Instead, it predicts over a limited number of bounding boxes by splitting the input into a grid of cells; each cell directly predicts a bounding box and object classification. The result is a large number of candidate bounding boxes that are consolidated into a final prediction using NMS (figure 7.25).

[Figure 7.25 YOLO splits the image into grids, predicts objects for each grid cell, and then uses NMS to finalize predictions.]

YOLOv1 proposed the general architecture, YOLOv2 refined the design and made use of predefined anchor boxes to improve bounding-box proposals, and YOLOv3 further refined the model architecture and training process. In this section, we are going to focus on YOLOv3 because it is currently the state-of-the-art architecture in the YOLO family.

7.4.1 How YOLOv3 works

The YOLO network splits the input image into a grid of S × S cells. If the center of the ground-truth box falls into a cell, that cell is responsible for detecting the existence of that object. Each grid cell predicts B bounding boxes and their objectness scores along with their class predictions, as follows:

- Coordinates of B bounding boxes: Similar to previous detectors, YOLO predicts four coordinates for each bounding box (bx, by, bw, bh), where x and y are offsets of the cell location.
- Objectness score (P0): Indicates the probability that the cell contains an object. The objectness score is passed through a sigmoid function to be treated as a probability with a value range between 0 and 1. The objectness score is calculated as follows:

P0 = Pr(containing an object) × IoU(pred, truth)
- Class prediction: If the bounding box contains an object, the network predicts the probability of K classes, where K is the total number of classes in your problem.

It is important to note that before v3, YOLO used a softmax function for the class scores. In v3, Redmon et al. decided to use a sigmoid instead. The reason is that softmax imposes the assumption that each box contains exactly one class, which is often not the case: if an object belongs to one class, it is guaranteed not to belong to another. While this assumption holds for some datasets, it does not work when we have overlapping classes like Woman and Person. A multilabel approach models the data more accurately.

[Figure 7.26 Example of a YOLOv3 workflow when applying a 13 × 13 grid to the input image. The input image is split into 169 cells. Each cell predicts B bounding boxes and their objectness scores along with their class predictions. In this example, the cell at the center of the ground truth makes predictions for 3 boxes (B = 3). Each prediction has the following attributes: bounding box coordinates (tx, ty, tw, th), objectness score (Po), and class predictions (P1, P2, ..., Pc).]

As you can see in figure 7.26, for each bounding box (B), the prediction looks like this: [(bounding box coordinates), (objectness score), (class predictions)]. We've learned that
the bounding box coordinates are four values, plus one value for the objectness score and K values for the class predictions. Then the total number of values predicted for all bounding boxes is 5B + K, multiplied by the number of cells in the grid S × S:

Total predicted values = S × S × (5B + K)

PREDICTIONS ACROSS DIFFERENT SCALES

Look closely at figure 7.26. Notice that the prediction feature map has three boxes. You might have wondered why. Similar to the anchors concept in SSD, YOLOv3 has nine anchors to allow for prediction at three different scales per cell. The detection layers make detections at feature maps of three different sizes, with strides 32, 16, and 8, respectively. This means that with an input image of size 416 × 416, we make detections on scales 13 × 13, 26 × 26, and 52 × 52 (figure 7.27). The 13 × 13 layer is responsible for detecting large objects, the 26 × 26 layer for detecting medium objects, and the 52 × 52 layer for detecting smaller objects.

This results in the prediction of three bounding boxes for each cell (B = 3). That's why in figure 7.26, the prediction feature map predicts Box 1, Box 2, and Box 3. The bounding box responsible for detecting the dog will be the one whose anchor has the highest IoU with the ground-truth box.

NOTE Detections at different layers help address the issue of detecting small objects, which was a frequent complaint about YOLOv2. The upsampling layers help the network preserve and learn fine-grained features, which are instrumental for detecting small objects.

The network does this by downsampling the input image until the first detection layer, where a detection is made using the feature maps of a layer with stride 32.
[Figure 7.27 Prediction feature maps at different scales: 52 × 52, 26 × 26, and 13 × 13]

Further, layers are upsampled by a factor of 2 and concatenated with feature maps of previous
layers having identical feature-map sizes. Another detection is then made at the layer with stride 16. The same upsampling procedure is repeated, and a final detection is made at the layer with stride 8.

YOLOV3 OUTPUT BOUNDING BOXES

For an input image of size 416 × 416, YOLO predicts ((52 × 52) + (26 × 26) + (13 × 13)) × 3 = 10,647 bounding boxes. That is a huge number of boxes for an output. In our dog example, we have only one object, and we want only one bounding box around it. How do we reduce the boxes from 10,647 down to 1?

First, we filter the boxes based on their objectness score: boxes with scores below a threshold are generally ignored. Second, we use NMS to cure the problem of multiple detections of the same object. For example, all three bounding boxes of the grid cell at the center of the image may detect the object, or adjacent cells may detect the same object.

7.4.2 YOLOv3 architecture

Now that you understand how YOLO works, going through the architecture will be very simple and straightforward. YOLO is a single neural network that unifies object detection and classification into one end-to-end network. The neural network architecture was inspired by the GoogLeNet model (Inception) for feature extraction. Instead of the Inception modules, YOLO uses 1 × 1 reduction layers followed by 3 × 3 convolutional layers. Redmon and Farhadi called this DarkNet (figure 7.28).

YOLOv2 used a custom deep architecture, darknet-19, an originally 19-layer network supplemented with 11 more layers for object detection. With this 30-layer architecture, YOLOv2 often struggled with small object detections. This was attributed to the loss of fine-grained features as the layers downsampled the input.
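The grid sizes and the 10,647 total follow directly from the strides. A quick check, assuming a 416 × 416 input and 3 boxes per cell as stated above:

```python
input_size = 416
strides = (32, 16, 8)   # the three YOLOv3 detection scales
boxes_per_cell = 3

grids = [input_size // s for s in strides]
print(grids)            # [13, 26, 52]

total = sum(g * g * boxes_per_cell for g in grids)
print(total)            # 10647
```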
[Figure 7.28 High-level architecture of YOLO]

However, YOLOv2's architecture was still lacking some of the most important elements that are now standard in
most state-of-the-art algorithms: residual blocks, skip connections, and upsampling. YOLOv3 incorporates all of these.

YOLOv3 uses a variant of DarkNet called Darknet-53 (figure 7.29). It is a 53-layer network trained on ImageNet. For the task of detection, 53 more layers are stacked onto it, giving YOLOv3 a 106-layer fully convolutional architecture. This is the reason YOLOv3 is slower than YOLOv2, but the extra depth comes with a great boost in detection accuracy.

FULL ARCHITECTURE OF YOLOV3

We just learned that YOLOv3 makes predictions across three different scales. This becomes a lot clearer when you see the full architecture, shown in figure 7.30.

The input image goes through the DarkNet-53 feature extractor, and then the image is downsampled by the network until layer 79. The network branches out and continues to downsample the image until it makes its first prediction at layer 82. This detection is made on a grid scale of 13 × 13, which is responsible for detecting large objects, as we explained before.

Next, the feature map from layer 79 is upsampled by 2× to dimensions 26 × 26 and concatenated with the feature map from layer 61. Then the second detection is made by layer 94, on a grid scale of 26 × 26, which is responsible for detecting medium objects.

Finally, a similar procedure is followed again: the feature map from layer 91 is passed through a few upsampling convolutional layers before being depth-concatenated with a feature map from layer 36.
A third prediction is made by layer 106, on a grid scale of 52 × 52, which is responsible for detecting small objects.

[Figure 7.29 DarkNet-53 feature extractor architecture: repeated blocks of 1 × 1 and 3 × 3 convolutional layers with residual connections, followed by average pooling, a fully connected layer, and a softmax. (Source: Redmon and Farhadi, 2018.)]
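The upsample-and-concatenate steps above can be sketched with plain shape arithmetic. The spatial sizes follow the text (13 × 13 from the layer-79 branch, 26 × 26 from layer 61); the channel counts here are invented purely for illustration:

```python
def upsample(shape):
    # 2x nearest-neighbor upsampling doubles height and width,
    # leaving the channel count unchanged
    h, w, c = shape
    return (h * 2, w * 2, c)

def concat(a, b):
    # Depth concatenation requires identical spatial dimensions
    assert a[:2] == b[:2]
    return (a[0], a[1], a[2] + b[2])

scale1 = (13, 13, 512)    # layer-79 branch -> first detection (stride 32)
from_61 = (26, 26, 256)   # feature map saved from layer 61 (channels illustrative)

scale2 = concat(upsample(scale1), from_61)
print(scale2)             # (26, 26, 768) -> feeds the stride-16 detection
```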
7.5 Project: Train an SSD network in a self-driving car application

The code for this project was created by Pierluigi Ferrari in his GitHub repository (https://github.com/pierluigiferrari/ssd_keras). The project was adapted for this chapter; you can find this implementation with the book's downloadable code.

Note that for this project, we are going to build a smaller SSD network called SSD7. SSD7 is a seven-layer version of the SSD300 network. It is important to note that while an SSD7 network yields some acceptable results, it is not an optimized network architecture. The goal is just to build a low-complexity network that is fast enough for you to train on your personal computer. It took me around 20 hours to train this network on the road traffic dataset; training could take a lot less time on a GPU.

NOTE The original repository created by Pierluigi Ferrari comes with implementation tutorials for SSD7, SSD300, and SSD512 networks. I encourage you to check it out.

[Figure 7.30 YOLOv3 network architecture: the DarkNet backbone feeds detection layers at three scales (stride 32 at layer 82, stride 16 at layer 94, and stride 8 at layer 106), with upsampling and concatenation in between. (Inspired by the diagram in Ayoosh Kathuria's post "What's new in YOLO v3?" Medium, 2018, http://mng.bz/lGN2.)]
In this project, we will use a toy dataset created by Udacity. You can visit Udacity's GitHub repository for more information on the dataset (https://github.com/udacity/self-driving-car/tree/master/annotations). It has more than 22,000 labeled images and 5 object classes: car, truck, pedestrian, bicyclist, and traffic light. All of the images have been resized to a height of 300 pixels and a width of 480 pixels. You can download the dataset as part of the book's code.

NOTE The GitHub data repository is owned by Udacity, and it may be updated after this writing. To avoid any confusion, I downloaded the dataset that I used to create this project and provided it with the book's code to allow you to replicate the results in this project.

What makes this dataset very interesting is that these are real-time images taken while driving in Mountain View, California, and neighboring cities during daylight conditions. No image cleanup was done. Take a look at the image examples in figure 7.31.

Figure 7.31 Example images from the Udacity self-driving dataset (Image copyright © 2016 Udacity and published under MIT License.)
As stated on Udacity's page, the dataset was labeled by CrowdAI and Autti. You can find the labels in CSV format in the folder, split into three files: training, validation, and test datasets. The labeling format is straightforward, as follows:

frame                     xmin   xmax   ymin   ymax   class_id
1478019952686311006.jpg   237    251    143    155    1

xmin, xmax, ymin, and ymax are the bounding-box coordinates; class_id is the correct label, and frame is the image name.

Data annotation using LabelImg
If you are annotating your own data, there are several open source labeling applications that you can use, like LabelImg (https://pypi.org/project/labelImg). They are very easy to set up and use.
Example of using the labelImg application to annotate images

7.5.1 Step 1: Build the model

Before jumping into the model training, take a close look at the build_model method in the keras_ssd7.py file. This file builds a Keras model with the SSD architecture. As we learned earlier in this chapter, the model consists of convolutional feature layers and a number of convolutional predictor layers that take their input from different feature layers.

Here is what the build_model method looks like. Please read the comments in the keras_ssd7.py file to understand the arguments passed:

def build_model(image_size,
                mode='training',
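Given that labeling format, the CSV rows can be grouped per image with Python's standard csv module. A small sketch; the parse_labels helper and the inline sample string are illustrative, not part of the ssd_keras code:

```python
import csv
import io

# Hypothetical sample in the dataset's column order:
# frame, xmin, xmax, ymin, ymax, class_id
sample = """frame,xmin,xmax,ymin,ymax,class_id
1478019952686311006.jpg,237,251,143,155,1
"""

def parse_labels(csv_text):
    """Group bounding boxes by image name.

    Returns {frame: [(xmin, xmax, ymin, ymax, class_id), ...]}.
    """
    boxes = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        box = (int(row['xmin']), int(row['xmax']),
               int(row['ymin']), int(row['ymax']), int(row['class_id']))
        boxes.setdefault(row['frame'], []).append(box)
    return boxes

print(parse_labels(sample))
```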
                l2_regularization=0.0,
                min_scale=0.1,
                max_scale=0.9,
                scales=None,
                aspect_ratios_global=[0.5, 1.0, 2.0],
                aspect_ratios_per_layer=None,
                two_boxes_for_ar1=True,
                clip_boxes=False,
                variances=[1.0, 1.0, 1.0, 1.0],
                coords='centroids',
                normalize_coords=False,
                subtract_mean=None,
                divide_by_stddev=None,
                swap_channels=False,
                confidence_thresh=0.01,
                iou_threshold=0.45,
                top_k=200,
                nms_max_output_size=400,
                return_predictor_sizes=False)

7.5.2 Step 2: Model configuration

In this section, we set the model configuration parameters. First we set the height, width, and number of color channels to whatever we want the model to accept as image input. If your input images have a different size than defined here, or if your images have non-uniform sizes, you must use the data generator's image transformations (resizing and/or cropping) so that your images end up having the required input size before they are fed to the model:

img_height = 300         # Height, width, and channels of the input images
img_width = 480
img_channels = 3
intensity_mean = 127.5   # Set to your preference (maybe None). The current
intensity_range = 127.5  # settings transform the input pixel values to the
                         # interval [-1, 1].

The number of classes is the number of positive classes in your dataset: for example, 20 for PASCAL VOC or 80 for COCO. Class ID 0 must always be reserved for the background class:

n_classes = 5                            # Number of classes in our dataset
scales = [0.08, 0.16, 0.32, 0.64, 0.96]  # An explicit list of anchor box scaling
                                         # factors. If this is passed, it overrides
                                         # the min_scale and max_scale arguments.
aspect_ratios = [0.5, 1.0, 2.0]          # List of aspect ratios for the anchor boxes
steps = None     # In case you'd like to set the step sizes for the anchor
                 # box grids manually; not recommended
offsets = None   # In case you'd like to set the offsets for the anchor
                 # box grids manually; not recommended
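To see what the scales and aspect_ratios lists mean geometrically, here is a sketch using the SSD-paper convention (w = s·√ar, h = s/√ar, in pixels relative to a base image side). The anchor_dims helper and the base argument are illustrative assumptions; the ssd_keras encoder handles additional options (the extra box for aspect ratio 1, per-layer ratios) not shown here:

```python
import math

def anchor_dims(scale, aspect_ratios, base=300):
    """Anchor (width, height) pairs for one predictor layer.

    Follows the SSD-paper convention: width = scale * sqrt(ar),
    height = scale / sqrt(ar), scaled by a base image side length.
    A sketch only, not the repository's exact computation.
    """
    return [(scale * math.sqrt(ar) * base, scale / math.sqrt(ar) * base)
            for ar in aspect_ratios]

# Anchors of the smallest-scale predictor layer in the config above
for w, h in anchor_dims(0.08, [0.5, 1.0, 2.0]):
    print(round(w, 1), round(h, 1))
```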
two_boxes_for_ar1 = True          # Specifies whether to generate two anchor
                                  # boxes for aspect ratio 1
clip_boxes = False                # Specifies whether to clip the anchor boxes to
                                  # lie entirely within the image boundaries
variances = [1.0, 1.0, 1.0, 1.0]  # List of variances by which the encoded
                                  # target coordinates are scaled
normalize_coords = True           # Specifies whether the model is supposed to
                                  # use coordinates relative to the image size

7.5.3 Step 3: Create the model

Now we call the build_model() function to build our model:

model = build_model(image_size=(img_height, img_width, img_channels),
                    n_classes=n_classes,
                    mode='training',
                    l2_regularization=0.0005,
                    scales=scales,
                    aspect_ratios_global=aspect_ratios,
                    aspect_ratios_per_layer=None,
                    two_boxes_for_ar1=two_boxes_for_ar1,
                    steps=steps,
                    offsets=offsets,
                    clip_boxes=clip_boxes,
                    variances=variances,
                    normalize_coords=normalize_coords,
                    subtract_mean=intensity_mean,
                    divide_by_stddev=intensity_range)

You can optionally load saved weights. If you don't want to load weights, skip the following code snippet:

model.load_weights('<path/to/model.h5>', by_name=True)

Instantiate an Adam optimizer and the SSD loss function, and compile the model. Here, we will use a custom Keras function called SSDLoss. It implements the multi-task log loss for classification and smooth L1 loss for localization. neg_pos_ratio and alpha are set as in the SSD paper (Liu et al., 2016):

adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
model.compile(optimizer=adam, loss=ssd_loss.compute_loss)
7.5.4 Step 4: Load the data

To load the data, follow these steps:

1 Instantiate two DataGenerator objects, one for training and one for validation:

train_dataset = DataGenerator(load_images_into_memory=False,
                              hdf5_dataset_path=None)
val_dataset = DataGenerator(load_images_into_memory=False,
                            hdf5_dataset_path=None)

2 Parse the image and label lists for the training and validation datasets:

images_dir = 'path_to_downloaded_directory'
train_labels_filename = 'path_to_dataset/labels_train.csv'   # Ground truth
val_labels_filename = 'path_to_dataset/labels_val.csv'

train_dataset.parse_csv(images_dir=images_dir,
                        labels_filename=train_labels_filename,
                        input_format=['image_name', 'xmin', 'xmax', 'ymin',
                                      'ymax', 'class_id'],
                        include_classes='all')

val_dataset.parse_csv(images_dir=images_dir,
                      labels_filename=val_labels_filename,
                      input_format=['image_name', 'xmin', 'xmax', 'ymin',
                                    'ymax', 'class_id'],
                      include_classes='all')

# Get the number of samples in the training and validation datasets
train_dataset_size = train_dataset.get_dataset_size()
val_dataset_size = val_dataset.get_dataset_size()

print("Number of images in the training dataset:\t{:>6}".format(train_dataset_size))
print("Number of images in the validation dataset:\t{:>6}".format(val_dataset_size))

This cell should print out the size of your training and validation datasets as follows:

Number of images in the training dataset:    18000
Number of images in the validation dataset:   4241

3 Set the batch size:

batch_size = 16

As you learned in chapter 4, you can increase the batch size to get a boost in the computing speed based on the hardware that you are using for this training.
4 Define the data augmentation process:

data_augmentation_chain = DataAugmentationConstantInputSize(
    random_brightness=(-48, 48, 0.5),
    random_contrast=(0.5, 1.8, 0.5),
    random_saturation=(0.5, 1.8, 0.5),
    random_hue=(18, 0.5),
    random_flip=0.5,
    random_translate=((0.03, 0.5), (0.03, 0.5), 0.5),
    random_scale=(0.5, 2.0, 0.5),
    n_trials_max=3,
    clip_boxes=True,
    overlap_criterion='area',
    bounds_box_filter=(0.3, 1.0),
    bounds_validator=(0.5, 1.0),
    n_boxes_min=1,
    background=(0, 0, 0))

5 Instantiate an encoder that can encode ground-truth labels into the format needed by the SSD loss function. Here, the encoder constructor needs the spatial dimensions of the model's predictor layers to create the anchor boxes:

predictor_sizes = [model.get_layer('classes4').output_shape[1:3],
                   model.get_layer('classes5').output_shape[1:3],
                   model.get_layer('classes6').output_shape[1:3],
                   model.get_layer('classes7').output_shape[1:3]]

ssd_input_encoder = SSDInputEncoder(img_height=img_height,
                                    img_width=img_width,
                                    n_classes=n_classes,
                                    predictor_sizes=predictor_sizes,
                                    scales=scales,
                                    aspect_ratios_global=aspect_ratios,
                                    two_boxes_for_ar1=two_boxes_for_ar1,
                                    steps=steps,
                                    offsets=offsets,
                                    clip_boxes=clip_boxes,
                                    variances=variances,
                                    matching_type='multi',
                                    pos_iou_threshold=0.5,
                                    neg_iou_limit=0.3,
                                    normalize_coords=normalize_coords)

6 Create the generator handles that will be passed to Keras's fit_generator() function:

train_generator = train_dataset.generate(batch_size=batch_size,
                                         shuffle=True,
                                         transformations=[data_augmentation_chain],
                                         label_encoder=ssd_input_encoder,
                                         returns={'processed_images',
                                                  'encoded_labels'},
                                         keep_images_without_gt=False)

val_generator = val_dataset.generate(batch_size=batch_size,
                                     shuffle=False,
                                     transformations=[],
                                     label_encoder=ssd_input_encoder,
                                     returns={'processed_images',
                                              'encoded_labels'},
                                     keep_images_without_gt=False)

7.5.5 Step 5: Train the model

Everything is set, and we are ready to train our SSD7 network. We've already chosen an optimizer and a learning rate and set the batch size; now let's set the remaining training parameters and train the network. There are no new parameters here that you haven't learned already. We will set the model checkpoint, early stopping, and learning rate reduction:

model_checkpoint = ModelCheckpoint(
    filepath='ssd7_epoch-{epoch:02d}_loss-{loss:.4f}_val_loss-{val_loss:.4f}.h5',
    monitor='val_loss',
    verbose=1,
    save_best_only=True,
    save_weights_only=False,
    mode='auto',
    period=1)

csv_logger = CSVLogger(filename='ssd7_training_log.csv',
                       separator=',',
                       append=True)

# Early stopping if val_loss did not improve for 10 consecutive epochs
early_stopping = EarlyStopping(monitor='val_loss',
                               min_delta=0.0,
                               patience=10,
                               verbose=1)

# Learning rate reduction when val_loss plateaus
reduce_learning_rate = ReduceLROnPlateau(monitor='val_loss',
                                         factor=0.2,
                                         patience=8,
                                         verbose=1,
                                         epsilon=0.001,
                                         cooldown=0,
                                         min_lr=0.00001)

callbacks = [model_checkpoint, csv_logger, early_stopping, reduce_learning_rate]

Set one epoch to consist of 1,000 training steps. I've arbitrarily set the number of epochs to 20 here. This does not necessarily mean that 20,000 training steps is the optimum number. Depending on the model, dataset, learning rate, and so on, you might have to train much longer (or less) to achieve convergence:
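For context on the 1,000-steps-per-epoch choice: with the dataset sizes printed in step 2 and batch_size = 16, one full pass over the training set takes slightly more than 1,000 batches. A quick check (full_pass_steps is an illustrative helper, not from the project code):

```python
from math import ceil

def full_pass_steps(dataset_size, batch_size):
    """Number of batches needed to see every image once."""
    return ceil(dataset_size / batch_size)

print(full_pass_steps(18000, 16))  # 1125 batches per full training pass
print(full_pass_steps(4241, 16))   # 266 validation batches
```

The same arithmetic is what the validation_steps=ceil(val_dataset_size/batch_size) argument below computes.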
initial_epoch = 0   # If you're resuming previous training, set
final_epoch = 20    # initial_epoch and final_epoch accordingly.
steps_per_epoch = 1000

# Start training
history = model.fit_generator(generator=train_generator,
                              steps_per_epoch=steps_per_epoch,
                              epochs=final_epoch,
                              callbacks=callbacks,
                              validation_data=val_generator,
                              validation_steps=ceil(val_dataset_size/batch_size),
                              initial_epoch=initial_epoch)

7.5.6 Step 6: Visualize the loss

Let's visualize the loss and val_loss values to look at how the training and validation loss evolved and check whether our training is going in the right direction (figure 7.32):

plt.figure(figsize=(20, 12))
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.legend(loc='upper right', prop={'size': 24})

Figure 7.32 Visualized loss and val_loss values during SSD7 training for 20 epochs
7.5.7 Step 7: Make predictions

Now let's make some predictions on the validation dataset with the trained model. For convenience, we'll use the validation generator that we've already set up. Feel free to change the batch size:

# 1. Set the generator for the predictions.
predict_generator = val_dataset.generate(batch_size=1,
                                         shuffle=True,
                                         transformations=[],
                                         label_encoder=None,
                                         returns={'processed_images',
                                                  'processed_labels',
                                                  'filenames'},
                                         keep_images_without_gt=False)

# 2. Generate samples.
batch_images, batch_labels, batch_filenames = next(predict_generator)

# 3. Make a prediction.
y_pred = model.predict(batch_images)

# 4. Decode the raw prediction y_pred.
y_pred_decoded = decode_detections(y_pred,
                                   confidence_thresh=0.5,
                                   iou_threshold=0.45,
                                   top_k=200,
                                   normalize_coords=normalize_coords,
                                   img_height=img_height,
                                   img_width=img_width)

np.set_printoptions(precision=2, suppress=True, linewidth=90)
print("Predicted boxes:\n")
print('   class   conf xmin   ymin   xmax   ymax')
print(y_pred_decoded[i])

This code snippet prints the predicted bounding boxes along with their class and the level of confidence for each one, as shown in figure 7.33. When we draw these predicted boxes onto the image, as shown in figure 7.34, each predicted box has its confidence next to the category name. The ground-truth boxes are also drawn onto the image for comparison.

Figure 7.33 Predicted bounding boxes, confidence level, and class:

   class  conf   xmin    ymin    xmax    ymax
[[ 1.    0.93  131.96  152.12  159.29  172.3 ]
 [ 1.    0.88   52.39  151.89   87.44  179.34]
 [ 1.    0.88  262.65  140.26  286.45  164.05]
 [ 1.    0.6   234.53  148.43  267.19  170.34]
 [ 1.    0.58   73.2   153.51   91.79  175.64]
 [ 1.    0.5   225.06  130.93  274.15  169.79]
 [ 2.    0.6   266.38  116.4   282.23  173.16]]
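The iou_threshold=0.45 argument above controls non-maximum suppression during decoding: among overlapping boxes, those whose overlap with a higher-confidence box exceeds the threshold are discarded. The overlap measure is intersection over union (IoU); here is a sketch of the metric itself, not the repository's implementation:

```python
def iou(box_a, box_b):
    """Intersection over union of two (xmin, ymin, xmax, ymax) boxes.

    Returns a value in [0, 1]: 0 for disjoint boxes, 1 for identical ones.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (clamped to zero if the boxes are disjoint)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7: overlap 1, union 7
```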
Summary

- Image classification is the task of predicting the type or class of an object in an image.
- Object detection is the task of predicting the location of objects in an image via bounding boxes and the classes of the located objects.
- The general framework of object detection systems consists of four main components: region proposals, feature extraction and predictions, non-maximum suppression, and evaluation metrics.
- Object detection algorithms are evaluated using two main metrics: frames per second (FPS) to measure the network's speed, and mean average precision (mAP) to measure the network's precision.
- The three most popular object detection systems are the R-CNN family of networks, SSD, and the YOLO family of networks.
- The R-CNN family of networks has three main variations: R-CNN, Fast R-CNN, and Faster R-CNN. R-CNN and Fast R-CNN use a selective search algorithm to propose RoIs, whereas Faster R-CNN is an end-to-end DL system that uses a region proposal network to propose RoIs.
- The YOLO family of networks includes YOLOv1, YOLOv2 (or YOLO9000), and YOLOv3.

Figure 7.34 Predicted boxes drawn onto the image (each box is labeled with its class and confidence, such as car 0.93 and jeep 0.88)
- R-CNN is a multi-stage detector: it separates the process of predicting the objectness score of the bounding box and the object class into two different stages.
- SSD and YOLO are single-stage detectors: the image is passed once through the network to predict the objectness score and the object class.
- In general, single-stage detectors tend to be less accurate than two-stage detectors but are significantly faster.
Part 3

Generative models and visual embeddings

At this point, we've covered a lot of ground about how deep neural networks can help us understand image features and perform deterministic tasks on them, like object classification and detection. Now it's time to turn our focus to a different, slightly more advanced area of computer vision and deep learning: generative models. These neural network models actually create new content that didn't exist before: new people, new objects, a new reality, like magic! We train these models on a dataset from a specific domain, and then they create new images with objects from the same domain that look close to the real data. In this part of the book, we'll cover both training and image generation, as well as look at neural transfer and the cutting edge of what's happening in visual embeddings.
Generative adversarial networks (GANs)

This chapter covers
- Understanding the basic components of GANs: generative and discriminative models
- Evaluating generative models
- Learning about popular vision applications of GANs
- Building a GAN model

Generative adversarial networks (GANs) are a new type of neural architecture introduced by Ian Goodfellow and other researchers at the University of Montreal, including Yoshua Bengio, in 2014.¹ GANs have been called "the most interesting idea in the last 10 years in ML" by Yann LeCun, Facebook's AI research director. The excitement is well justified. The most notable feature of GANs is their capacity to create hyperrealistic images, videos, music, and text. For example, except for the far-right column, none of the faces shown on the right side of figure 8.1 belong to real humans; they are all fake. The same is true for the handwritten digits on the

1 Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, "Generative Adversarial Networks," 2014, http://arxiv.org/abs/1406.2661.
left side of the figure. This shows a GAN's ability to learn features from the training images and imagine its own new images using the patterns it has learned.

Figure 8.1 Illustration of GANs' abilities by Goodfellow and co-authors. These are samples generated by GANs after training on two datasets: MNIST and the Toronto Faces Dataset (TFD). In both cases, the right-most column contains true data. This shows that the produced data is really generated and not only memorized by the network. (Source: Goodfellow et al., 2014.)

We've learned in the past chapters how deep neural networks can be used to understand image features and perform deterministic tasks on them like object classification and detection. In this part of the book, we will talk about a different type of application for deep learning in the computer vision world: generative models. These are neural network models that are able to imagine and produce new content that hasn't been created before. They can imagine new worlds, new people, and new realities in a seemingly magical way. We train generative models by providing a training dataset in a specific domain; their job is to create images that have new objects from the same domain that look like the real data.

For a long time, humans have had an advantage over computers: the ability to imagine and create. Computers have excelled at solving problems like regression, classification, and clustering. But with the introduction of generative networks, researchers can make computers generate content of the same or higher quality compared to that created by their human counterparts. By learning to mimic any distribution of data, computers can be taught to create worlds that are similar to our own in any domain: images, music, speech, prose. They are robot artists, in a sense, and their output is impressive. GANs are also seen as an important stepping stone toward achieving artificial general intelligence (AGI), an artificial system capable of matching human cognitive capacity to acquire expertise in virtually any domain: from images, to language, to the creative skills needed to compose sonnets.

Naturally, this ability to generate new content makes GANs look a little bit like magic, at least at first sight. In this chapter, we will only attempt to scratch the surface of what is possible with GANs. We will overcome the apparent magic of GANs by diving into the architectural ideas and math behind these models, in order to provide
the necessary theoretical knowledge and practical skills to continue exploring any facet of this field that you find most interesting. Not only will we discuss the fundamental notions that GANs rely on, but we will also implement and train an end-to-end GAN and go through it step by step. Let's get started!

8.1 GAN architecture

GANs are based on the idea of adversarial training. The GAN architecture basically consists of two neural networks that compete against each other:

- The generator tries to convert random noise into observations that look as if they have been sampled from the original dataset.
- The discriminator tries to predict whether an observation comes from the original dataset or is one of the generator's forgeries.

This competitiveness helps them to mimic any distribution of data. I like to think of the GAN architecture as two boxers fighting (figure 8.2): in their quest to win the bout, both are learning each other's moves and techniques. They start with little knowledge about their opponent, and as the match goes on, they learn and become better.

Another analogy will help drive home the idea: think of a GAN as the opposition of a counterfeiter and a cop in a game of cat and mouse, where the counterfeiter is learning to pass false notes, and the cop is learning to detect them (figure 8.3). Both are dynamic: as the counterfeiter learns to perfect creating false notes, the cop is in training and getting better at detecting the fakes. Each side learns the other's methods in a constant escalation.

Figure 8.2 A fight between two adversarial networks: the generator generates images from the features learned in the training dataset, and the discriminator predicts whether the image is real or fake.
As you can see in the architecture diagram in figure 8.4, a GAN takes the following steps:

1 The generator takes in random numbers and returns an image.
2 This generated image is fed into the discriminator alongside a stream of images taken from the actual, ground-truth dataset.
3 The discriminator takes in both real and fake images and returns probabilities: numbers between 0 and 1, with 1 representing a prediction of authenticity and 0 representing a prediction of fake.

If you take a close look at the generator and discriminator networks, you will notice that the generator network is an inverted ConvNet that starts with the flattened vector. The images are upscaled until they are similar in size to the images in the training dataset. We will dive deeper into the generator architecture later in this chapter; I just wanted you to notice this phenomenon now.

Figure 8.3 The GAN's generator and discriminator models are like a counterfeiter and a police officer.

Figure 8.4 The GAN architecture is composed of generator and discriminator networks. Note that the discriminator network is a typical CNN where the convolutional layers reduce in size until they get to the flattened layer. The generator network, on the other hand, is an inverted CNN that starts with the flattened vector: the convolutional layers increase in size until they form the dimension of the input images.
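Step 3's labeling convention (1 = real, 0 = fake) is what the discriminator is trained against in step 2. A minimal sketch of that bookkeeping with NumPy; gan_training_batch is an illustrative helper, and a real training loop would alternate this discriminator update with a generator update:

```python
import numpy as np

def gan_training_batch(real_images, fake_images):
    """Assemble one discriminator training batch.

    Real images are labeled 1, generated ones 0; the discriminator is
    then fit on the stacked batch. A sketch of the bookkeeping only,
    not a full training loop.
    """
    x = np.concatenate([real_images, fake_images], axis=0)
    y = np.concatenate([np.ones(len(real_images)),
                        np.zeros(len(fake_images))])
    return x, y

real = np.random.rand(16, 28, 28, 1)   # stand-in for dataset images
fake = np.random.rand(16, 28, 28, 1)   # stand-in for generator output
x, y = gan_training_batch(real, fake)
print(x.shape, y.sum())  # 32 images total, 16 of them labeled real
```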
8.1.1 Deep convolutional GANs (DCGANs)

In the original GAN paper in 2014, multilayer perceptron (MLP) networks were used to build the generator and discriminator networks. However, since then, it has been proven that convolutional layers give greater predictive power to the discriminator, which in turn enhances the accuracy of the generator and the overall model. This type of GAN is called a deep convolutional GAN (DCGAN) and was developed by Alec Radford et al. in 2016.² Now, all GAN architectures contain convolutional layers, so the "DC" is implied when we talk about GANs; for the rest of this chapter, we refer to DCGANs as both GANs and DCGANs. You can also go back to chapters 2 and 3 to learn more about the differences between MLP and CNN networks and why CNN is preferred for image problems. Next, let's dive deeper into the architecture of the discriminator and generator networks.

8.1.2 The discriminator model

As explained earlier, the goal of the discriminator is to predict whether an image is real or fake. This is a typical supervised classification problem, so we can use the traditional classifier network that we learned about in the previous chapters. The network consists of stacked convolutional layers, followed by a dense output layer with a sigmoid activation function. We use a sigmoid activation function because this is a binary classification problem: the goal of the network is to output prediction probability values that range between 0 and 1, where 0 means the image generated by the generator is fake and 1 means it is 100% real.

The discriminator is a normal, well-understood classification model. As you can see in figure 8.5, training the discriminator is pretty straightforward. We feed the discriminator

2 Alec Radford, Luke Metz, and Soumith Chintala, "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks," 2016, http://arxiv.org/abs/1511.06434.

Figure 8.5 The discriminator for the GAN: real images from the training dataset and fake images from the generator are fed through convolutional layers ending in a sigmoid function that outputs a realness probability.
labeled images: fake (or generated) and real images. The real images come from the training dataset, and the fake images are the output of the generator model.

Now, let's implement the discriminator network in Keras. At the end of this chapter, we will compile all the code snippets together to build an end-to-end GAN. We will first implement a discriminator_model function. In this code snippet, the shape of the image input is 28 × 28; you can change it as needed for your problem:

def discriminator_model():
    # Instantiate a sequential model and name it discriminator
    discriminator = Sequential()

    # Add a convolutional layer to the discriminator model
    discriminator.add(Conv2D(32, kernel_size=3, strides=2,
                             input_shape=(28, 28, 1), padding="same"))
    # Add a leaky ReLU activation function
    discriminator.add(LeakyReLU(alpha=0.2))
    # Add a dropout layer with a 25% dropout probability
    discriminator.add(Dropout(0.25))

    # Add a second convolutional layer with zero padding
    discriminator.add(Conv2D(64, kernel_size=3, strides=2, padding="same"))
    discriminator.add(ZeroPadding2D(padding=((0, 1), (0, 1))))
    # Add a batch normalization layer for faster learning and higher accuracy
    discriminator.add(BatchNormalization(momentum=0.8))
    discriminator.add(LeakyReLU(alpha=0.2))
    discriminator.add(Dropout(0.25))

    # Add a third convolutional layer with batch normalization,
    # leaky ReLU, and a dropout
    discriminator.add(Conv2D(128, kernel_size=3, strides=2, padding="same"))
    discriminator.add(BatchNormalization(momentum=0.8))
    discriminator.add(LeakyReLU(alpha=0.2))
    discriminator.add(Dropout(0.25))

    # Add the fourth convolutional layer with batch normalization,
    # leaky ReLU, and a dropout
    discriminator.add(Conv2D(256, kernel_size=3, strides=1, padding="same"))
    discriminator.add(BatchNormalization(momentum=0.8))
    discriminator.add(LeakyReLU(alpha=0.2))
    discriminator.add(Dropout(0.25))

    # Flatten the network and add the output dense layer
    # with a sigmoid activation function
    discriminator.add(Flatten())
    discriminator.add(Dense(1, activation='sigmoid'))

    # Print the model summary
    discriminator.summary()

    # Set the input image shape
    img_shape = (28, 28, 1)
    img = Input(shape=img_shape)

    # Run the discriminator model to get the output probability
    probability = discriminator(img)

    # Return a model that takes the image as input
    # and produces the probability output
    return Model(img, probability)
The output summary of the discriminator model is shown in figure 8.6. As you might have noticed, there is nothing new: the discriminator model follows the regular pattern of the traditional CNN networks that we learned about in chapters 3, 4, and 5. We stack convolutional, batch normalization, activation, and dropout layers to create our model. All of these layers have hyperparameters that we tune when we are training the network. For your own implementation, you can tune these hyperparameters and add or remove layers as you see fit. Tuning CNN hyperparameters is explained in detail in chapters 3 and 4.

Layer (type)                  Output Shape         Param #
conv2d_1 (Conv2D)             (None, 14, 14, 32)   320
leaky_re_lu_1 (LeakyReLU)     (None, 14, 14, 32)   0
dropout_1 (Dropout)           (None, 14, 14, 32)   0
conv2d_2 (Conv2D)             (None, 7, 7, 64)     18496
zero_padding2d_1 (ZeroPaddin  (None, 8, 8, 64)     0
batch_normalization_1 (Batch  (None, 8, 8, 64)     256
leaky_re_lu_2 (LeakyReLU)     (None, 8, 8, 64)     0
dropout_2 (Dropout)           (None, 8, 8, 64)     0
conv2d_3 (Conv2D)             (None, 4, 4, 128)    73856
batch_normalization_2 (Batch  (None, 4, 4, 128)    512
leaky_re_lu_3 (LeakyReLU)     (None, 4, 4, 128)    0
dropout_3 (Dropout)           (None, 4, 4, 128)    0
conv2d_4 (Conv2D)             (None, 4, 4, 256)    295168
batch_normalization_3 (Batch  (None, 4, 4, 256)    1024
leaky_re_lu_4 (LeakyReLU)     (None, 4, 4, 256)    0
dropout_4 (Dropout)           (None, 4, 4, 256)    0
flatten_1 (Flatten)           (None, 4096)         0
dense_1 (Dense)               (None, 1)            4097

Total params: 393,729
Trainable params: 392,833
Non-trainable params: 896

Figure 8.6 The output summary for the discriminator model
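The spatial sizes in the summary follow the "same"-padding rule out = ceil(in / stride). A quick check; same_conv_out is an illustrative helper, and we ignore the ZeroPadding2D layer, which happens not to change the arithmetic here since ceil(7/2) = 4 either way:

```python
from math import ceil

def same_conv_out(size, stride):
    """Spatial output size of a 'same'-padded convolution."""
    return ceil(size / stride)

size = 28
for stride in (2, 2, 2, 1):  # strides of the four Conv2D layers above
    size = same_conv_out(size, stride)
    print(size)  # prints 14, 7, 4, 4 over the loop, matching the summary
```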
CHAPTER 8 Generative adversarial networks (GANs)

In the output summary in figure 8.6, note that the width and height of the output feature maps decrease in size, whereas the depth increases. This is the expected behavior for traditional CNN networks, as we've seen in previous chapters. Let's see what happens to the feature maps' size in the generator network in the next section.

8.1.3 The generator model
The generator takes in some random data and tries to mimic the training dataset to generate fake images. Its goal is to trick the discriminator by trying to generate images that are perfect replicas of the training dataset. As it is trained, it gets better and better after each iteration. But the discriminator is being trained at the same time, so the generator has to keep improving as the discriminator learns its tricks.
As you can see in figure 8.7, the generator model looks like an inverted ConvNet. The generator takes a vector input with some random noise data and reshapes it into a cube volume that has a width, height, and depth. This volume is meant to be treated as a feature map that will be fed to several convolutional layers that will create the final image.

UPSAMPLING TO SCALE FEATURE MAPS
Traditional convolutional neural networks use pooling layers to downsample input images. In order to scale up the feature maps, we use upsampling layers that scale the image dimensions by repeating each row and column of the input pixels.
Keras has an upsampling layer (UpSampling2D) that scales the image dimensions by taking a scaling factor (size) as an argument:

keras.layers.UpSampling2D(size=(2, 2))

This line of code repeats every row and column of the image matrix two times, because the size of the scaling factor is set to (2, 2); see figure 8.8.
If the scaling factor is (3, 3), the upsampling layer repeats each row and column of the input matrix three times, as shown in figure 8.9.

Figure 8.7 The generator model of the GAN: a random noise input vector is reshaped to 7 × 7 × 128, upsampled to 14 × 14 × 128 and then to 28 × 28, convolved down in depth to 28 × 28 × 64, and finally output as a 28 × 28 × 1 image.
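The row-and-column repetition that UpSampling2D performs can be reproduced in a few lines of plain Python (a sketch of the repetition logic only, not the Keras implementation):

```python
def upsample2d(matrix, size=(2, 2)):
    """Nearest-neighbor upsampling: repeat each row and column of the input."""
    row_factor, col_factor = size
    output = []
    for row in matrix:
        expanded = []
        for value in row:
            expanded.extend([value] * col_factor)  # repeat each column
        for _ in range(row_factor):                # repeat each row
            output.append(list(expanded))
    return output

# Matches figure 8.8: a 2 x 2 input becomes 4 x 4 with scaling factor (2, 2)
print(upsample2d([[1, 2], [3, 4]]))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Calling `upsample2d([[1, 2], [3, 4]], size=(3, 3))` gives the 6 × 6 matrix shown in figure 8.9.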
When we build the generator model, we keep adding upsampling layers until the size of the feature maps is similar to the training dataset. You will see how this is implemented in Keras in the next section.
Now, let's build the generator_model function that builds the generator network:

def generator_model():
    # Instantiate a sequential model and name it generator
    generator = Sequential()

    # Add a dense layer that has 128 * 7 * 7 neurons
    generator.add(Dense(128 * 7 * 7, activation="relu", input_dim=100))
    # Reshape the image dimensions to 7 x 7 x 128
    generator.add(Reshape((7, 7, 128)))
    # Upsampling layer to double the image dimensions to 14 x 14
    generator.add(UpSampling2D(size=(2, 2)))

    # Convolutional + batch normalization layers
    generator.add(Conv2D(128, kernel_size=3, padding="same"))
    generator.add(BatchNormalization(momentum=0.8))
    generator.add(Activation("relu"))
    # Upsample the image dimensions to 28 x 28
    generator.add(UpSampling2D(size=(2, 2)))

    # Convolutional + batch normalization layers
    generator.add(Conv2D(64, kernel_size=3, padding="same"))
    generator.add(BatchNormalization(momentum=0.8))
    generator.add(Activation("relu"))

    # Convolutional layer with filters = 1. We don't add upsampling here
    # because the image size of 28 x 28 is equal to the image size in the
    # MNIST dataset. You can adjust this for your own problem.
    generator.add(Conv2D(1, kernel_size=3, padding="same"))
    generator.add(Activation("tanh"))

    # Print the model summary
    generator.summary()

    # Generate the input noise vector of length 100. We use 100 here
    # to create a simple network.
    noise = Input(shape=(100,))
    # Run the generator model to create the fake image
    fake_image = generator(noise)

    # Return a model that takes the noise vector as input
    # and outputs the fake image
    return Model(noise, fake_image)

Figure 8.8 Upsampling example when the scaling size is (2, 2):

Input  = [[1, 2],
          [3, 4]]

Output = [[1, 1, 2, 2],
          [1, 1, 2, 2],
          [3, 3, 4, 4],
          [3, 3, 4, 4]]

Figure 8.9 Upsampling example when the scaling size is (3, 3):

[[1. 1. 1. 2. 2. 2.]
 [1. 1. 1. 2. 2. 2.]
 [1. 1. 1. 2. 2. 2.]
 [3. 3. 3. 4. 4. 4.]
 [3. 3. 3. 4. 4. 4.]
 [3. 3. 3. 4. 4. 4.]]
The output summary of the generator model is shown in figure 8.10. In the code snippet, the only new component is the UpSampling2D layer, which doubles its input dimensions by repeating pixels. Similar to the discriminator, we stack convolutional layers on top of each other and add other optimization layers like BatchNormalization. The key difference in the generator model is that it starts with a flattened vector; the feature maps are upsampled until they have dimensions similar to the training dataset. All of these layers have hyperparameters that we tune when we are training the network. For your own implementation, you can tune these hyperparameters and add or remove layers as you see fit.
Notice the change in the output shape after each layer. It starts from a 1D vector of 6,272 neurons. We reshaped it to a 7 × 7 × 128 volume, and then the width and height were upsampled twice, to 14 × 14 and then 28 × 28. The depth decreased from 128 to 64 to 1 because this network is built to deal with the grayscale MNIST dataset project that we will implement later in this chapter. If you are building a generator model to generate color images, then you should set the filters in the last convolutional layer to 3.
Layer (type)                       Output Shape         Param #
dense_2 (Dense)                    (None, 6272)         633472
reshape_1 (Reshape)                (None, 7, 7, 128)    0
up_sampling2d_1 (UpSampling2D)     (None, 14, 14, 128)  0
conv2d_5 (Conv2D)                  (None, 14, 14, 128)  147584
batch_normalization_4 (BatchNorm)  (None, 14, 14, 128)  512
activation_1 (Activation)          (None, 14, 14, 128)  0
up_sampling2d_2 (UpSampling2D)     (None, 28, 28, 128)  0
conv2d_6 (Conv2D)                  (None, 28, 28, 64)   73792
batch_normalization_5 (BatchNorm)  (None, 28, 28, 64)   256
activation_2 (Activation)          (None, 28, 28, 64)   0
conv2d_7 (Conv2D)                  (None, 28, 28, 1)    577
activation_3 (Activation)          (None, 28, 28, 1)    0

Total params: 856,193
Trainable params: 855,809
Non-trainable params: 384

Figure 8.10 The output summary of the generator model
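The parameter counts in figure 8.10 can be checked by hand. The sketch below uses the standard Keras accounting (assuming 3 × 3 kernels, as in the listing): a Conv2D layer has k·k·c_in·c_out + c_out parameters, a Dense layer has n_in·n_out + n_out, and a BatchNormalization layer has 4 parameters per channel (gamma, beta, and the two non-trainable moving statistics):

```python
def conv2d_params(kernel, c_in, c_out):
    return kernel * kernel * c_in * c_out + c_out  # weights + biases

def dense_params(n_in, n_out):
    return n_in * n_out + n_out

def batchnorm_params(channels):
    return 4 * channels  # gamma, beta, moving mean, moving variance

total = (dense_params(100, 128 * 7 * 7)   # dense_2: 633472
         + conv2d_params(3, 128, 128)     # conv2d_5: 147584
         + batchnorm_params(128)          # batch_normalization_4: 512
         + conv2d_params(3, 128, 64)      # conv2d_6: 73792
         + batchnorm_params(64)           # batch_normalization_5: 256
         + conv2d_params(3, 64, 1))       # conv2d_7: 577
print(total)  # 856193, matching "Total params" in figure 8.10
```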
8.1.4 Training the GAN
Now that we've learned about the discriminator and generator models separately, let's put them together to train an end-to-end generative adversarial network. The discriminator is being trained to become a better classifier, to maximize the probability of assigning the correct label to both training examples (real) and images generated by the generator (fake): for example, the police officer becomes better at differentiating between fake and real currency. The generator, on the other hand, is being trained to become a better forger, to maximize its chances of fooling the discriminator. Both networks are getting better at what they do.
The process of training GAN models involves two processes:
1 Train the discriminator. This is a straightforward supervised training process. The network is given labeled images coming from the generator (fake) and the training data (real), and it learns to classify between real and fake images with a sigmoid prediction output. Nothing new here.
2 Train the generator. This process is a little tricky. The generator model cannot be trained alone like the discriminator. It needs the discriminator model to tell it whether it did a good job of faking images. So, we create a combined network to train the generator, composed of both the discriminator and generator models.
Think of the training processes as two parallel lanes. One lane trains the discriminator alone, and the other lane is the combined model that trains the generator.
The GAN training process is illustrated in figure 8.11.
As you can see in figure 8.11, when training the combined model, we freeze the weights of the discriminator, because this model focuses only on training the generator.

Figure 8.11 The process flow to train GANs. Discriminator training: real data from the training set and fake data from the generator are fed to the discriminator for binary real/fake classification, and the discriminator model is updated. Generator training: an input vector is fed to the generator, the discriminator (with training frozen) classifies the generated image as real or fake, and the generator weights are updated.
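The two parallel lanes in figure 8.11 can be sketched framework-agnostically. This is a structural sketch only; sample_real, sample_noise, and the two train_* callables are hypothetical stand-ins for your framework's batch-sampling and train-on-batch calls:

```python
def gan_training_step(sample_real, sample_noise,
                      train_discriminator, train_generator, batch_size=32):
    """One iteration of the two-lane GAN training schedule."""
    real_label, fake_label = 1.0, 0.0

    # Lane 1: train the discriminator on real images (label 1)
    # and on generated images (label 0).
    real_batch = sample_real(batch_size)
    fake_batch = sample_noise(batch_size)
    d_loss = (train_discriminator(real_batch, real_label)
              + train_discriminator(fake_batch, fake_label))

    # Lane 2: train the generator through the combined model, with the
    # discriminator frozen; the generator is rewarded when the frozen
    # discriminator labels its fakes as real (label 1).
    g_loss = train_generator(sample_noise(batch_size), real_label)

    return d_loss, g_loss
```

The key asymmetry is visible in the labels: lane 1 shows the discriminator the truth, while lane 2 deliberately feeds the combined model "real" labels for fake images so the error signal pushes the generator toward fooling the discriminator.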
We will discuss the intuition behind this idea when we explain the generator training process. For now, just know that we need to build and train two models: one for the discriminator alone and the other for both discriminator and generator models.
Both processes follow the traditional neural network training process explained in chapter 2. It starts with the feedforward process and then makes predictions and calculates and backpropagates the error. When training the discriminator, the error is backpropagated to the discriminator model to update its weights; in the combined model, the error is backpropagated to the generator to update its weights.
During the training iterations, we follow the same neural network training procedure to observe the network's performance and tune its hyperparameters until we see that the generator is achieving satisfying results for our problem. This is when we can stop the training and deploy the generator model. Now, let's see how we compile the discriminator and the combined networks to train the GAN model.

TRAINING THE DISCRIMINATOR
As we said before, this is a straightforward process. First, we build the model from the discriminator_model method that we created earlier in this chapter. Then we compile the model and use the binary_crossentropy loss function and an optimizer of your choice (we use Adam in this example).
Let's see the Keras implementation that builds and compiles the discriminator. Please note that this code snippet is not meant to be compilable on its own; it is here for illustration.
At the end of this chapter, you can find the full code of this project:

discriminator = discriminator_model()
discriminator.compile(loss='binary_crossentropy', optimizer='adam',
                      metrics=['accuracy'])

We can train the model by creating random training batches using Keras' train_on_batch method to run a single gradient update on a single batch of data:

# Sample noise
noise = np.random.normal(0, 1, (batch_size, 100))
# Generate a batch of new images
gen_imgs = generator.predict(noise)

# Train the discriminator (real classified as ones and generated as zeros)
d_loss_real = discriminator.train_on_batch(imgs, valid)
d_loss_fake = discriminator.train_on_batch(gen_imgs, fake)

TRAINING THE GENERATOR (COMBINED MODEL)
Here is the tricky part in training GANs: training the generator. While the discriminator can be trained in isolation from the generator model, the generator needs the discriminator in order to be trained. For this, we build a combined model that contains both the generator and the discriminator, as shown in figure 8.12.
When we want to train the generator, we freeze the weights of the discriminator model because the generator and discriminator have different loss functions pulling in different directions. If we don't freeze the discriminator weights, it will be pulled in the same direction the generator is learning, so it will be more likely to predict generated
images as real, which is not the desired outcome. Freezing the weights of the discriminator model doesn't affect the existing discriminator model that we compiled earlier when we were training the discriminator. Think of it as having two discriminator models—this is not the case, but it is easier to imagine.
Now, let's build the combined model:

# Build the generator
generator = generator_model()

# The generator takes noise as input and generates an image
z = Input(shape=(100,))
image = generator(z)

# Freeze the weights of the discriminator model
discriminator.trainable = False

# The discriminator takes the generated image as input
# and determines its validity
valid = discriminator(image)

# The combined model (stacked generator and discriminator)
# trains the generator to fool the discriminator
combined = Model(z, valid)

Now that we have built the combined model, we can proceed with the training process as normal. We compile the combined model with a binary_crossentropy loss function and an Adam optimizer:

combined.compile(loss='binary_crossentropy', optimizer=optimizer)

# Train the generator (wants the discriminator to mistake images for being real)
g_loss = combined.train_on_batch(noise, valid)

Figure 8.12 Illustration of the combined model that contains both the generator and discriminator models: random noise → generator → fake image → discriminator → output (e.g., 0.3), with feedback through backpropagation.

TRAINING EPOCHS
In the project at the end of the chapter, you will see that the previous code snippet is put inside a loop to perform the training for a certain number of epochs. For each epoch, the two compiled models (discriminator and combined) are trained simultaneously. During the training process, both the generator and discriminator
improve. You can observe the performance of your GAN by printing out the results after each epoch (or a set of epochs) to see how the generator is doing at generating synthetic images. Figure 8.13 shows an example of the evolution of the generator's performance throughout its training process on the MNIST dataset.
In the example, epoch 0 starts with random noise data that doesn't yet represent the features in the training dataset. As the GAN model goes through the training, its generator gets better and better at creating high-quality imitations of the training dataset that can fool the discriminator. Manually observing the generator's performance is a good way to evaluate system performance and to decide on the number of epochs and when to stop training. We'll look more at GAN evaluation techniques in section 8.2.

8.1.5 GAN minimax function
GAN training is more of a zero-sum game than an optimization problem. In zero-sum games, the total utility score is divided among the players. An increase in one player's score results in a decrease in another player's score. In AI, this is called minimax game theory. Minimax is a decision-making algorithm, typically used in turn-based, two-player games. The goal of the algorithm is to find the optimal next move. One player, called the maximizer, works to get the maximum possible score; the other player, called the minimizer, tries to get the lowest score by counter-moving against the maximizer.
GANs play a minimax game where the entire network attempts to optimize the function V(D,G) in the following equation. The goal of the discriminator (D) is to maximize the probability of getting the correct label of the image.
$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

Here the first term involves the discriminator output for real data x, and the second term involves the discriminator output for generated fake data G(z).

Figure 8.13 The generator gets better at mimicking the handwritten digits of the MNIST dataset throughout its training from epoch 0 to epoch 9,500 (samples shown at epochs 0; 1,500; 2,500; 3,500; 5,500; 7,500; and 9,500).

The generator's (G) goal, on the other hand, is to minimize the
chances of getting caught. So, we train D to maximize the probability of assigning the correct label to both training examples and samples from G. We simultaneously train G to minimize log(1 - D(G(z))). In other words, D and G play a two-player minimax game with the value function V(D,G).
Like any other mathematical equation, the preceding one looks terrifying to anyone who isn't well versed in the math behind it, but the idea it represents is simple yet powerful. It's just a mathematical representation of the two competing objectives of the discriminator and the generator models. Let's go through the symbols first (table 8.1) and then explain it.
The discriminator takes its input from two sources:
- Data from the generator, G(z)—This is fake data generated from random noise z. The discriminator output for generated data is denoted as D(G(z)).
- Real input from the real training data (x)—The discriminator output for real data is denoted as D(x); the objective uses its log, log D(x).
To simplify the minimax equation, the best way to look at it is to break it down into two components: the discriminator training function and the generator training (combined

Minimax game theory
In a two-person, zero-sum game, one player can win only if the other player loses. No cooperation is possible. This game theory is widely used in games such as tic-tac-toe, backgammon, mancala, chess, and so on. The maximizer player tries to get the highest score possible, while the minimizer player tries to do the opposite and get the lowest score possible.
In a given game state, if the maximizer has the upper hand, then the score will tend to be a positive value. If the minimizer has the upper hand in that state, then the score will tend to be a negative value.
The values are calculated by heuristics that are unique for every type of game.

Table 8.1 Symbols used in the minimax equation

Symbol     Explanation
G          Generator.
D          Discriminator.
z          Random noise fed to the generator (G).
G(z)       The generator takes the random noise data (z) and tries to reconstruct the real images.
D(G(z))    The discriminator (D) output for data coming from the generator.
log D(x)   The discriminator's log-probability output for real data.
model) function. During the training process, we created two training flows, and each has its own error function:
- One for the discriminator alone, represented by the following term, which the discriminator aims to maximize by making its predictions on real data as close as possible to 1:

$$\mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)]$$

- One for the combined model that trains the generator, represented by the following term, which the generator aims to minimize by driving 1 - D(G(z)) as close as possible to 0 (that is, pushing the discriminator's output on fake data toward 1):

$$\mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

Now that we understand the equation symbols and have a better understanding of how the minimax function works, let's look at the function again. The goal of the minimax objective function V(D, G) is to maximize D(x) on the true data distribution and minimize D(G(z)) on the fake data distribution. To achieve this, we use the log-likelihoods of D(x) and 1 - D(G(z)) in the objective function. Taking the log just makes sure that the closer we are to an incorrect value, the more we are penalized.
Early in the GAN training process, the discriminator will reject fake data from the generator with high confidence, because the fake images are very different from the real training data—the generator hasn't learned yet. As we train the discriminator to maximize the probability of assigning the correct labels to both real examples and fake images from the generator, we simultaneously train the generator to minimize log(1 - D(G(z))), that is, to maximize the discriminator's classification error on the generated fake data. The discriminator wants D(x) close to 1 for real data and D(G(z)) close to 0 for fake data. The generator, on the other hand, wants D(G(z)) close to 1 so that the discriminator is fooled into thinking the generated G(z) is real. We stop the training when the fake data generated by the
We stop the training when the fake data generated by the\ngenerator is recognized as real data.Min Max ( , ) = [log ( )] + [log(1 – ( ( )))]\nGDVD G E Dx E DGzxp zP z z ~ ~( )data\nError from the\ndiscriminator\nmodel trainingError from the combined\nmodel training' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 376}
8.2 Evaluating GAN models
Deep learning neural network models that are used for classification and detection problems are trained with a loss function until convergence. A GAN generator model, on the other hand, is trained using a discriminator that learns to classify images as real or generated. As we learned in the previous section, both the generator and discriminator models are trained together to maintain an equilibrium. As such, no objective loss function is used to train the GAN generator models, and there is no way to objectively assess the progress of the training or the relative or absolute quality of the model from loss alone. This means models must be evaluated based on the quality of the generated synthetic images, typically by manually inspecting them.
A good way to identify evaluation techniques is to review research papers and the techniques the authors used to evaluate their GANs. Tim Salimans et al. (2016) evaluated their GAN performance by having human annotators manually judge the visual quality of the synthesized samples.3 They created a web interface and hired annotators on Amazon Mechanical Turk (MTurk) to distinguish between generated data and real data.
One downside of using human annotators is that the metric varies depending on the setup of the task and the motivation of the annotators. The team also found that results changed drastically when they gave annotators feedback about their mistakes: by learning from such feedback, annotators are better able to point out the flaws in generated images, giving a more pessimistic quality assessment.
Other non-manual approaches were used by Salimans et al. and by other researchers we will discuss in this section. In general, there is no consensus about a correct way to evaluate a given GAN generator model.
This makes it challenging for researchers and practitioners to do the following:
- Select the best GAN generator model during a training run—in other words, decide when to stop training.
- Choose generated images to demonstrate the capability of a GAN generator model.
- Compare and benchmark GAN model architectures.
- Tune the model hyperparameters and configuration and compare results.

Finding quantifiable ways to understand a GAN's progress and output quality is still an active area of research. A suite of qualitative and quantitative techniques has been developed to assess the performance of a GAN model based on the quality and diversity of the generated synthetic images. Two commonly used evaluation metrics for image quality and diversity are the inception score and the Fréchet inception distance (FID). In this section, you will discover techniques for evaluating GAN models based on generated synthetic images.

3 Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen, "Improved Techniques for Training GANs," 2016, http://arxiv.org/abs/1606.03498.
8.2.1 Inception score
The inception score is based on a heuristic that realistic samples should be confidently classifiable when passed through a pretrained network such as Inception trained on ImageNet (hence the name inception score). The idea is really simple. The heuristic relies on two values:
- High predictability of the generated image—We apply a pretrained Inception classifier model to every generated image and get its softmax prediction. If the generated image is good enough, it should give us a high predictability score.
- Diverse generated samples—No single class should dominate the distribution of the generated images.

A large number of generated images are classified using the model. Specifically, the probability of the image belonging to each class is predicted. The probabilities are then summarized in the score to capture both how much each image looks like a known class and how diverse the set of images is across the known classes. If both these traits are satisfied, there should be a large inception score. A higher inception score indicates better-quality generated images.

8.2.2 Fréchet inception distance (FID)
The FID score was proposed and used by Martin Heusel et al. in 2017.4 It was proposed as an improvement over the existing inception score.
Like the inception score, the FID score uses the Inception model to capture specific features of an input image. These activations are calculated for a collection of real and generated images. The activations for the real and the generated images are each summarized as a multivariate Gaussian, and the distance between these two distributions is then calculated using the Fréchet distance, also called the Wasserstein-2 distance.
An important note is that the FID needs a decent sample size to give good results (the suggested size is 50,000 samples).
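Both metrics reduce to short formulas. The sketch below is my own toy implementation, not a replacement for the standard Inception-based pipelines: the inception score is IS = exp(E_x[KL(p(y|x) || p(y))]) over per-image class probabilities, and for intuition the Fréchet distance is shown for one-dimensional Gaussians, d² = (μ₁ - μ₂)² + σ₁² + σ₂² - 2σ₁σ₂ (the real FID uses the multivariate version over Inception activations):

```python
import math

def inception_score(probs):
    """probs: list of per-image class-probability vectors (softmax outputs).
    IS = exp(mean KL divergence between each p(y|x) and the marginal p(y))."""
    n = len(probs)
    n_classes = len(probs[0])
    marginal = [sum(p[c] for p in probs) / n for c in range(n_classes)]
    mean_kl = sum(
        sum(p[c] * math.log(p[c] / marginal[c])
            for c in range(n_classes) if p[c] > 0)
        for p in probs
    ) / n
    return math.exp(mean_kl)

def frechet_distance_1d(mu1, var1, mu2, var2):
    """Squared Frechet (Wasserstein-2) distance between two 1-D Gaussians."""
    return (mu1 - mu2) ** 2 + var1 + var2 - 2 * math.sqrt(var1 * var2)

# Confident, diverse predictions give a high score; identical uniform
# predictions give the minimum score of 1.0.
diverse = [[0.9, 0.05, 0.05], [0.05, 0.9, 0.05], [0.05, 0.05, 0.9]]
print(inception_score(diverse))                 # ~2.02
print(inception_score([[1/3, 1/3, 1/3]] * 3))   # 1.0
print(frechet_distance_1d(0.0, 1.0, 0.0, 1.0))  # 0.0 (identical distributions)
```

Note the directions match the text: a higher inception score is better, while a lower Fréchet distance is better.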
If you use too few samples, you will end up overestimating your actual FID, and the estimates will have a large variance. A lower FID score indicates more realistic images that match the statistical properties of real images.

8.2.3 Which evaluation scheme to use
Both measures (inception score and FID) are easy to implement and calculate on batches of generated images. As such, the practice of systematically generating images and saving models during training can and should continue to be used to allow post hoc model selection. Diving deep into the inception score and FID is out of the scope of this book. As mentioned earlier, this is an active area of research, and there is no consensus in the industry as of the time of writing about the one best

4 Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter, "GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium," 2017, http://arxiv.org/abs/1706.08500.
approach to evaluate GAN performance. Different scores assess various aspects of the image-generation process, and it is unlikely that a single score can cover all aspects. The goal of this section is to expose you to some techniques that have been developed in recent years to automate the GAN evaluation process, but manual evaluation is still widely used.
When you are getting started, it is a good idea to begin with manual inspection of generated images in order to evaluate and select generator models. Developing GAN models is complex enough for both beginners and experts; manual inspection can get you a long way while refining your model implementation and testing model configurations.
Other researchers are taking different approaches by using domain-specific evaluation metrics. For example, Konstantin Shmelkov and his team (2018) used two measures based on image classification, GAN-train and GAN-test, which approximate the recall (diversity) and precision (image quality) of GANs, respectively.5

8.3 Popular GAN applications
Generative modeling has come a long way in the last five years. The field has developed to the point where it is expected that the next generation of generative models will be more comfortable creating art than humans. GANs now have the power to solve the problems of industries like healthcare, automotive, fine arts, and many others. In this section, we will learn about some of the use cases of adversarial networks and which GAN architecture is used for each application. The goal of this section is not to implement the variations of the GAN network, but to provide some exposure to potential applications of GAN models and resources for further reading.

8.3.1 Text-to-photo synthesis
Synthesis of high-quality images from text descriptions is a challenging problem in CV.
Samples generated by existing text-to-image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts.
The GAN network that was built for this application is the stacked generative adversarial network (StackGAN).6 Zhang et al. were able to generate 256 × 256 photo-realistic images conditioned on text descriptions.
StackGANs work in two stages (figure 8.14):
- Stage-I: StackGAN sketches the primitive shape and colors of the object based on the given text description, yielding low-resolution images.

5 Konstantin Shmelkov, Cordelia Schmid, and Karteek Alahari, "How Good Is My GAN?" 2018, http://arxiv.org/abs/1807.09499.
6 Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris Metaxas, "StackGAN: Text to Photo-Realistic Image Synthesis with Stacked Generative Adversarial Networks," 2016, http://arxiv.org/abs/1612.03242.
- Stage-II: StackGAN takes the output of Stage-I and a text description as input and generates high-resolution images with photorealistic details. It is able to rectify defects in the images created in Stage-I and add compelling details with the refinement process.

8.3.2 Image-to-image translation (Pix2Pix GAN)
Image-to-image translation is defined as translating one representation of a scene into another, given sufficient training data. It is inspired by the language translation analogy: just as an idea can be expressed in many different languages, a scene may be rendered as a grayscale image, an RGB image, semantic label maps, edge sketches, and so on. In figure 8.15, image-to-image translation tasks are demonstrated on a range of applications such as converting street-scene segmentation labels to real images, grayscale to color images, sketches of products to product photographs, and day photographs to night ones.
Pix2Pix is a member of the GAN family designed by Phillip Isola et al. in 2016 for general-purpose image-to-image translation.7 The Pix2Pix network architecture is similar to the GAN concept: it consists of a generator model for outputting new synthetic images that look realistic, and a discriminator model that classifies images as

7 Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros, "Image-to-Image Translation with Conditional Adversarial Networks," 2016, http://arxiv.org/abs/1611.07004.

Figure 8.14 (a) Stage-I: Given text descriptions, StackGAN sketches rough shapes and basic colors of objects, yielding low-resolution images. (b) Stage-II takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photorealistic details.
(Source: Zhang et al., 2016.) The figure's panels pair example text descriptions, such as "This bird is white with some black on its head and wings, and has a long orange beak," with (a) StackGAN Stage-I 64 × 64 outputs and (b) StackGAN Stage-II 256 × 256 outputs.
real (from the dataset) or fake (generated). The training process is also similar to that used for GANs: the discriminator model is updated directly, whereas the generator model is updated via the discriminator model. As such, the two models are trained simultaneously in an adversarial process in which the generator seeks to better fool the discriminator and the discriminator seeks to better identify the counterfeit images.

The novel idea of Pix2Pix networks is that they learn a loss function adapted to the task and data at hand, which makes them applicable in a wide variety of settings. They are a type of conditional GAN (cGAN) where the generation of the output image is conditional on an input source image. The discriminator is provided with both a source image and the target image and must determine whether the target is a plausible transformation of the source image.

The results of the Pix2Pix network are really promising for many image-to-image translation tasks. Visit https://affinelayer.com/pixsrv to play more with the Pix2Pix network; this site has an interactive demo created by Isola and team in which you can convert sketch edges of cats or products to photos, and façades to real images.

Figure 8.15 Examples of Pix2Pix applications taken from the original paper. The panels show input/output pairs for black-and-white to color, edges to photos, and day to night translations.

8.3.3 Image super-resolution GAN (SRGAN)

A certain type of GAN model can be used to convert low-resolution images into high-resolution images. This type is called a super-resolution generative adversarial network
(SRGAN) and was introduced by Christian Ledig et al. in 2016.⁸ Figure 8.16 shows how SRGAN was able to create a very high-resolution image.

Figure 8.16 SRGAN converting a low-resolution image to a high-resolution image; the panels compare the original image with the SRGAN output. (Source: Ledig et al., 2016.)

8.3.4 Ready to get your hands dirty?

GAN models have huge potential for creating and imagining new realities that have never existed before. The applications mentioned in this chapter are just a few examples to give you an idea of what GANs can do today. Such applications come out every few weeks and are worth trying. If you are interested in getting your hands dirty with more GAN applications, visit the amazing Keras-GAN repository at https://github.com/eriklindernoren/Keras-GAN, maintained by Erik Linder-Norén. It includes many GAN models created using Keras and is an excellent resource for Keras examples. Much of the code in this chapter was inspired by and adapted from this repository.

8.4 Project: Building your own GAN

In this project, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a deep convolutional GAN (DCGAN) for short. The DCGAN architecture was first explored by Alec Radford et al. (2016), as discussed in section 8.1.1, and has seen impressive results in generating new images. You can follow along with the implementation in this chapter or run the code in the project notebook available with this book's downloadable code.

8. Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, et al., "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network," 2016, http://arxiv.org/abs/1609.04802.
In this project, you'll be training a DCGAN on the Fashion-MNIST dataset (https://github.com/zalandoresearch/fashion-mnist). Fashion-MNIST consists of 60,000 grayscale images for training and a test set of 10,000 images (figure 8.17). Each 28 × 28 grayscale image is associated with a label from 10 classes. Fashion-MNIST is intended to serve as a direct replacement for the original MNIST dataset for benchmarking machine learning algorithms. I chose grayscale images for this project because it requires less computational power to train convolutional networks on one-channel grayscale images compared to three-channel color images, which makes it easier for you to train on a personal computer without a GPU.

The dataset is broken into 10 fashion categories. The class labels are as follows:

Label  Description
0      T-shirt/top
1      Trouser
2      Pullover
3      Dress
4      Coat
5      Sandal
6      Shirt
7      Sneaker
8      Bag
9      Ankle boot

Figure 8.17 Fashion-MNIST dataset examples
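For inspecting generated samples later, the table above can be kept as a small mapping. This dict is just a convenience sketch; the names FASHION_MNIST_LABELS and describe are my own, not from the book's listings:

```python
# Fashion-MNIST class labels, exactly as in the table above.
FASHION_MNIST_LABELS = {
    0: "T-shirt/top", 1: "Trouser", 2: "Pullover", 3: "Dress", 4: "Coat",
    5: "Sandal", 6: "Shirt", 7: "Sneaker", 8: "Bag", 9: "Ankle boot",
}

def describe(label):
    """Map an integer class label to its human-readable name."""
    return FASHION_MNIST_LABELS[label]
```

For example, `describe(9)` returns "Ankle boot".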
STEP 1: IMPORT LIBRARIES

As always, the first thing to do is to import all the libraries we use in this project:

from __future__ import print_function, division

# Imports the fashion_mnist dataset from Keras
from keras.datasets import fashion_mnist

# Imports Keras layers and models
from keras.layers import Input, Dense, Reshape, Flatten, Dropout
from keras.layers import BatchNormalization, Activation, ZeroPadding2D
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.convolutional import UpSampling2D, Conv2D
from keras.models import Sequential, Model
from keras.optimizers import Adam

# Imports numpy and matplotlib
import numpy as np
import matplotlib.pyplot as plt

STEP 2: DOWNLOAD AND VISUALIZE THE DATASET

Keras makes the Fashion-MNIST dataset available for us to download with just one command: fashion_mnist.load_data(). Here, we download the dataset and rescale the training set to the range -1 to 1 to allow the model to converge faster (see the "Data normalization" section in chapter 4 for more details on image scaling):

# Loads the dataset
(training_data, _), (_, _) = fashion_mnist.load_data()

# Rescales the training data from [0, 255] to the range -1 to 1
X_train = training_data / 127.5 - 1.
X_train = np.expand_dims(X_train, axis=3)

Just for the fun of it, let's visualize the image matrix (figure 8.18):

def visualize_input(img, ax):
    ax.imshow(img, cmap='gray')
    width, height = img.shape
    thresh = img.max() / 2.5
    for x in range(width):
        for y in range(height):
            ax.annotate(str(round(img[x][y], 2)), xy=(y, x),
                        horizontalalignment='center',
                        verticalalignment='center',
                        color='white' if img[x][y] < thresh else 'black')

fig = plt.figure(figsize=(12, 12))
ax = fig.add_subplot(111)
visualize_input(training_data[3343], ax)
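A quick sanity check (mine, not part of the listing) confirms that dividing by 127.5 and subtracting 1 maps the [0, 255] pixel range onto [-1, 1]:

```python
import numpy as np

# Fashion-MNIST pixels are 8-bit values in [0, 255].
pixels = np.array([0.0, 63.75, 127.5, 255.0])

# The same transform as in the listing above: X / 127.5 - 1.
rescaled = pixels / 127.5 - 1.0   # -> [-1.0, -0.5, 0.0, 1.0]

assert rescaled.min() >= -1.0 and rescaled.max() <= 1.0
```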
Figure 8.18 A visualized example of the Fashion-MNIST dataset (a 28 × 28 grid of pixel intensities; the numeric pixel values are omitted here)

STEP 3: BUILD THE GENERATOR

Now, let's build the generator model. The input will be our noise vector (z), as explained in section 8.1.5. The generator architecture is shown in figure 8.19.

The first layer is a fully connected layer that is then reshaped into a deep, narrow layer, something like 7 × 7 × 128 (in the original DCGAN paper, the team reshaped the input to 4 × 4 × 1024). Then we use the upsampling layer to double the feature map dimensions from 7 × 7 to 14 × 14, and then again to 28 × 28. In this network, we
use three convolutional layers. We also use batch normalization and a ReLU activation. For each of these layers, the general scheme is convolution ⇒ batch normalization ⇒ ReLU. We keep stacking layers like this until we get to the final convolution layer with output shape 28 × 28 × 1:

def build_generator():
    # Instantiates a sequential model and names it generator
    generator = Sequential()

    # Adds a dense layer that has a number of neurons = 128 * 7 * 7
    generator.add(Dense(128 * 7 * 7, activation="relu", input_dim=100))

    # Reshapes the image dimensions to 7 x 7 x 128
    generator.add(Reshape((7, 7, 128)))

    # Upsampling layer to double the image dimensions to 14 x 14
    generator.add(UpSampling2D())

    # Convolutional + batch normalization layers
    generator.add(Conv2D(128, kernel_size=3, padding="same",
                         activation="relu"))
    generator.add(BatchNormalization(momentum=0.8))

    # Upsamples the image dimensions to 28 x 28. We don't add another
    # upsampling layer after this block because 28 x 28 already matches
    # the image size in the dataset. You can adjust this for your own
    # problem.
    generator.add(UpSampling2D())
    generator.add(Conv2D(64, kernel_size=3, padding="same",
                         activation="relu"))
    generator.add(BatchNormalization(momentum=0.8))

    # Convolutional layer with filters = 1; a tanh output matches the
    # [-1, 1] range of the rescaled training data
    generator.add(Conv2D(1, kernel_size=3, padding="same",
                         activation="tanh"))

    # Prints the model summary
    generator.summary()

Figure 8.19 Architecture of the generator model (z: 100 → reshape to 7 × 7 × 128 → upsampling to 14 × 14 × 128 → upsampling → 28 × 28 × 64 → 28 × 28 × 1)
    # Generates the input noise vector of length = 100. We chose 100
    # here to create a simple network.
    noise = Input(shape=(100,))

    # Runs the generator model to create the fake image
    fake_image = generator(noise)

    # Returns a model that takes the noise vector as an input and
    # outputs the fake image
    return Model(inputs=noise, outputs=fake_image)

STEP 4: BUILD THE DISCRIMINATOR

The discriminator is just a convolutional classifier like the ones we have built before (figure 8.20). The inputs to the discriminator are 28 × 28 × 1 images. We want a few convolutional layers and then a fully connected layer with a sigmoid activation for the output. For the depths of the convolutional layers, I suggest starting with 32 or 64 filters in the first layer and doubling the depth as you add layers. In this implementation, we start with 32 filters, then 64, then 128, and then 256. For downsampling, we do not use pooling layers. Instead, we use only strided convolutional layers for downsampling, similar to Radford et al.'s implementation. We also use batch normalization and dropout to optimize training, as we learned in chapter 4. For the convolutional layers, the general scheme is convolution ⇒ batch normalization ⇒ leaky ReLU. Now, let's build the build_discriminator function:

def build_discriminator():
    # Instantiates a sequential model and names it discriminator
    discriminator = Sequential()

    # Adds a convolutional layer to the discriminator model
    discriminator.add(Conv2D(32, kernel_size=3, strides=2,
                             input_shape=(28, 28, 1), padding="same"))

    # Adds a leaky ReLU activation function
    discriminator.add(LeakyReLU(alpha=0.2))

    # Adds a dropout layer with a 25% dropout probability
    discriminator.add(Dropout(0.25))

Figure 8.20 Architecture of the discriminator model (28 × 28 × 1 → 14 × 14 × 32 → 8 × 8 × 64 → 4 × 4 × 128 → 4 × 4 × 256 → FC 4096 → 1)
    # Adds a second convolutional layer
    discriminator.add(Conv2D(64, kernel_size=3, strides=2,
                             padding="same"))

    # Adds a zero-padding layer to change the dimensions from 7 x 7 to 8 x 8
    discriminator.add(ZeroPadding2D(padding=((0, 1), (0, 1))))

    # Adds a batch normalization layer for faster learning and
    # higher accuracy
    discriminator.add(BatchNormalization(momentum=0.8))
    discriminator.add(LeakyReLU(alpha=0.2))
    discriminator.add(Dropout(0.25))

    # Adds a third convolutional layer with batch normalization,
    # leaky ReLU, and a dropout
    discriminator.add(Conv2D(128, kernel_size=3, strides=2, padding="same"))
    discriminator.add(BatchNormalization(momentum=0.8))
    discriminator.add(LeakyReLU(alpha=0.2))
    discriminator.add(Dropout(0.25))

    # Adds the fourth convolutional layer with batch normalization,
    # leaky ReLU, and a dropout
    discriminator.add(Conv2D(256, kernel_size=3, strides=1, padding="same"))
    discriminator.add(BatchNormalization(momentum=0.8))
    discriminator.add(LeakyReLU(alpha=0.2))
    discriminator.add(Dropout(0.25))

    # Flattens the network and adds the output dense layer with a
    # sigmoid activation function
    discriminator.add(Flatten())
    discriminator.add(Dense(1, activation='sigmoid'))

    # Sets the input image shape and runs the discriminator model to
    # get the output probability
    img = Input(shape=(28, 28, 1))
    probability = discriminator(img)

    # Returns a model that takes the image as input and produces the
    # probability output
    return Model(inputs=img, outputs=probability)

STEP 5: BUILD THE COMBINED MODEL

As explained in section 8.1.3, to train the generator, we need to build a combined network that contains both the generator and the discriminator (figure 8.21). The combined model takes the noise signal as input (z) and outputs the discriminator's prediction as fake or real.

Figure 8.21 Architecture of the combined model: the generator (z: 100 → 7 × 7 × 128 → 14 × 14 × 128 → 28 × 28 × 64 → 28 × 28 × 1) feeds the discriminator (28 × 28 × 1 → 14 × 14 × 32 → 8 × 8 × 64 → 4 × 4 × 128 → 4 × 4 × 256 → FC 4096 → 1)
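The discriminator's feature-map sizes quoted above (28 → 14 → 8 → 4 → 4) can be verified with a little 'same'-padding arithmetic. This helper is my own sketch of the size rule Keras applies, not part of the book's listings:

```python
import math

def same_conv_out(size, stride):
    """Spatial output size of a Conv2D with padding='same'."""
    return math.ceil(size / stride)

size = 28
size = same_conv_out(size, 2)  # Conv2D(32, strides=2): 28 -> 14
size = same_conv_out(size, 2)  # Conv2D(64, strides=2): 14 -> 7
size += 1                      # ZeroPadding2D(((0, 1), (0, 1))): 7 -> 8
size = same_conv_out(size, 2)  # Conv2D(128, strides=2): 8 -> 4
size = same_conv_out(size, 1)  # Conv2D(256, strides=1): 4 -> 4

assert size == 4  # matches the 4 x 4 x 256 block before Flatten
```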
Remember that we want to disable discriminator training for the combined model, as explained in detail in section 8.1.3. When training the generator, we don't want the discriminator to update its weights as well, but we still want to include the discriminator model in the generator training. So, we create a combined network that includes both models but freeze the weights of the discriminator model in the combined network:

# Defines the optimizer
optimizer = Adam(learning_rate=0.0002, beta_1=0.5)

# Builds and compiles the discriminator
discriminator = build_discriminator()
discriminator.compile(loss='binary_crossentropy', optimizer=optimizer,
                      metrics=['accuracy'])

# Freezes the discriminator weights because we don't want to train it
# during generator training
discriminator.trainable = False

# Builds the generator. It takes noise as input (latent_dim = 100)
# and generates images.
generator = build_generator()
z = Input(shape=(100,))
img = generator(z)

# The discriminator takes generated images as input and determines
# their validity
valid = discriminator(img)

# The combined model (stacked generator and discriminator) trains the
# generator to fool the discriminator
combined = Model(inputs=z, outputs=valid)
combined.compile(loss='binary_crossentropy', optimizer=optimizer)

STEP 6: BUILD THE TRAINING FUNCTION

When training the GAN model, we train two networks: the discriminator and the combined network that we created in the previous section.
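The train function assembled next computes the discriminator's loss once on a real batch (labels of 1) and once on a fake batch (labels of 0) and averages the two. A minimal numpy sketch of that bookkeeping, where bce is my own stand-in for Keras's binary cross-entropy and the probabilities are invented for illustration:

```python
import numpy as np

def bce(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy averaged over the batch (my own helper)."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return float(-np.mean(y_true * np.log(y_pred)
                          + (1 - y_true) * np.log(1 - y_pred)))

batch_size = 4
valid = np.ones((batch_size, 1))   # adversarial ground truth for real images
fake = np.zeros((batch_size, 1))   # adversarial ground truth for fakes

# Stand-in discriminator outputs for one real batch and one fake batch.
p_real = np.full((batch_size, 1), 0.8)   # fairly confident "real"
p_fake = np.full((batch_size, 1), 0.3)   # leaning "fake", but less sure

d_loss_real = bce(valid, p_real)
d_loss_fake = bce(fake, p_fake)
d_loss = 0.5 * (d_loss_real + d_loss_fake)  # same averaging as the listing
```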
Let's build the train function, which takes the following arguments:

- The number of epochs
- The batch size
- save_interval, to state how often we want to save the results

def train(epochs, batch_size=128, save_interval=50):

    # Adversarial ground truths: 1s for real images, 0s for fakes
    valid = np.ones((batch_size, 1))
    fake = np.zeros((batch_size, 1))

    for epoch in range(epochs):

        ## Train the discriminator network

        # Selects a random batch of real images
        idx = np.random.randint(0, X_train.shape[0], batch_size)
        imgs = X_train[idx]
        # Samples noise and generates a batch of new images
        noise = np.random.normal(0, 1, (batch_size, 100))
        gen_imgs = generator.predict(noise)

        # Trains the discriminator (real images classified as 1s and
        # generated images as 0s)
        d_loss_real = discriminator.train_on_batch(imgs, valid)
        d_loss_fake = discriminator.train_on_batch(gen_imgs, fake)
        d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

        ## Train the combined network (generator): we want the
        ## discriminator to mistake the generated images for real ones
        g_loss = combined.train_on_batch(noise, valid)

        # Prints progress
        print("%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" %
              (epoch, d_loss[0], 100 * d_loss[1], g_loss))

        # Saves generated image samples at every save_interval
        if epoch % save_interval == 0:
            plot_generated_images(epoch, generator)

Before you run the train() function, you need to define the following plot_generated_images() function:

def plot_generated_images(epoch, generator, examples=100, dim=(10, 10),
                          figsize=(10, 10)):
    # The noise vector length (100) must match the generator's input
    noise = np.random.normal(0, 1, size=[examples, 100])
    generated_images = generator.predict(noise)
    generated_images = generated_images.reshape(examples, 28, 28)
    plt.figure(figsize=figsize)
    for i in range(generated_images.shape[0]):
        plt.subplot(dim[0], dim[1], i + 1)
        plt.imshow(generated_images[i], interpolation='nearest',
                   cmap='gray_r')
        plt.axis('off')
    plt.tight_layout()
    plt.savefig('gan_generated_image_epoch_%d.png' % epoch)

STEP 7: TRAIN AND OBSERVE RESULTS

Now that the code implementation is complete, we are ready to start the DCGAN training. To train the model, run the following code snippet:

train(epochs=1000, batch_size=32, save_interval=50)

This runs the training for 1,000 epochs and saves images every 50 epochs. When you run the train() function, the training progress prints as shown in figure 8.22.

I ran this training myself for 10,000 epochs. Figure 8.23 shows my results after 0, 50, 1,000, and 10,000 epochs.

As you can see in figure 8.23, at epoch 0, the images are just random noise: no patterns or meaningful data. At epoch 50, patterns have started to form. One very apparent pattern is the bright pixels beginning to form at the center of the image, surrounded by darker pixels. This happens because in the training data, all of the shapes are located at the center of the image. Later in the training process, at
Figure 8.22 Training progress for the first 16 epochs:

0 [D loss: 0.963556, acc.: 42.19%] [G loss: 0.726341]
1 [D loss: 0.707453, acc.: 65.62%] [G loss: 1.239887]
2 [D loss: 0.478705, acc.: 76.56%] [G loss: 1.666347]
3 [D loss: 0.721997, acc.: 60.94%] [G loss: 2.243804]
4 [D loss: 0.937356, acc.: 45.31%] [G loss: 1.459240]
5 [D loss: 0.881121, acc.: 50.00%] [G loss: 1.417385]
6 [D loss: 0.558153, acc.: 73.44%] [G loss: 1.393961]
7 [D loss: 0.404117, acc.: 78.12%] [G loss: 1.141378]
8 [D loss: 0.452483, acc.: 82.81%] [G loss: 0.802813]
9 [D loss: 0.591792, acc.: 76.56%] [G loss: 0.690274]
10 [D loss: 0.753802, acc.: 67.19%] [G loss: 0.934047]
11 [D loss: 0.957626, acc.: 50.00%] [G loss: 1.140045]
12 [D loss: 0.919308, acc.: 51.56%] [G loss: 1.311618]
13 [D loss: 0.776363, acc.: 56.25%] [G loss: 1.041264]
14 [D loss: 0.763993, acc.: 56.25%] [G loss: 1.090716]
15 [D loss: 0.754735, acc.: 56.25%] [G loss: 1.530865]
16 [D loss: 0.739731, acc.: 68.75%] [G loss: 1.887644]

Figure 8.23 Output of the GAN generator after 0, 50, 1,000, and 10,000 epochs (one panel per epoch)

epoch 1,000, you can see clear shapes and can probably guess the type of training data fed to the GAN model. Fast-forward to epoch 10,000, and you can see that the generator has become very good at re-creating new images not present in the training
dataset. For example, pick any of the objects created at this epoch: let's say the top-left image (a dress). This is a totally new dress design that is not present in the training dataset. The GAN model created a completely new dress design after learning the dress patterns from the training set. You can run the training longer or make the generator network even deeper to get more refined results.

IN CLOSING

For this project, I used the Fashion-MNIST dataset because the images are very small and grayscale (one channel), which makes it computationally inexpensive for you to train on your local computer with no GPU. Fashion-MNIST is also very clean data: all of the images are centered and have little noise, so they don't require much preprocessing before you kick off your GAN training. This makes it a good toy dataset to jumpstart your first GAN project.

If you are excited to get your hands dirty with more advanced datasets, you can try CIFAR as your next step (https://www.cs.toronto.edu/~kriz/cifar.html) or Google's Quick, Draw! dataset (https://quickdraw.withgoogle.com), which is considered the world's largest doodle dataset at the time of writing. Another, more serious, dataset is Stanford's Cars Dataset (https://ai.stanford.edu/~jkrause/cars/car_dataset.html), which contains more than 16,000 images of 196 classes of cars.
You can try to train your GAN model to come up with a completely new design for your dream car!

Summary

- GANs learn patterns from the training dataset and create new images that have a distribution similar to that of the training set.
- The GAN architecture consists of two deep neural networks that compete with each other.
- The generator tries to convert random noise into observations that look as if they have been sampled from the original dataset.
- The discriminator tries to predict whether an observation comes from the original dataset or is one of the generator's forgeries.
- The discriminator's model is a typical classification neural network that aims to classify images generated by the generator as real or fake.
- The generator's architecture looks like an inverted CNN that starts with a narrow input and is upsampled a few times until it reaches the desired size.
- The upsampling layer scales the image dimensions by repeating each row and column of its input pixels.
- To train the GAN, we train the network in batches through two parallel networks: the discriminator and a combined network in which we freeze the weights of the discriminator and update only the generator's weights.
- To evaluate the GAN, we mostly rely on our observation of the quality of images created by the generator. Other evaluation metrics are the inception score and the Fréchet inception distance (FID).
- In addition to generating new images, GANs can be used in applications such as text-to-photo synthesis, image-to-image translation, image super-resolution, and many other applications.
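The upsampling bullet above can be checked directly with numpy: repeating each row and then each column of a 2 × 2 input yields the 4 × 4 nearest-neighbor enlargement that the upsampling layer produces. This is a sketch of the idea only, without Keras:

```python
import numpy as np

x = np.array([[1, 2],
              [3, 4]])

# Repeat every row, then every column: a 2x nearest-neighbor upsampling.
up = np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

expected = np.array([[1, 1, 2, 2],
                     [1, 1, 2, 2],
                     [3, 3, 4, 4],
                     [3, 3, 4, 4]])
assert (up == expected).all()
```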
CHAPTER 9  DeepDream and neural style transfer

This chapter covers:
- Visualizing CNN feature maps
- Understanding the DeepDream algorithm and implementing your own dream
- Using the neural style transfer algorithm to create artistic images

In fine art, especially painting, humans have mastered the skill of creating unique visual experiences by composing a complex interplay between the content and style of an image. So far, the algorithmic basis of this process is unknown, and there exists no artificial system with similar capabilities. Nowadays, deep neural networks have demonstrated great promise in many areas of visual perception, such as object classification and detection. Why not try using deep neural networks to create art? In this chapter, we introduce an artificial system based on a deep neural network that creates artistic images of high perceptual quality. The system uses neural representations to separate and recombine the content and style of arbitrary images, providing a neural algorithm for the creation of artistic images.

In this chapter, we explore two new techniques for creating artistic images using neural networks: DeepDream and neural style transfer. First, we examine how
convolutional neural networks see the world. We've learned how CNNs are used to extract features in object classification and detection problems; here, we learn how to visualize the extracted feature maps. One reason is that we need this visualization technique in order to understand the DeepDream algorithm. Additionally, it will help us gain a better understanding of what our network learned during training, which we can use to improve the network's performance when solving classification and detection problems.

Next, we discuss the DeepDream algorithm. The key idea of this technique is to print the features we visualize in a certain layer onto our input image, to create a dream-like, hallucinogenic image. Finally, we explore the neural style transfer technique, which takes two images as inputs, a style image and a content image, and creates a new combined image that contains the layout from the content image and the texture, colors, and patterns from the style image.

Why is this discussion important? Because these techniques help us understand and visualize how neural networks are able to carry out difficult classification and detection tasks, and they let us check what the network has learned during training. Being able to see which features the network considers important for distinguishing objects will help you understand what is missing from your training set and thus improve the network's performance.

These techniques also make us wonder whether neural networks could become tools for artists, give us a new way to combine visual concepts, or perhaps even shed a little light on the roots of the creative process in general.
Moreover, these algorithms offer a path toward an algorithmic understanding of how humans create and perceive artistic imagery.

9.1 How convolutional neural networks see the world

We have talked a lot in this book about all the amazing things deep neural networks can do. But despite all the exciting news about deep learning, the exact way neural networks see and interpret the world remains a black box. Yes, we have tried to explain how the training process works, and we explained intuitively and mathematically the backpropagation process that the network applies to update weights through many iterations to optimize the loss function. This all sounds good and makes sense on the scientific side of things. But how do CNNs see the world? How do they see the extracted features between all the layers?

A better understanding of exactly how they recognize specific patterns or objects, and why they work so well, might allow us to improve their performance even further. Additionally, on the business side, this would help solve the "AI explainability" problem. In many cases, business leaders feel unable to make decisions based on model predictions because nobody really understands what is happening inside the black box. This is what we do in this section: we open the black box and visualize what the network sees through its layers, to help make neural network decisions interpretable by humans.
page_content='376 CHAPTER 9DeepDream and neural style transfer\n In computer vision problems, we can visualize the feature maps inside the convolu-\ntional network to understand how they see the world and what features they think are\ndistinctive in an object for differentiating between classes. The idea of visualizing con-\nvolutional layers was proposed by Erhan et al. in 2009.1 In this section, we will explain\nthis concept and implement it in Keras. \n9.1.1 Revisiting how neural networks work\nBefore we jump into the explanation of how we can visualize the activation maps (or\nfeature maps) in a CNN, let’s revisit how neural networks work. We train a deep neu-\nral network by showing it millions of training examples. The network then gradually\nupdates its parameters until it gives the classifications we want. The network typically\nconsists of 10–30 stacked layers of artificial neurons. Each image is fed into the input\nlayer, which then talks to the next layer, until eventually the “output” layer is reached.\nThe network’s prediction is then produced by its final output layer.\n One of the challenges of neural networks is understanding what exactly goes on at\neach layer. We know that after training, each layer progressively extracts image fea-\ntures at higher and higher levels, until the final layer essentially makes a decision\nabout what the image contains. For example, the first layer may look for edges or cor-\nners, intermediate layers interpret the basic features to look for overall shapes or com-\nponents, and the final few layers assemble those into complete interpretations. These\nneurons activate in response to very complex images such as a car or a bike.\n To understand what the network has learned through its training, we want to open\nthis black box and visualize its feature maps. 
One way to visualize the extracted features is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation. Say you want to know what sort of image would result in the output "Bird." Start with an image full of random noise, and then gradually tweak the image toward what the neural net considers an important feature of a bird (figure 9.1).

¹ Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent, "Visualizing Higher-Layer Features of a Deep Network," University of Montreal 1341 (3): 1, 2009, http://mng.bz/yyMq.

Figure 9.1  Start with an image consisting of random noise (input), and tweak it until we visualize what the network considers important features of a bird (output: visualized filter).
We will dive deeper into the bird example and see how to visualize the network filters. The takeaway from this introduction is that neural networks are smart enough to learn which features are important to pass along through their layers to be classified by the fully connected layers; non-important features are discarded along the way. To put it simply, neural networks learn the features of the objects in the training dataset. If we are able to visualize these feature maps at the deeper layers of the network, we can find out where the neural network is paying attention and see the exact features that it uses to make its predictions.

NOTE  This process is described best in François Chollet's book, Deep Learning with Python (Manning, 2017; www.manning.com/books/deep-learning-with-python): "You can think of a deep network as a multistage information-distillation operation, where information goes through successive filters and comes out increasingly purified."

9.1.2 Visualizing CNN features
An easy way to visualize the features learned by convolutional networks is to display the visual pattern that each filter is meant to respond to. This can be done with gradient ascent in input space: by applying gradient ascent to the value of the input image of a ConvNet, we can maximize the response of a specific filter, starting from a blank input image. The resulting input image will be one that the chosen filter is maximally responsive to.

Gradient ascent vs. gradient descent
As a reminder, the general definition of a gradient is that it is the function that defines the slope, or rate of change, of the line tangent to a curve at any given point. In simpler words, the gradient is the slope of the curve at that point.
Here are some example gradients at certain points on a curve; the accompanying figure shows the slope of the tangent line at six different points (a through f) along the curve.
(continued)
Whether we want to descend or ascend the curve depends on our goal. We learned in chapter 2 that gradient descent (GD) is the algorithm that descends the error function to find a local minimum (for example, to minimize the loss function) by taking steps toward the negative of the gradient.

To visualize feature maps, we want to maximize these features to make them show up in the output image. To maximize the loss function, we reverse the GD process by using a gradient ascent algorithm: it takes steps proportional to the positive of the gradient to approach a local maximum of that function.

Now comes the fun part of this section. In this exercise, we will see the visualized feature maps of a few examples at the beginning, middle, and end of a VGG16 network. The implementation is straightforward, and we will get to it soon. Before we go to the code implementation, let's take a look at what these visualized filters look like.

From the VGG16 architecture we saw earlier, let's visualize the output feature maps of an early, a middle, and a deep layer: block1_conv1, block3_conv2, and block5_conv3. Figures 9.2, 9.3, and 9.4 show how the features evolve throughout the network layers.

As you can see in figure 9.2, the early layers basically just encode low-level, generic features like direction and color. These direction and color filters then get combined into basic grid and spot textures in later layers. These textures are gradually combined into increasingly complex patterns (figure 9.3): the network starts to see some patterns that create basic shapes. These shapes are not very identifiable yet, but they are much clearer than the earlier ones.

Now comes the most exciting part. In figure 9.4, you can see that the network was able to find patterns within patterns. These features contain identifiable shapes. While the network relies on more than one feature map to make its prediction, we can look at these maps and make a close guess about the content of these images. In the left image, I can see eyes and maybe a beak, and I would guess that this is a type of bird or fish. Even if our guess is not correct, we can easily eliminate most other classes, like car, boat, building, and bike, because we can clearly see eyes and none of those classes have eyes. Similarly, looking at the middle image, we can guess from the patterns that this is some kind of a chain. The right image feels more like food or fruit.

Figure 9.2  Visualizing feature maps produced by block1_conv1 filters

Figure 9.3  Visualizing feature maps produced by block3_conv2 filters

Figure 9.4  Visualizing feature maps produced by block5_conv3 filters
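The sidebar's distinction between descent and ascent can be made concrete with a tiny framework-free sketch (plain Python, not from the book): the function f(x) = -(x - 3)**2 has a single maximum at x = 3, so stepping along the positive gradient climbs toward that maximum, while stepping along the negative gradient walks away from it.

```python
# Gradient descent vs. gradient ascent on f(x) = -(x - 3)**2,
# whose gradient is f'(x) = -2 * (x - 3) and whose maximum is at x = 3.

def grad(x):
    return -2.0 * (x - 3.0)

def climb(x, step, steps, ascend):
    # Ascent steps toward the positive gradient; descent toward the negative.
    for _ in range(steps):
        x += step * grad(x) if ascend else -step * grad(x)
    return x

x_max = climb(x=0.0, step=0.1, steps=100, ascend=True)
print(round(x_max, 4))  # gradient ascent converges toward the maximum at x = 3
```

Running the same loop with ascend=False from the same starting point moves x away from 3, because descent seeks a minimum of the function, not a maximum.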
How is this helpful in classification and detection problems? Let's take the left feature map in figure 9.4 as an example. Looking at the visible features like eyes and beaks, I can interpret that the network relies on these two features to identify a bird. With this knowledge about what the network learned about birds, I would guess that it can detect the bird in figure 9.5, because the bird's eye and beak are visible.

Now, let's consider a more adversarial case, where we can see the bird's body but the eye and beak are covered by leaves (figure 9.6). Given that the network adds high weights

Figure 9.5  Example of a bird image with visible eye and beak features

Figure 9.6  Example of an adversarial image of a bird where the eye and beak are not visible but the body is recognizable by a human
on the eye and beak features to recognize a bird, there is a good chance that it might miss this bird, because the bird's main features are hidden. On the other hand, an average human can easily detect the bird in the image. The solution to this problem is to use one of several data-augmentation techniques and to collect more adversarial cases in your training dataset, forcing the network to put higher weights on other features of a bird, like shape and color.

9.1.3 Implementing a feature visualizer
Now that you've seen the visualized examples, it is time to get your hands dirty and develop the code to visualize these activation filters yourself. This section walks through the CNN visualization code implementation from the official Keras documentation, with minor tweaking.² You will learn how to generate patterns that maximize the mean activation of a chosen feature map. You can see the full code in Keras's GitHub repository (http://mng.bz/Md8n).

² François Chollet, "How convolutional neural networks see the world," The Keras Blog, 2016, https://blog.keras.io/category/demo.html.

NOTE  You will run into errors if you try to run the code snippets in this section. These snippets are just meant to illustrate the topic. You are encouraged to check out the full executable code that is downloadable with the book.

First, we load the VGG16 model from the Keras library. To do that, we import VGG16 from Keras and then load the model, which is pretrained on the ImageNet dataset, without including the fully connected classification layers (the top part) of the network:

    from keras.applications.vgg16 import VGG16    # imports the VGG model from Keras

    model = VGG16(weights='imagenet', include_top=False)    # loads the model

Now, let's view the names and output shapes of all the VGG16 layers. We do that to pick the specific layer whose filters we want to visualize:

    for layer in model.layers:                  # loops through the model layers
        if 'conv' not in layer.name:            # checks for a convolutional layer
            continue
        filters, biases = layer.get_weights()   # gets the filter weights
        print(layer.name, layer.output.shape)

When you run this code cell, you will get the output shown in figure 9.7. These are all the convolutional layers contained in the VGG16 network. You can visualize any of their outputs simply by referring to each layer by name, as you will see in the next code snippet.

Let's say we want to visualize the first conv layer: block1_conv1. Note that this layer has 64 filters, each of which has an index from 0 to 63 called filter_index. Now let's
define a loss function that seeks to maximize the activation of a specific filter (filter_index) in a specific layer (layer_name). We also want to compute the gradient using Keras's backend function gradients, and normalize the gradient to avoid very small and very large values, to ensure a smooth gradient ascent process.

In this code snippet, we set the stage for gradient ascent: we define a loss function, compute the gradients, and normalize them:

    from keras import backend as K

    layer_name = 'block1_conv1'
    filter_index = 0    # the filter we want to visualize; this can be any integer
                        # from 0 to 63, as there are 64 filters in that layer

    input_img = model.input    # the symbolic input tensor of the model

    # Gets the symbolic outputs of each key layer (we gave them unique names)
    layer_dict = dict([(layer.name, layer) for layer in model.layers[1:]])
    layer_output = layer_dict[layer_name].output

    # Builds a loss function that maximizes the activation
    # of the nth filter of the layer considered
    loss = K.mean(layer_output[:, :, :, filter_index])

    # Computes the gradient of the input picture with respect to this loss
    grads = K.gradients(loss, input_img)[0]

    # Normalizes the gradient
    grads /= (K.sqrt(K.mean(K.square(grads))) + 1e-5)

    # This function returns the loss and grads given the input picture
    iterate = K.function([input_img], [loss, grads])

We can now use the Keras function we just defined to apply gradient ascent to our filter-activation loss:

    import numpy as np

    step = 1.                          # gradient ascent step size
    img_width, img_height = 128, 128   # dimensions of the generated image

    # Starts from a gray image with some noise
    input_img_data = np.random.random((1, 3, img_width, img_height)) * 20 + 128

    # Runs gradient ascent for 20 steps
    for i in range(20):
        loss_value, grads_value = iterate([input_img_data])
        input_img_data += grads_value * step

Figure 9.7  Output showing the convolutional layers in the downloaded VGG16 network:

    block1_conv1   (None, None, None, 64)
    block1_conv2   (None, None, None, 64)
    block2_conv1   (None, None, None, 128)
    block2_conv2   (None, None, None, 128)
    block3_conv1   (None, None, None, 256)
    block3_conv2   (None, None, None, 256)
    block3_conv3   (None, None, None, 256)
    block4_conv1   (None, None, None, 512)
    block4_conv2   (None, None, None, 512)
    block4_conv3   (None, None, None, 512)
    block5_conv1   (None, None, None, 512)
    block5_conv2   (None, None, None, 512)
    block5_conv3   (None, None, None, 512)
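To see the mechanics of this normalize-then-step loop without running Keras, here is a self-contained NumPy toy (an illustration, not the book's code): the "activation" is simply the mean of the input image multiplied elementwise by a fixed filter w, so the gradient can be written down analytically, and the loop mirrors the iterate function above.

```python
import numpy as np

rng = np.random.default_rng(42)

# A fixed toy "filter": the activation is mean(x * w), so d(loss)/dx = w / x.size.
w = rng.normal(size=(8, 8))

def iterate(x):
    """Return (loss, normalized gradient), mirroring the Keras `iterate` function."""
    loss = float(np.mean(x * w))
    grads = w / x.size
    grads /= np.sqrt(np.mean(np.square(grads))) + 1e-5   # normalize the gradient
    return loss, grads

# Start from a gray image with some noise, as in the book's snippet.
x = rng.random((8, 8)) * 20 + 128
step = 1.0

losses = []
for _ in range(20):                     # run gradient ascent for 20 steps
    loss_value, grads_value = iterate(x)
    x += grads_value * step
    losses.append(loss_value)

print(losses[0] < losses[-1])  # the activation increases under gradient ascent
```

Each step nudges the image in the direction of the (normalized) gradient, so the activation grows monotonically; with a real ConvNet filter the same dynamic carves the filter's preferred pattern into the image.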
Now that we have implemented the gradient ascent, we need to build a function that converts the resulting tensor into a valid image. We will call it deprocess_image(x). Then we save the image to disk so we can view it:

    from keras.preprocessing.image import save_img

    def deprocess_image(x):
        # Normalizes the tensor: centers on 0 and ensures that std is 0.1
        x -= x.mean()
        x /= (x.std() + 1e-5)
        x *= 0.1

        # Clips to [0, 1]
        x += 0.5
        x = np.clip(x, 0, 1)

        # Converts to an RGB array
        x *= 255
        x = x.transpose((1, 2, 0))
        x = np.clip(x, 0, 255).astype('uint8')
        return x

    img = input_img_data[0]
    img = deprocess_image(img)
    save_img('%s_filter_%d.png' % (layer_name, filter_index), img)

The result should be something like figure 9.8.

You can change the visualized filters to deeper layers in later blocks, like block2 and block3, to see more defined features extracted as a result of the network recognizing patterns within patterns through its layers. In the highest layers (block5_conv2, block5_conv3), you will start to recognize textures similar to those found in the objects the network was trained to classify, such as feathers, eyes, and so on.

Figure 9.8  VGG16 layer block1_conv1 visualized
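Because deprocess_image is pure NumPy, it can be sanity-checked on its own. The sketch below restates the function and runs it on random data shaped like the snippet's channels-first (3, width, height) tensor; the 128 x 128 size is an arbitrary choice for illustration.

```python
import numpy as np

def deprocess_image(x):
    """Convert a float tensor in (channels, height, width) order to a uint8 RGB image."""
    x = x.astype('float64').copy()
    x -= x.mean()                 # center on 0 and scale std to 0.1
    x /= (x.std() + 1e-5)
    x *= 0.1
    x += 0.5                      # shift to [0, 1] and clip
    x = np.clip(x, 0, 1)
    x *= 255                      # convert to (height, width, channels) uint8
    x = x.transpose((1, 2, 0))
    x = np.clip(x, 0, 255).astype('uint8')
    return x

raw = np.random.default_rng(0).random((3, 128, 128)) * 20 + 128
img = deprocess_image(raw)
print(img.shape, img.dtype)  # (128, 128, 3) uint8
```

Note that the transpose assumes channels-first input, matching the (1, 3, img_width, img_height) array generated earlier; a channels-last tensor would not need it.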
9.2 DeepDream
DeepDream was developed by Google researchers Alexander Mordvintsev et al. in 2015.³ It is an artistic image-modification technique that creates dream-like, hallucinogenic images using CNNs, as shown in the example in figure 9.9.

For comparison, the original input image is shown in figure 9.10. The original is a scenic image from the ocean, containing two dolphins and other creatures. DeepDream merged both dolphins into one object and replaced one of the faces with what looks like a dog face. Other objects were also deformed in an artistic way, and the sea background has an edge-like texture.

³ Alexander Mordvintsev, Christopher Olah, and Mike Tyka, "Deepdream—A Code Example for Visualizing Neural Networks," Google AI Blog, 2015, http://mng.bz/aROB.

Figure 9.9  DeepDream output image

Figure 9.10  DeepDream input image
DeepDream quickly became an internet sensation, thanks to the trippy pictures it generates, full of algorithmic artifacts, bird feathers, dog faces, and eyes. These artifacts are byproducts of the fact that the DeepDream ConvNet was trained on ImageNet, where dog breeds and bird species are vastly overrepresented. If you tried another network that was pretrained on a dataset with a majority distribution of other objects, such as cars, you would see car features in your output image.

The project started as a fun experiment to run a CNN in reverse and visualize its activation maps, using the same convolutional filter-visualization technique explained in section 9.1: run a ConvNet in reverse, doing gradient ascent on the input in order to maximize the activation of a specific filter in an upper layer of the ConvNet. DeepDream uses this same idea, with a few alterations:

- Input image—In filter visualization, we don't use an input image: we start from a blank image (or a slightly noisy one) and then maximize the filter activations of the convolutional layers to view their features. In DeepDream, we feed an input image to the network, because the goal is to print these visualized features onto an image.
- Maximizing filters versus layers—In filter visualization, as the name implies, we only maximize the activations of specific filters within a layer. In DeepDream, we aim to maximize the activation of an entire layer, to mix together a large number of features at once.
- Octaves—In DeepDream, the input image is processed at different scales, called octaves, to improve the quality of the visualized features. This process will be explained next.

9.2.1 How the DeepDream algorithm works
Similar to the filter-visualization technique, DeepDream uses a network pretrained on a large dataset. The Keras library has many pretrained ConvNets available to use: VGG16, VGG19, Inception, ResNet, and so on.
We can use any of these networks in the DeepDream implementation; we can even train a custom network on our own dataset and use it in the DeepDream algorithm. Intuitively, the choice of network and the data it is pretrained on will affect our visualizations, because different ConvNet architectures result in different learned features; and, of course, different training datasets will produce different features as well.

The creators of DeepDream used an Inception model because they found that, in practice, it produces nice-looking dreams. So in this chapter, we will use the Inception v3 model. You are encouraged to try different models to observe the difference.

The overall idea of DeepDream is that we pass an input image through a pretrained neural network such as the Inception v3 model. At some layer, we calculate the gradient, which tells us how we should change the input image to maximize the activation value at this layer. We continue doing this for 10, 20, or 40 iterations until, eventually, patterns start to emerge in the input image (figure 9.11).
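The octaves mentioned above are simply a geometric ladder of image sizes. The helper below is an illustrative sketch, not the book's code; the scale factor of 1.4 and the three octaves are common DeepDream defaults, not values taken from this chapter. It computes the successive (height, width) shapes an input image would be processed at, from coarse to fine.

```python
def octave_shapes(base_shape, num_octaves=3, octave_scale=1.4):
    """Return the list of (height, width) scales, smallest first.

    Each successive octave is `octave_scale` times larger than the
    previous one, ending at the original image size.
    """
    h, w = base_shape
    shapes = [
        (int(h / octave_scale ** i), int(w / octave_scale ** i))
        for i in range(num_octaves)
    ]
    return shapes[::-1]   # process the image from coarse to fine

print(octave_shapes((560, 560)))  # [(285, 285), (400, 400), (560, 560)]
```

Running the gradient ascent loop at each of these scales in turn, upscaling the dream between octaves, lets small-scale patterns emerge first and then get refined at full resolution.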