| page_no | page_content |
|---|---|
1
|
page_content='MANNINGMohamed Elgendy' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 0}
|
2
|
page_content='Deep Learning for\nVision Systems\nMOHAMED ELGENDY\nMANNING\nSHELTER ISLAND' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 1}
|
3
|
page_content='For online information and ordering of this and other Manning books, please visit\nwww.manning.com . The publisher offers discounts on this book when ordered in quantity. \nFor more information, please contact\nSpecial Sales Department\nManning Publications Co.\n20 Baldwin Road\nPO Box 761\nShelter Island, NY 11964\nEmail: orders@manning.com\n©2020 by Manning Publications Co. All rights reserved.\nNo part of this publication may be reproduced, stored in a retrieval system, or transmitted, in \nany form or by means electronic, mechanical, photocopying, or otherwise, without prior written \npermission of the publisher.\nMany of the designations used by manufacturers and sellers to distinguish their products are \nclaimed as trademarks. Where those designations appear in the book, and Manning Publications \nwas aware of a trademark claim, the designations have been printed in initial caps or all caps.\nRecognizing the importance of preserving what has been written, it is Manning’s policy to have \nthe books we publish printed on acid-free paper, and we exert our best efforts to that end. \nRecognizing also our responsibility to conserve the resources of our planet, Manning books\nare printed on paper that is at least 15 percent recycled and processed without the use of \nelemental chlorine.\nDevelopment editor: Jenny Stout\nTechnical development editor: Alain Couniot\nManning Publications Co. Review editor: Ivan Martinovic ´\n20 Baldwin Road Production editor: Lori Weidert\nPO Box 761 Copy editor: Tiffany Taylor\nShelter Island, NY 11964 Proofreader: Keri Hales\nTechnical proofreader: Al Krinker\nTypesetter: Dennis Dalinnik\nCover designer: Marija Tudor\nISBN: 9781617296192\nPrinted in the United States of America' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 2}
|
4
|
page_content='To my mom, Huda, who taught me perseverance and kindness\nTo my dad, Ali, who taught me patience and purpose\nTo my loving and supportive wife, Amanda, who always inspires me to keep climbing\nTo my two-year-old daughter, Emily, who teaches me every day that AI still has \na long way to go to catch up with even the tiniest humans' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 3}
|
5
|
page_content='vcontents\npreface xiii\nacknowledgments xv\nabout this book xvi\nabout the author xix\nabout the cover illustration xx\nPART 1DEEP LEARNING FOUNDATION ............................. 1\n1 Welcome to computer vision 3\n1.1 Computer vision 4\nWhat is visual perception? 5■Vision systems 5\nSensing devices 7■Interpreting devices 8\n1.2 Applications of computer vision 10\nImage classification 10■Object detection and localization 12\nGenerating art (style transfer) 12■Creating images 13\nFace recognition 15■Image recommendation system 15\n1.3 Computer vision pipeline: The big picture 17\n1.4 Image input 19\nImage as functions 19■How computers see images 21\nColor images 21' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 5}
|
6
|
page_content='CONTENTS vi\n1.5 Image preprocessing 23\nConverting color images to grayscale to reduce computation \ncomplexity 23\n1.6 Feature extraction 27\nWhat is a feature in computer vision? 27■What makes a good \n(useful) feature? 28■Extracting features (handcrafted vs. \nautomatic extracting) 31\n1.7 Classifier learning algorithm 33\n2 Deep learning and neural networks 36\n2.1 Understanding perceptrons 37\nWhat is a perceptron? 38■How does the perceptron learn? 43\nIs one neuron enough to solve complex problems? 43\n2.2 Multilayer perceptrons 45\nMultilayer perceptron architecture 46■What are hidden \nlayers? 47■How many layers, and how many nodes in \neach layer? 47■Some takeaways from this section 50\n2.3 Activation functions 51\nLinear transfer function 53■Heaviside step function (binary \nclassifier) 54■Sigmoid/logistic function 55■Softmax \nfunction 57■Hyperbolic tangent function (tanh) 58\nRectified linear unit 58■Leaky ReLU 59\n2.4 The feedforward process 62\nFeedforward calculations 64■Feature learning 65\n2.5 Error functions 68\nWhat is the error function? 69■Why do we need an error \nfunction? 69■Error is always positive 69■Mean square \nerror 70■Cross-entropy 71■A final note on errors and \nweights 72\n2.6 Optimization algorithms 74\nWhat is optimization? 74■Batch gradient descent 77\nStochastic gradient descent 83■Mini-batch gradient descent 84\nGradient descent takeaways 85\n2.7 Backpropagation 86\nWhat is backpropagation? 87■Backpropagation takeaways 90' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 6}
|
7
|
page_content='CONTENTS vii\n3 Convolutional neural networks 92\n3.1 Image classification using MLP 93\nInput layer 94■Hidden layers 96■Output layer 96\nPutting it all together 97■Drawbacks of MLPs for processing \nimages 99\n3.2 CNN architecture 102\nThe big picture 102■A closer look at feature extraction 104\nA closer look at classification 105\n3.3 Basic components of a CNN 106\nConvolutional layers 107■Pooling layers or subsampling 114\nFully connected layers 119\n3.4 Image classification using CNNs 121\nBuilding the model architecture 121■Number of parameters \n(weights) 123\n3.5 Adding dropout layers to avoid overfitting 124\nWhat is overfitting? 125■What is a dropout layer? 125\nWhy do we need dropout layers? 126■Where does the dropout \nlayer go in the CNN architecture? 127\n3.6 Convolution over color images (3D images) 128\nHow do we perform a convolution on a color image? 129\nWhat happens to the computational complexity? 130\n3.7 Project: Image classification for color images 133\n4 Structuring DL projects and hyperparameter tuning 145\n4.1 Defining performance metrics 146\nIs accuracy the best metric for evaluating a model? 147\nConfusion matrix 147■Precision and recall 148\nF-score 149\n4.2 Designing a baseline model 149\n4.3 Getting your data ready for training 151\nSplitting your data for train/validation/test 151\nData preprocessing 153\n4.4 Evaluating the model and interpreting its \nperformance 156\nDiagnosing overfitting and underfitting 156■Plotting the \nlearning curves 158■' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 7}
|
8
|
page_content='CONTENTS viii\n4.5 Improving the network and tuning hyperparameters 162\nCollecting more data vs. tuning hyperparameters 162\nParameters vs. hyperparameters 163■Neural network \nhyperparameters 163■Network architecture 164\n4.6 Learning and optimization 166\nLearning rate and decay schedule 166■A systematic approach \nto find the optimal learning rate 169■Learning rate decay and \nadaptive learning 170■Mini-batch size 171\n4.7 Optimization algorithms 174\nGradient descent with momentum 174■Adam 175\nNumber of epochs and early stopping criteria 175■Early \nstopping 177\n4.8 Regularization techniques to avoid overfitting 177\nL2 regularization 177■Dropout layers 179\nData augmentation 180\n4.9 Batch normalization 181\nThe covariate shift problem 181■Covariate shift in neural \nnetworks 182■How does batch normalization work? 183\nBatch normalization implementation in Keras 184■Batch \nnormalization recap 185\n4.10 Project: Achieve high accuracy on image \nclassification 185\nPART 2IMAGE CLASSIFICATION AND DETECTION ........... 193\n5 Advanced CNN architectures 195\n5.1 CNN design patterns 197\n5.2 LeNet-5 199\nLeNet architecture 199■LeNet-5 implementation in Keras 200\nSetting up the learning hyperparameters 202■LeNet performance \non the MNIST dataset 203\n5.3 AlexNet 203\nAlexNet architecture 205■Novel features of AlexNet 205\nAlexNet implementation in Keras 207■Setting up the learning \nhyperparameters 210■AlexNet performance 211\n5.4 VGGNet 212\nNovel features of VGGNet 212■VGGNet configurations 213\nLearning hyperparameters 216■' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 8}
|
9
|
page_content='CONTENTS ix\n5.5 Inception and GoogLeNet 217\nNovel features of Inception 217■Inception module: Naive \nversion 218■Inception module with dimensionality \nreduction 220■Inception architecture 223■GoogLeNet in \nKeras 225■Learning hyperparameters 229■Inception \nperformance on the CIFAR dataset 229\n5.6 ResNet 230\nNovel features of ResNet 230■Residual blocks 233■ResNet \nimplementation in Keras 235■Learning hyperparameters 238\nResNet performance on the CIFAR dataset 238\n6 Transfer learning 240\n6.1 What problems does transfer learning solve? 241\n6.2 What is transfer learning? 243\n6.3 How transfer learning works 250\nHow do neural networks learn features? 252■Transferability of \nfeatures extracted at later layers 254\n6.4 Transfer learning approaches 254\nUsing a pretrained network as a classifier 254■Using a pretrained \nnetwork as a feature extractor 256■Fine-tuning 258\n6.5 Choosing the appropriate level of transfer learning 260\nScenario 1: Target dataset is small and similar to the source \ndataset 260■Scenario 2: Target dataset is large and similar \nto the source dataset 261■Scenario 3: Target dataset is small and \ndifferent from the source dataset 261■Scenario 4: Target dataset \nis large and different from the source dataset 261■Recap of the \ntransfer learning scenarios 262\n6.6 Open source datasets 262\nMNIST 263■Fashion-MNIST 264■CIFAR 264\nImageNet 265■MS COCO 266■Google Open Images 267\nKaggle 267\n6.7 Project 1: A pretrained network as a feature \nextractor 268\n6.8 Project 2: Fine-tuning 274\n7 Object detection with R-CNN, SSD, and YOLO 283\n7.1 General object detection framework 285\nRegion proposals 286■Network predictions 287\nNon-maximum suppression (NMS) 288■Object-detector \nevaluation metrics 289' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 9}
|
10
|
page_content='CONTENTS x\n7.2 Region-based convolutional neural networks \n(R-CNNs) 292\nR-CNN 293■Fast R-CNN 297■Faster R-CNN 300\nRecap of the R-CNN family 308\n7.3 Single-shot detector (SSD) 310\nHigh-level SSD architecture 311■Base network 313\nMulti-scale feature layers 315■Non-maximum \nsuppression 319\n7.4 You only look once (YOLO) 320\nHow YOLOv3 works 321■YOLOv3 architecture 324\n7.5 Project: Train an SSD network in a self-driving car \napplication 326\nStep 1: Build the model 328■Step 2: Model configuration 329\nStep 3: Create the model 330■Step 4: Load the data 331\nStep 5: Train the model 333■Step 6: Visualize the loss 334\nStep 7: Make predictions 335\nPART 3GENERATIVE MODELS AND VISUAL EMBEDDINGS ...339\n8 Generative adversarial networks (GANs) 341\n8.1 GAN architecture 343\nDeep convolutional GANs (DCGANs) 345■The discriminator \nmodel 345■The generator model 348■Training the \nGAN 351■GAN minimax function 354\n8.2 Evaluating GAN models 357\nInception score 358■Fréchet inception distance (FID) 358\nWhich evaluation scheme to use 358\n8.3 Popular GAN applications 359\nText-to-photo synthesis 359■Image-to-image translation (Pix2Pix \nGAN) 360■Image super-resolution GAN (SRGAN) 361\nReady to get your hands dirty? 362\n8.4 Project: Building your own GAN 362\n9 DeepDream and neural style transfer 374\n9.1 How convolutional neural networks see the world 375\nRevisiting how neural networks work 376■Visualizing CNN \nfeatures 377■' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 10}
|
11
|
page_content='CONTENTS xi\n9.2 DeepDream 384\nHow the DeepDream algorithm works 385■DeepDream \nimplementation in Keras 387\n9.3 Neural style transfer 392\nContent loss 393■Style loss 396■Total variance loss 397\nNetwork training 397\n10 Visual embeddings 400\n10.1 Applications of visual embeddings 402\nFace recognition 402■Image recommendation systems 403\nObject re-identification 405\n10.2 Learning embedding 406\n10.3 Loss functions 407\nProblem setup and formalization 408■Cross-entropy loss 409\nContrastive loss 410■Triplet loss 411■Naive implementation \nand runtime analysis of losses 412\n10.4 Mining informative data 414\nDataloader 414■Informative data mining: Finding useful \ntriplets 416■Batch all (BA) 419■Batch hard (BH) 419\nBatch weighted (BW) 421■Batch sample (BS) 421\n10.5 Project: Train an embedding network 423\nFashion: Get me items similar to this 424■Vehicle \nre-identification 424■Implementation 426■Testing \na trained model 427\n10.6 Pushing the boundaries of current accuracy 431\nappendix A Getting set up 437\nindex 445' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 11}
|
12
|
page_content='xiiipreface\nTwo years ago, I decided to write a book to teach deep learning for computer vision\nfrom an intuitive perspective. My goal was to develop a comprehensive resource\nthat takes learners from knowing only the basics of machine learning to building\nadvanced deep learning algorithms that they can apply to solve complex computer\nvision problems.\n The problem : In short, as of this moment, there are no books out there that teach\ndeep learning for computer vision the way I wanted to learn about it. As a beginner\nmachine learning engineer, I wanted to read one book that would take me from point\nA to point Z. I planned to specialize in building modern computer vision applications,\nand I wished that I had a single resource that would teach me everything I needed to\ndo two things: 1) use neural networks to build an end-to-end computer vision applica-\ntion, and 2) be comfortable reading and implementing research papers to stay up-to-\ndate with the latest industry advancements. \n I found myself jumping between online courses, blogs, papers, and YouTube\nvideos to create a comprehensive curriculum for myself. It’s challenging to try to\ncomprehend what is happening under the hood on a deeper level: not just a basic\nunderstanding, but how the concepts and theories make sense mathematically. It was\nimpossible to find one comprehensive resource that (horizontally) covered the most\nimportant topics that I needed to learn to work on complex computer vision applica-\ntions while also diving deep enough (vertically) to help me understand the math that\nmakes the magic work.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 13}
|
13
|
page_content='PREFACE xiv\n As a beginner, I searched but couldn’t find anything to meet these needs. So now\nI’ve written it. My goal has been to write a book that not only teaches the content I\nwanted when I was starting out, but also levels up your ability to learn on your own.\n My solution is a comprehensive book that dives deep both horizontally and vertically:\n■Horizontally —This book explains most topics that an engineer needs to learn to\nbuild production-ready computer vision applications, from neural networks\nand how they work to the different types of neural network architectures and\nhow to train, evaluate, and tune the network.\n■Vertically —The book dives a level or two deeper than the code and explains\nintuitively (and gently) how the math works under the hood, to empower you\nto be comfortable reading and implementing research papers or even invent-\ning your own techniques.\nAt the time of writing, I believe this is the only deep learning for vision systems\nresource that is taught this way. Whether you are looking for a job as a computer\nvision engineer, want to gain a deeper understanding of advanced neural networks\nalgorithms in computer vision, or want to build your product or startup, I wrote this\nbook with you in mind. I hope you enjoy it.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 14}
|
14
|
page_content='xvacknowledgments\nThis book was a lot of work. No, make that really a lot of work! But I hope you will find it\nvaluable. There are quite a few people I’d like to thank for helping me along the way. \n I would like to thank the people at Manning who made this book possible: pub-\nlisher Marjan Bace and everyone on the editorial and production teams, including\nJennifer Stout, Tiffany Taylor, Lori Weidert, Katie Tennant, and many others who\nworked behind the scenes. \n Many thanks go to the technical peer reviewers led by Alain Couniot—Al Krinker,\nAlbert Choy, Alessandro Campeis, Bojan Djurkovic, Burhan ul haq, David Fombella\nPombal, Ishan Khurana, Ita Cirovic Donev, Jason Coleman, Juan Gabriel Bono, Juan\nJosé Durillo Barrionuevo, Michele Adduci, Millad Dagdoni, Peter Hraber, Richard\nVaughan, Rohit Agarwal, Tony Holdroyd, Tymoteusz Wolodzko, and Will Fuger—and\nthe active readers who contributed their feedback in the book forums. Their contribu-\ntions included catching typos, code errors and technical mistakes, as well as making\nvaluable topic suggestions. Each pass through the review process and each piece of\nfeedback implemented through the forum topics shaped and molded the final ver-\nsion of this book.\n Finally, thank you to the entire Synapse Technology team. You’ve created some-\nthing that’s incredibly cool. Thank you to Simanta Guatam, Aleksandr Patsekin, Jay\nPatel, and others for answering my questions and brainstorming ideas for the book.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 15}
|
15
|
page_content='xviabout this book\nWho should read this book\nIf you know the basic machine learning framework, can hack around in Python, and\nwant to learn how to build and train advanced, production-ready neural networks to\nsolve complex computer vision problems, I wrote this book for you. The book was\nwritten for anyone with intermediate Python experience and basic machine learning\nunderstanding who wishes to explore training deep neural networks and learn to\napply deep learning to solve computer vision problems.\n When I started writing the book, my primary goal was as follows: “I want to write a\nbook to grow readers’ skills, not teach them content.” To achieve this goal, I had to\nkeep an eye on two main tenets:\n1Teach you how to learn . I don’t want to read a book that just goes through a set of\nscientific facts. I can get that on the internet for free. If I read a book, I want to\nfinish it having grown my skillset so I can study the topic further. I want to learn\nhow to think about the presented solutions and come up with my own.\n2Go very deep . If I’m successful in satisfying the first tenet, that makes this one\neasy. If you learn how to learn new concepts, that allows me to dive deep with-\nout worrying that you might fall behind. This book doesn’t avoid the math\npart of the learning, because understanding the mathematical equations will\nempower you with the best skill in the AI world: the ability to read research\npapers, compare innovations, and make the right decisions about implement-\ning new concepts in your own problems. But I promise to introduce only the\nmathematical concepts you need, and I promise to present them in a way that' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 16}
|
16
|
page_content='ABOUT THIS BOOK xvii\ndoesn’t interrupt your flow of understanding the concepts without the math part if you prefer.\nHow this book is organized: A roadmap\nThis book is structured into three parts. The first part explains deep learning in detail as a foundation for the remaining topics. I strongly recommend that you not skip this section, because it dives deep into neural network components and definitions and explains all the notions required to be able to understand how neural networks work under the hood. After reading part 1, you can jump directly to topics of interest in the remaining chapters. Part 2 explains deep learning techniques to solve object classification and detection problems, and part 3 explains deep learning techniques to generate images and visual embeddings. In several chapters, practical projects implement the topics discussed.\nAbout the code\nAll of this book’s code examples use open source frameworks that are free to download. We will be using Python, TensorFlow, Keras, and OpenCV. Appendix A walks you through the complete setup. I also recommend that you have access to a GPU if you want to run the book projects on your machine, because chapters 6–10 contain more complex projects to train deep networks that will take a long time on a regular CPU. Another option is to use a cloud environment like Google Colab for free or other paid options.\nExamples of source code occur both in numbered listings and in line with normal text. In both cases, source code is formatted in a fixed-width font like this to separate it from ordinary text. Sometimes code is also in bold to highlight code that has changed from previous steps in the chapter, such as when a new feature adds to an existing line of code.\nIn many cases, the original source code has been reformatted; we’ve added line breaks and reworked indentation to accommodate the available page space in the book. In rare cases, even this was not enough, and listings include line-continuation markers (➥). Additionally, comments in the source code have often been removed from the listings when the code is described in the text. Code annotations accompany many of the listings, highlighting important concepts.\nThe code for the examples in this book is available for download from the Manning website at www.manning.com/books/deep-learning-for-vision-systems and from GitHub at https://github.com/moelgendy/deep_learning_for_vision_systems.\nliveBook discussion forum\nPurchase of Deep Learning for Vision Systems includes free access to a private web forum run by Manning Publications where you can make comments about the book, ask technical questions, and receive help from the author and from other users. To' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 17}
|
17
|
page_content='ABOUT THIS BOOK xviii\naccess the forum, go to https://livebook.manning.com/#!/book/deep-learning-for-vision-systems/discussion. You can also learn more about Manning’s forums and the rules of conduct at https://livebook.manning.com/#!/discussion.\nManning’s commitment to our readers is to provide a venue where a meaningful dialogue between individual readers and between readers and the author can take place. It is not a commitment to any specific amount of participation on the part of the author, whose contribution to the forum remains voluntary (and unpaid). We suggest you try asking the author some challenging questions lest his interest stray! The forum and the archives of previous discussions will be accessible from the publisher’s website as long as the book is in print.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 18}
|
18
|
page_content='xixabout the author\nMohamed Elgendy is the vice president of engineering at Rakuten, where he is lead-\ning the development of its AI platform and products. Previously, he served as head of\nengineering at Synapse Technology, building proprietary computer vision applica-\ntions to detect threats at security checkpoints worldwide. At Amazon, Mohamed built\nand managed the central AI team that serves as a deep learning think tank for Ama-\nzon engineering teams like AWS and Amazon Go. He also developed the deep learn-\ning for computer vision curriculum at Amazon’s Machine University. Mohamed\nregularly speaks at AI conferences like Amazon’s DevCon, O’Reilly’s AI conference,\nand Google’s I/O.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 19}
|
19
|
page_content='xxabout the cover illustration\nThe figure on the cover of Deep Learning for Vision Systems depicts Ibn al-Haytham, an\nArab mathematician, astronomer, and physicist who is often referred to as “the father\nof modern optics” due to his significant contributions to the principles of optics and\nvisual perception. The illustration is modified from the frontispiece of a fifteenth-\ncentury edition of Johannes Hevelius’s work Selenographia . \n In his book Kitab al-Manazir (Book of Optics ), Ibn al-Haytham was the first to explain\nthat vision occurs when light reflects from an object and then passes to one’s eyes. He\nwas also the first to demonstrate that vision occurs in the brain, rather than in the\neyes—and many of these concepts are at the heart of modern vision systems. You will\nsee the correlation when you read chapter 1 of this book.\n Ibn al-Haytham has been a great inspiration for me as I work and innovate in this\nfield. By honoring his memory on the cover of this book, I hope to inspire fellow prac-\ntitioners that our work can live and inspire others for thousands of years.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 20}
|
20
|
page_content='Part 1\nDeep learning foundation\nC omputer vision is a technological area that’s been advancing rapidly\nthanks to the tremendous advances in artificial intelligence and deep learning\nthat have taken place in the past few years. Neural networks now help self-driving\ncars to navigate around other cars, pedestrians, and other obstacles; and recom-\nmender agents are getting smarter about suggesting products that resemble other\nproducts. Face-recognition technologies are becoming more sophisticated, too,\nenabling smartphones to recognize faces before unlocking a phone or a door.\nComputer vision applications like these and others have become a staple in our\ndaily lives. However, by moving beyond the simple recognition of objects, deep\nlearning has given computers the power to imagine and create new things, like\nart that didn’t exist previously, new human faces, and other objects. Part 1 of this\nbook looks at the foundations of deep learning, different forms of neural net-\nworks, and structured projects that go a bit further with concepts like hyper-\nparameter tuning.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 21}
|
21
|
page_content='3\nWelcome to computer vision\nThis chapter covers\n• Components of the vision system\n• Applications of computer vision\n• Understanding the computer vision pipeline\n• Preprocessing images and extracting features\n• Using classifier learning algorithms\nHello! I’m very excited that you are here. You are making a great decision—to grasp deep learning (DL) and computer vision (CV). The timing couldn’t be more perfect. CV is an area that’s been advancing rapidly, thanks to the huge AI and DL advances of recent years. Neural networks are now allowing self-driving cars to figure out where other cars and pedestrians are and navigate around them. We are using CV applications in our daily lives more and more with all the smart devices in our homes—from security cameras to door locks. CV is also making face recognition work better than ever: smartphones can recognize faces for unlocking, and smart locks can unlock doors. I wouldn’t be surprised if sometime in the near future, your couch or television is able to recognize specific people in your house and react according to their personal preferences. It’s not just about recognizing' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 23}
|
22
|
page_content='4 CHAPTER 1 Welcome to computer vision\nobjects—DL has given computers the power to imagine and create new things like artwork; new objects; and even unique, realistic human faces.\nThe main reason that I’m excited about deep learning for computer vision, and what drew me to this field, is how rapid advances in AI research are enabling new applications to be built every day and across different industries, something not possible just a few years ago. The unlimited possibilities of CV research are what inspired me to write this book. By learning these tools, perhaps you will be able to invent new products and applications. Even if you end up not working on CV per se, you will find many concepts in this book useful for some of your DL algorithms and architectures. That is because while the main focus is CV applications, this book covers the most important DL architectures, such as artificial neural networks (ANNs), convolutional networks (CNNs), generative adversarial networks (GANs), transfer learning, and many more, which are transferable to other domains like natural language processing (NLP) and voice user interfaces (VUIs).\nThe high-level layout of this chapter is as follows:\n• Computer vision intuition—We will start with visual perception intuition and learn the similarities between humans and machine vision systems. We will look at how vision systems have two main components: a sensing device and an interpreting device. Each is tailored to fulfill a specific task.\n• Applications of CV—Here, we will take a bird’s-eye view of the DL algorithms used in different CV applications. We will then discuss vision in general for different creatures.\n• Computer vision pipeline—Finally, we will zoom in on the second component of vision systems: the interpreting device. We will walk through the sequence of steps taken by vision systems to process and understand image data. These are referred to as a computer vision pipeline. The CV pipeline is composed of four main steps: image input, image preprocessing, feature extraction, and an ML model to interpret the image. We will talk about image formation and how computers see images. Then, we will quickly review image-processing techniques and extracting features.\nReady? Let’s get started!\n1.1 Computer vision\nThe core concept of any AI system is that it can perceive its environment and take actions based on its perceptions. Computer vision is concerned with the visual perception part: it is the science of perceiving and understanding the world through images and videos by constructing a physical model of the world so that an AI system can then take appropriate actions. For humans, vision is only one aspect of perception. We perceive the world through our sight, but also through sound, smell, and our other senses. It is similar with AI systems—vision is just one way to understand the world. Depending on the application you are building, you select the sensing device that best captures the world.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 24}
|
23
|
page_content='5 Computer vision\n1.1.1 What is visual perception?\nVisual perception , at its most basic, is the act of observing patterns and objects through\nsight or visual input. With an autonomous vehicle, for example, visual perception means\nunderstanding the surrounding objects and their specific details—such as pedestrians,\nor whether there is a particular lane the vehicle needs to be centered in—and detecting\ntraffic signs and understanding what they mean. That’s why the word perception is part\nof the definition. We are not just looking to capture the surrounding environment.\nWe are trying to build systems that can actually understand that environment through\nvisual input.\n1.1.2 Vision systems\nIn past decades, traditional image-processing techniques were considered CV systems,\nbut that is not totally accurate. A machine processing an image is completely different\nfrom that machine understanding what’s happening within the image, which is not a\ntrivial task. Image processing is now just a piece of a bigger, more complex system that\naims to interpret image content.\nHUMAN VISION SYSTEMS\nAt the highest level, vision systems are pretty much the same for humans, animals,\ninsects, and most living organisms. They consist of a sensor or an eye to capture the\nimage and a brain to process and interpret the image. The system then outputs a\nprediction of the image components based on the data extracted from the image\n(figure 1.1).\n Let’s see how the human vision system works. Suppose we want to interpret the\nimage of dogs in figure 1.1. We look at it and directly understand that the image con-\nsists of a bunch of dogs (three, to be specific). It comes pretty natural to us to classify\nPOOL\nHuman vision system\nEye (sensing device\nresponsible for capturing\nimages of the environment)Brain (interpreting device\nresponsible for understanding\nthe image content)\nDogs\ngrassInterpretation\nFigure 1.1 The human vision system uses the eye and brain to sense and interpret an image.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 25}
|
24
|
page_content='6 CHAPTER 1Welcome to computer vision\nand detect objects in this image because we have been trained over the years to iden-\ntify dogs. \n Suppose someone shows you a picture of a dog for the first time—you definitely\ndon’t know what it is. Then they tell you that this is a dog. After a couple experiments\nlike this, you will have been trained to identify dogs. Now, in a follow-up exercise, they\nshow you a picture of a horse. When you look at the image, your brain starts analyzing\nthe object features: hmmm, it has four legs, long face, long ears. Could it be a dog?\n“Wrong: this is a horse,” you’re told. Then your brain adjusts some parameters in its\nalgorithm to learn the differences between dogs and horses. Congratulations! You just\ntrained your brain to classify dogs and horses. Can you add more animals to the equa-\ntion, like cats, tigers, cheetahs, and so on? Definitely. You can train your brain to iden-\ntify almost anything. The same is true of computers. You can train machines to learn\nand identify objects, but humans are much more intuitive than machines. It takes\nonly a few images for you to learn to identify most objects, whereas with machines, it\ntakes thousands or, in more complex cases, millions of image samples to learn to\nidentify objects.\nAI VISION SYSTEMS\nScientists were inspired by the human vision system and in recent years have done an\namazing job of copying visual ability with machines. To mimic the human vision sys-\ntem, we need the same two main components: a sensing device to mimic the function\nof the eye and a powerful algorithm to mimic the brain function in interpreting and\nclassifying image content (figure 1.2).\n The ML perspective\nLet’s look at the previous example from the machine learning perspective:\n\uf0a1You learned to identify dogs by looking at examples of several dog-labeled\nimages. This approach is called supervised learning.\n\uf0a1Labeled data is data for which you already know the target answer. You were\nshown a sample image of a dog and told that it was a dog. Your brain learned\nto associate the features you saw with this label: dog.\n\uf0a1You were then shown a different object, a horse, and asked to identify it. At\nfirst, your brain thought it was a dog, because you hadn’t seen horses before,\nand your brain confused horse features with dog features. When you were\ntold that your prediction was wrong, your brain adjusted its parameters to\nlearn horse features. “Yes, both have four legs, but the horse’s legs are lon-\nger. Longer legs indicate a horse.” We can run this experiment many times\nuntil the brain makes no mistakes. This is called training by trial and error .' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 26}
|
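The "training by trial and error" idea described in the ML-perspective sidebar above can be sketched in a few lines of Python. This is only an illustration, not code from the book: the single leg-length feature, the labels, the starting threshold, and the learning rate are all made-up values chosen to show how a parameter gets nudged whenever a prediction is wrong.

```python
# Toy "training by trial and error": adjust one parameter (a decision
# threshold on leg length) whenever a labeled example is misclassified.
# All numbers here are invented for illustration.
examples = [(30, "dog"), (35, "dog"), (90, "horse"), (100, "horse")]  # (leg length, label)

threshold = 110.0        # initial guess: legs longer than this mean "horse"
learning_rate = 5.0

for _ in range(10):                      # several passes over the labeled data
    mistakes = 0
    for leg_length, label in examples:
        prediction = "horse" if leg_length > threshold else "dog"
        if prediction != label:          # the "Wrong: this is a horse" feedback
            mistakes += 1
            # Nudge the threshold in the direction that fixes this mistake.
            threshold += learning_rate if label == "dog" else -learning_rate
    if mistakes == 0:                    # the "brain" makes no mistakes -> stop
        break

print(f"learned threshold: {threshold:.1f}")
```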
25
|
page_content='7 Computer vision\n1.1.3 Sensing devices\nVision systems are designed to fulfill a specific task. An important aspect of design is\nselecting the best sensing device to capture the surroundings of a specific environ-\nment, whether that is a camera, radar, X-ray, CT scan, Lidar, or a combination of\ndevices to provide the full scene of an environment to fulfill the task at hand. \n Let’s look at the autonomous vehicle (AV) example again. The main goal of the\nAV vision system is to allow the car to understand the environment around it and\nmove from point A to point B safely and in a timely manner. To fulfill this goal, vehi-\ncles are equipped with a combination of cameras and sensors that can detect 360\ndegrees of movement—pedestrians, cyclists, vehicles, roadwork, and other objects—\nfrom up to three football fields away. \n Here are some of the sensing devices usually used in self-driving cars to perceive\nthe surrounding area:\n\uf0a1Lidar, a radar-like technique, uses invisible pulses of light to create a high-\nresolution 3D map of the surrounding area.\n\uf0a1Cameras can see street signs and road markings but cannot measure distance.\n\uf0a1Radar can measure distance and velocity but cannot see in fine detail. \nMedical diagnosis applications use X-rays or CT scans as sensing devices. Or maybe\nyou need to use some other type of radar to capture the landscape for agricultural\nvision systems. There are a variety of vision systems, each designed to perform a partic-\nular task. The first step in designing vision systems is to identify the task they are built\nfor. This is something to keep in mind when designing end-to-end vision systems.\nRecognizing images\nAnimals, humans, and insects all have eyes as sensing devices. But not all eyes have\nthe same structure, output image quality, and resolution. They are tailored to the spe-\ncific needs of the creature. Bees, for instance, and many other insects, have compoundComputer vision system\nDogs\ngrassOutput Interpreting device Sensing device\nFigure 1.2 The components of the computer vision system are a sensing device and an interpreting \ndevice.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 27}
|
26
|
page_content='8 CHAPTER 1Welcome to computer vision\n1.1.4 Interpreting devices\nComputer vision algorithms are typically employed as interpreting devices. The inter-\npreter is the brain of the vision system. Its role is to take the output image from the\nsensing device and learn features and patterns to identify objects. So we need to build\na brain. Simple! Scientists were inspired by how our brains work and tried to reverse\nengineer the central nervous system to get some insight on how to build an artificial\nbrain. Thus, artificial neural networks (ANNs) were born (figure 1.3).\n In figure 1.3, we can see an analogy between biological neurons and artificial sys-\ntems. Both contain a main processing element, a neuron , with input signals ( x1, x2, …,\nxn) and an output.\n The learning behavior of biological neurons inspired scientists to create a network\nof neurons that are connected to each other. Imitating how information is processed\nin the human brain, each artificial neuron fires a signal to all the neurons that it’s con-\nnected to when enough of its input signals are activated. Thus, neurons have a very\nsimple mechanism on the individual level (as you will see in the next chapter); but\nwhen you have millions of these neurons stacked in layers and connected together,\neach neuron is connected to thousands of other neurons, yielding a learning behav-\nior. Building a multilayer neural network is called deep learning (figure 1.4).(continued)\neyes that consist of multiple lenses (as many as 30,000 lenses in a single compound\neye). Compound eyes have low resolution, which makes them not so good at recog-\nnizing objects at a far distance. But they are very sensitive to motion, which is essen-\ntial for survival while flying at high speed. Bees don’t need high-resolution pictures.\nTheir vision systems are built to allow them to pick up the smallest movements while\nflying fast.\nCompound eyes are low resolution but sensitive to motion.Compound eyes How bees see a flower' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 28}
|
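The artificial neuron described on the page above — input signals x1…xn combined and an output that fires when enough input is active — maps directly onto a few lines of NumPy. This is a generic sketch of that idea rather than an implementation from the book; the weights, bias, and step activation are placeholder choices (chapter 2 introduces perceptrons and other activation functions properly).

```python
import numpy as np

def artificial_neuron(x, weights, bias):
    """One neuron: weighted sum of the inputs, then an activation decides whether it fires."""
    z = np.dot(weights, x) + bias      # combine the input signals x1..xn
    return 1.0 if z > 0 else 0.0       # step activation: fire only if enough input is active

# Example inputs and made-up parameters (for illustration only).
x = np.array([0.9, 0.1, 0.4])          # input signals x1, x2, x3
weights = np.array([0.7, -0.2, 0.5])   # how strongly each input is connected
bias = -0.5

print(artificial_neuron(x, weights, bias))   # -> 1.0: the neuron fires
```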
27
|
page_content='9 Computer vision\nDL methods learn representations through a sequence of transformations of data through layers of neurons. In this book, we will explore different DL architectures, such as ANNs and convolutional neural networks, and how they are used in CV applications.\nFigure 1.3 The similarities between biological neurons and artificial systems: a biological neuron receives information through dendrites and sends output through synapses, while an artificial neuron computes an output f(x) from inputs x1, x2, …, xn.\nFigure 1.4 Deep learning involves layers of neurons in a network: input, layers of neurons, output.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 29}
|
28
|
page_content='10 CHAPTER 1Welcome to computer vision\nCAN MACHINE LEARNING ACHIEVE BETTER PERFORMANCE THAN THE HUMAN BRAIN ?\nWell, if you had asked me this question 10 years ago, I would’ve probably said no,\nmachines cannot surpass the accuracy of a human. But let’s take a look at the follow-\ning two scenarios: \n\uf0a1Suppose you were given a book of 10,000 dog images, classified by breed, and\nyou were asked to learn the properties of each breed. How long would it take\nyou to study the 130 breeds in 10,000 images? And if you were given a test of\n100 dog images and asked to label them based on what you learned, out of the\n100, how many would you get right? Well, a neural network that is trained in a\ncouple of hours can achieve more than 95% accuracy.\n\uf0a1On the creation side, a neural network can study the patterns in the strokes, col-\nors, and shading of a particular piece of art. Based on this analysis, it can then\ntransfer the style from the original artwork into a new image and create a new\npiece of original art within a few seconds.\nRecent AI and DL advances have allowed machines to surpass human visual ability in\nmany image classification and object detection applications, and capacity is rapidly\nexpanding to many other applications. But don’t take my word for it. In the next sec-\ntion, we’ll discuss some of the most popular CV applications using DL technology.\n1.2 Applications of computer vision\nComputers began to be able to recognize human faces in images decades ago, but now\nAI systems are rivaling the ability of computers to classify objects in photos and videos.\nThanks to the dramatic evolution in both computational power and the amount of data\navailable, AI and DL have managed to achieve superhuman performance on many com-\nplex visual perception tasks like image search and captioning, image and video classifi-\ncation, and object detection. Moreover, deep neural networks are not restricted to\nCV tasks: they are also successful at natural language processing and voice user inter-\nface tasks. In this book, we’ll focus on visual applications that are applied in CV tasks. \n DL is used in many computer vision applications to recognize objects and their\nbehavior. In this section, I’m not going to attempt to list all the CV applications that are\nout there. I would need an entire book for that. Instead, I’ll give you a bird’s-eye view of\nsome of the most popular DL algorithms and their possible applications across different\nindustries. Among these industries are autonomous cars, drones, robots, in-store cam-\neras, and medical diagnostic scanners that can detect lung cancer in early stages.\n1.2.1 Image classification\nImage classification is the task of assigning to an image a label from a predefined set of\ncategories. A convolutional neural network is a neural network type that truly shines in\nprocessing and classifying images in many different applications:\n\uf0a1Lung cancer diagnosis —Lung cancer is a growing problem. The main reason lung\ncancer is very dangerous is that when it is diagnosed, it is usually in the middle or' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 30}
|
29
|
page_content='11 Applications of computer vision\nlate stages. When diagnosing lung cancer, doctors typically use their eyes to\nexamine CT scan images, looking for small nodules in the lungs. In the early\nstages, the nodules are usually very small and hard to spot. Several CV compa-\nnies decided to tackle this challenge using DL technology. \nAlmost every lung cancer starts as a small nodule, and these nodules appear\nin a variety of shapes that doctors take years to learn to recognize. Doctors are\nvery good at identifying mid- and large-size nodules, such as 6–10 mm. But\nwhen nodules are 4 mm or smaller, sometimes doctors have difficulty identify-\ning them. DL networks, specifically CNNs, are now able to learn these features\nautomatically from X-ray and CT scan images and detect small nodules early,\nbefore they become deadly (figure 1.5).\n\uf0a1Traffic sign recognition —Traditionally, standard CV methods were employed to\ndetect and classify traffic signs, but this approach required time-consuming man-\nual work to handcraft important features in images. Instead, by applying DL to\nthis problem, we can create a model that reliably classifies traffic signs, learning to\nidentify the most appropriate features for this problem by itself (figure 1.6).\nNOTE Increasing numbers of image classification tasks are being solved with\nconvolutional neural networks. Due to their high recognition rate and fast\nexecution, CNNs have enhanced most CV tasks, both pre-existing and new.\nJust like the cancer diagnosis and traffic sign examples, you can feed tens or\nhundreds of thousands of images into a CNN to label them into as many\nclasses as you want. Other image classification examples include identifying\npeople and objects, classifying different animals (like cats versus dogs versus\nhorses), different breeds of animals, types of land suitable for agriculture, and\nso on. In short, if you have a set of labeled images, convolutional networks can\nclassify them into a set of predefined classes. CT scanTumor\nX-ray\nTumor\nFigure 1.5 Vision systems are now able to learn patterns in X-ray images to identify tumors in earlier \nstages of development.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 31}
|
30
|
page_content='12 CHAPTER 1Welcome to computer vision\n1.2.2 Object detection and localization\nImage classification problems are the most basic applications for CNNs. In these prob-\nlems, each image contains only one object, and our task is to identify it. But if we aim to\nreach human levels of understanding, we have to add complexity to these networks so they\ncan recognize multiple objects and their locations in an image. To do that, we can build\nobject detection systems like YOLO (you only look once), SSD (single-shot detector),\nand Faster R-CNN, which not only classify images but also can locate and detect each\nobject in images that contain multiple objects. These DL systems can look at an image,\nbreak it up into smaller regions, and label each region with a class so that a variable num-\nber of objects in a given image can be localized and labeled (figure 1.7). You can imag-\nine that such a task is a basic prerequisite for applications like autonomous systems.\n1.2.3 Generating art (style transfer)\nNeural style transfer , one of the most interesting CV applications, is used to transfer the\nstyle from one image to another. The basic idea of style transfer is this: you take one\nimage—say, of a city—and then apply a style of art to that image—say, The Starry Night\n(by Vincent Van Gogh)—and output the same city from the original image, but look-\ning as though it was painted by Van Gogh (figure 1.8). \n This is actually a neat application. The astonishing thing, if you know any painters,\nis that it can take days or even weeks to finish a painting, and yet here is an application\nthat can paint a new image inspired by an existing style in a matter of seconds. \nFigure 1.6 Vision systems can detect traffic signs with very high performance.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 32}
|
31
|
page_content='13 Applications of computer vision\n1.2.4 Creating images \nAlthough the earlier examples are truly impressive CV applications of AI, this is\nwhere I see the real magic happening: the magic of creation. In 2014, Ian Good-\nfellow invented a new DL model that can imagine new things called generative\nadversarial networks (GANs). The name makes them sound a little intimidating,\nbut I promise you that they are not. A GAN is an evolved CNN architecture that isBicycleClouds\nPedestrian\nFigure 1.7 Deep learning systems can segment objects in an image.\n+\nStyle Generated art\n=Original image\nFigure 1.8 Style transfer from Van Gogh’s The Starry Night onto the original image, producing a piece of art that \nfeels as though it was created by the original artist' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 33}
|
32
|
page_content='14 CHAPTER 1 Welcome to computer vision\nconsidered a major advancement in DL. So when you understand CNNs, GANs will make a lot more sense to you.\nGANs are sophisticated DL models that generate stunningly accurate synthesized images of objects, people, and places, among other things. If you give them a set of images, they can make entirely new, realistic-looking images. For example, StackGAN is one of the GAN architecture variations that can use a textual description of an object to generate a high-resolution image of the object matching that description. This is not just running an image search on a database. These “photos” have never been seen before and are totally imaginary (figure 1.9).\nThe GAN is one of the most promising advancements in machine learning in recent years. Research into GANs is new, and the results are overwhelmingly promising. Most of the applications of GANs so far have been for images. But it makes you wonder: if machines are given the power of imagination to create pictures, what else can they create? In the future, will your favorite movies, music, and maybe even books be created by computers? The ability to synthesize one data type (text) to another (image) will eventually allow us to create all sorts of entertainment using only detailed text descriptions.\nGANs create artwork\nIn October 2018, an AI-created painting called The Portrait of Edmond Belamy sold for $432,500. The artwork features a fictional person named Edmond de Belamy, possibly French and—to judge by his dark frock coat and plain white collar—a man of the church.\nFigure 1.9 Generative adversarial networks (GANs) can create new, “made-up” images from a set of existing images. Example text descriptions: “This small blue bird has a short, pointy beak and brown on its wings.” “This bird is completely red with black wings and a pointy beak.”' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 34}
|
33
|
page_content='15 Applications of computer vision\n1.2.5 Face recognition \nFace recognition (FR) allows us to exactly identify or tag an image of a person. Day-to-\nday applications include searching for celebrities on the web and auto-tagging friends\nand family in images. Face recognition is a form of fine-grained classification.\n The famous Handbook of Face Recognition (Li et al., Springer, 2011) categorizes two\nmodes of an FR system:\n\uf0a1Face identification —Face identification involves one-to-many matches that com-\npare a query face image against all the template images in the database to deter-\nmine the identity of the query face. Another face recognition scenario involves\na watchlist check by city authorities, where a query face is matched to a list of\nsuspects (one-to-few matches). \n\uf0a1Face verification —Face verification involves a one-to-one match that compares a\nquery face image against a template face image whose identity is being claimed\n(figure 1.10).\n1.2.6 Image recommendation system\nIn this task, a user seeks to find similar images with respect to a given query image.\nShopping websites provide product suggestions (via images) based on the selection of\na particular product, for example, showing a variety of shoes similar to those the user\nselected. An example of an apparel search is shown in figure 1.11.The artwork was created by a team of three 25-year-old French students using\nGANs. The network was trained on a dataset of 15,000 portraits painted between\nthe fourteenth and twentieth centuries, and then it created one of its own. The team\nprinted the image, framed it, and signed it with part of a GAN algorithm.AI-generated artwork featuring a fictional \nperson named Edmond de Belamy sold for \n$432,500.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 35}
|
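The two FR modes described on the page above differ only in how a query is compared: one-to-one against the claimed identity (verification) or one-to-many against a whole template database (identification). The sketch below illustrates that difference with tiny made-up embedding vectors and an arbitrary distance threshold; real systems use learned face embeddings (chapter 10 covers the loss functions used to train them).

```python
import numpy as np

def distance(a, b):
    """Euclidean distance between two face embeddings (smaller = more similar)."""
    return float(np.linalg.norm(a - b))

# Made-up 3-D embeddings standing in for real learned face descriptors.
templates = {
    "person_1": np.array([0.9, 0.1, 0.0]),
    "person_2": np.array([0.1, 0.8, 0.3]),
}
query = np.array([0.85, 0.15, 0.05])
THRESHOLD = 0.5   # arbitrary acceptance threshold for this toy example

# Face verification: one-to-one match against the claimed identity.
claimed = "person_1"
print("verification:", distance(query, templates[claimed]) < THRESHOLD)      # True

# Face identification: one-to-many match against every template in the database.
best = min(templates, key=lambda name: distance(query, templates[name]))
print("identification:", best if distance(query, templates[best]) < THRESHOLD else "unknown")
```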
34
|
page_content='16 CHAPTER 1 Welcome to computer vision\nFigure 1.10 Example of face verification (left) and face identification (right)\nFigure 1.11 Apparel search. The leftmost image in each row is the query/clicked image, and the subsequent columns show similar apparel. (Source: Liu et al., 2016.)' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 36}
|
35
|
page_content='17 Computer vision pipeline: The big picture\n1.3 Computer vision pipeline: The big picture\nOkay, now that I have your attention, let’s dig one level deeper into CV systems.\nRemember that earlier in this chapter, we discussed how vision systems are composed\nof two main components: sensing devices and interpreting devices (figure 1.12 offers\na reminder). In this section, we will take a look at the pipeline the interpreting device\ncomponent uses to process and understand images.\nApplications of CV vary, but a typical vision system uses a sequence of distinct steps to\nprocess and analyze image data. These steps are referred to as a computer vision pipeline .\nMany vision applications follow the flow of acquiring images and data, processing that\ndata, performing some analysis and recognition steps, and then finally making a pre-\ndiction based on the extracted information (figure 1.13).\nLet’s apply the pipeline in figure 1.13 to an image classifier example. Suppose we have\nan image of a motorcycle, and we want the model to predict the probability of the\nobject from the following classes: motorcycle, car, and dog (see figure 1.14).Computer vision system\nDogs\ngrassOutput Interpreting device Sensing device\nFigure 1.12 Focusing on the interpreting device in computer vision systems \n1. Input data 2. Preprocessing 3. Feature extraction 4. ML model\n• Images\n• Videos (image\nframes)Getting the data\nready:\n• Standardize images\n• Color transformation\n• More...• Find distinguishing\ninformation about\nthe image• Learn from the\nextracted features\nto predict and\nclassify objects\nFigure 1.13 The computer vision pipeline, which takes input data, processes it, extracts \ninformation, and then sends it to the machine learning model to learn' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 37}
|
36
|
page_content='18 CHAPTER 1 Welcome to computer vision\nDEFINITIONS An image classifier is an algorithm that takes in an image as input and outputs a label or “class” that identifies that image. A class (also called a category) in machine learning is the output category of your data.\nHere is how the image flows through the classification pipeline:\n1. A computer receives visual input from an imaging device like a camera. This input is typically captured as an image or a sequence of images forming a video.\n2. Each image is then sent through some preprocessing steps whose purpose is to standardize the images. Common preprocessing steps include resizing an image, blurring, rotating, changing its shape, or transforming the image from one color to another, such as from color to grayscale. Only by standardizing the images—for example, making them the same size—can you then compare them and further analyze them.\n3. We extract features. Features are what help us define objects, and they are usually information about object shape or color. For example, some features that distinguish a motorcycle are the shape of the wheels, headlights, mudguards, and so on. The output of this process is a feature vector that is a list of unique shapes that identify the object.\n4. The features are fed into a classification model. This step looks at the feature vector from the previous step and predicts the class of the image. Pretend that you are the classifier model for a few minutes, and let’s go through the classification process. You look at the list of features in the feature vector one by one and try to determine what’s in the image:\na. First you see a wheel feature; could this be a car, a motorcycle, or a dog? Clearly it is not a dog, because dogs don’t have wheels (at least, normal dogs, not robots). Then this could be an image of a car or a motorcycle.\nb. You move on to the next feature, the headlights. There is a higher probability that this is a motorcycle than a car.\nc. The next feature is rear mudguards—again, there is a higher probability that it is a motorcycle.\nFigure 1.14 Using the machine learning model to predict the probability of the motorcycle object from the motorcycle, car, and dog classes: P(motorcycle) = 0.85, P(car) = 0.14, P(dog) = 0.01' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 38}
|
37
|
page_content='19 Image input\ndThe object has only two wheels; this is closer to a motorcycle.\neAnd you keep going through all the features like the body shape, pedal, and\nso on, until you arrive at a best guess of the object in the image.\nThe output of this process is the probability of each class. As you can see in our exam-\nple, the dog has the lowest probability, 1%, whereas there is an 85% probability that\nthis is a motorcycle. You can see that, although the model was able to predict the right\nclass with the highest probability, it is still a little confused about distinguishing\nbetween cars and motorcycles—it predicted that there is a 14% chance this is an\nimage of a car. Since we know that it is a motorcycle, we can say that our ML classifica-\ntion algorithm is 85% accurate. Not bad! To improve this accuracy, we may need to do\nmore of step 1 (acquire more training images), or step 2 (more processing to remove\nnoise), or step 3 (extract better features), or step 4 (change the classifier algorithm\nand tune some hyperparameters), or even allow more training time. The many differ-\nent approaches we can take to improve the performance of our model all lie in one or\nmore of the pipeline steps. \n That was the big picture of how images flow through the CV pipeline. Next, we’ll\nzoom in one level deeper on each of the pipeline steps.\n1.4 Image input\nIn CV applications, we deal with images or video data. Let’s talk about grayscale and\ncolor images for now, and in later chapters, we will talk about videos, since videos are\njust stacked sequential frames of images.\n1.4.1 Image as functions\nAn image can be represented as a function of two variables x and y, which define a two-\ndimensional area. A digital image is made of a grid of pixels. The pixel is the raw build-\ning block of an image. Every image consists of a set of pixels in which their values rep-\nresent the intensity of light that appears in a given place in the image. Let’s take a look\nat the motorcycle example again after applying the pixel grid to it (figure 1.15).\nGrayscale image (32 × 16)\nF(20, 7) = 0\nBlack pixel\nF(12, 13) = 255\nWhite pixelx 31 0\n0\ny\n15\nF(18, 9) = 190\nGray pixel\nFigure 1.15 Images consists of raw \nbuilding blocks called pixels . The pixel \nvalues represent the intensity of light that \nappears in a given place in the image.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 39}
|
38
|
page_content='20 CHAPTER 1 Welcome to computer vision\nThe image in figure 1.15 has a size of 32 × 16. This means the dimensions of the image are 32 pixels wide and 16 pixels tall. The x-axis goes from 0 to 31, and the y-axis from 0 to 15. Overall, the image has 512 (32 × 16) pixels. In this grayscale image, each pixel contains a value that represents the intensity of light at that specific pixel. The pixel values range from 0 to 255. Since the pixel value represents the intensity of light, the value 0 represents very dark pixels (black), 255 is very bright (white), and the values in between represent the intensity on the grayscale.\nYou can see that the image coordinate system is similar to the Cartesian coordinate system: images are two-dimensional and lie on the x-y plane. The origin (0, 0) is at the top left of the image. To represent a specific pixel, we use the following notation: F as a function, and x, y as the location of the pixel in x- and y-coordinates. For example, the pixel located at x = 12 and y = 13 is white; this is represented by the following function: F(12, 13) = 255. Similarly, the pixel (20, 7) that lies on the front of the motorcycle is black, represented as F(20, 7) = 0.\nGrayscale => F(x, y) gives the intensity at position (x, y)\nThat was for grayscale images. How about color images?\nIn color images, instead of representing the value of the pixel by just one number, the value is represented by three numbers representing the intensity of each color in the pixel. In an RGB system, for example, the value of the pixel is represented by three numbers: the intensity of red, the intensity of green, and the intensity of blue. There are other color systems for images, like HSV and Lab. All follow the same concept when representing the pixel value (more on color images soon). Here is the function representing color images in the RGB system:\nColor image in RGB => F(x, y) = [ red(x, y), green(x, y), blue(x, y) ]\nThinking of an image as a function is very useful in image processing. We can think of an image as a function F(x, y) and operate on it mathematically to transform it into a new image function G(x, y). Let’s take a look at the image transformation examples in table 1.1.\nTable 1.1 Image transformation example functions\nApplication: Darken the image. Transformation: G(x, y) = 0.5 * F(x, y)\nApplication: Brighten the image. Transformation: G(x, y) = 2 * F(x, y)\nApplication: Move an object down 150 pixels. Transformation: G(x, y) = F(x, y + 150)\nApplication: Remove the gray in an image to transform it into black and white. Transformation: G(x, y) = { 0 if F(x, y) < 130, 255 otherwise }' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 40}
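The transformations in table 1.1 can be written directly as array operations. The sketch below assumes the grayscale image is stored as a NumPy array indexed as F[y, x]; clipping the brightened values to the 0–255 range is an added assumption so the result stays a valid image.

import numpy as np

# Toy 16 x 32 grayscale image: rows are y (0..15), columns are x (0..31).
F = np.random.randint(0, 256, size=(16, 32)).astype(np.float32)

darker = 0.5 * F                              # G(x, y) = 0.5 * F(x, y)
brighter = np.clip(2.0 * F, 0, 255)           # G(x, y) = 2 * F(x, y), kept inside [0, 255]
black_and_white = np.where(F < 130, 0, 255)   # 0 if F(x, y) < 130, 255 otherwise

print(F[13, 12])   # the pixel at x = 12, y = 13, i.e., F(12, 13)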
|
39
|
page_content='21 Image input\n1.4.2 How computers see images\nWhen we look at an image, we see objects, landscape, colors, and so on. But that’s not the case with computers. Consider figure 1.16. Your human brain can process it and immediately know that it is a picture of a motorcycle. To a computer, the image looks like a 2D matrix of the pixels’ values, which represent intensities across the color spectrum. There is no context here, just a massive pile of data.\nThe image in figure 1.16 is of size 24 × 24. This size indicates the width and height of the image: there are 24 pixels horizontally and 24 vertically. That means there is a total of 576 (24 × 24) pixels. If the image is 700 × 500, then the dimensionality of the matrix will be (700, 500), where each pixel in the matrix represents the intensity of brightness in that pixel. Zero represents black, and 255 represents white.\nFigure 1.16 A computer sees images as matrices of values (“what we see” versus “what computers see”). The values represent the intensity of the pixels across the color spectrum. For example, grayscale images range between pixel values of 0 for black and 255 for white.\n1.4.3 Color images\nIn grayscale images, each pixel represents the intensity of only one color, whereas in the standard RGB system, color images have three channels (red, green, and blue). In other words, color images are represented by three matrices: one represents the intensity of red in the pixel, one represents green, and one represents blue (figure 1.17).\nAs you can see in figure 1.17, the color image is composed of three channels: red, green, and blue. Now the question is, how do computers see this image? Again, they see the matrix, unlike grayscale images, where we had only one channel. In this case, we will have three matrices stacked on top of each other; that’s why it’s a 3D matrix. The dimensionality of 700 × 700 color images is (700, 700, 3). Let’s say the first matrix represents the red channel; then each element of that matrix represents an intensity of red color in that pixel, and likewise with green and blue. Each pixel in a color' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 41}
|
40
|
page_content='22 CHAPTER 1 Welcome to computer vision\nimage has three numbers (0 to 255) associated with it. These numbers represent the intensity of red, green, and blue color in that particular pixel.\nIf we take the pixel (0, 0) as an example, we will see that it represents the top-left pixel of the image of green grass. When we view this pixel in the color image, it looks like figure 1.18. The example in figure 1.19 shows some shades of the color green and their RGB values.\nFigure 1.17 Color images are represented by red, green, and blue channels: channel 1 holds the red intensity values, channel 2 the green intensity values, and channel 3 the blue intensity values. Matrices can be used to indicate those colors’ intensity; for example, F(0, 0) = [11, 102, 35].\nFigure 1.18 An image of green grass is actually made of three colors of varying intensity: red 11 + green 102 + blue 35 = forest green (11, 102, 35).\nFigure 1.19 Different shades of green mean different intensities of the three image colors (red, green, blue): forest green (HEX #0B6623, RGB 11 102 35), olive green (HEX #708238, RGB 112 130 56), jungle green (HEX #29AB87, RGB 41 171 135), mint green (HEX #98FB98, RGB 152 251 152), lime green (HEX #C7EA46, RGB 199 234 70), and jade green (HEX #00A86B, RGB 0 168 107).' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 42}
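A color image is simply a 3D array of shape (height, width, 3), so slicing out a channel or reading the three values of a single pixel is one line of NumPy. The forest-green values below come from figure 1.18; building the tiny image by hand is only for illustration.

import numpy as np

# A tiny 2 x 2 color image filled with forest green, RGB = (11, 102, 35).
color_image = np.zeros((2, 2, 3), dtype=np.uint8)
color_image[:, :] = [11, 102, 35]

red_channel = color_image[:, :, 0]    # first matrix: red intensities
green_channel = color_image[:, :, 1]  # second matrix: green intensities
blue_channel = color_image[:, :, 2]   # third matrix: blue intensities

print(color_image.shape)   # (2, 2, 3): height, width, channels
print(color_image[0, 0])   # F(0, 0) = [ 11 102  35]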
|
41
|
page_content='23 Image preprocessing\n1.5 Image preprocessing\nIn machine learning (ML) projects, you usually go through a data preprocessing or\ncleaning step. As an ML engineer, you will spend a good amount of time cleaning up\nand preparing the data before you build your learning model. The goal of this step is\nto make your data ready for the ML model to make it easier to analyze and process\ncomputationally. The same thing is true with images. Based on the problem you are\nsolving and the dataset in hand, some data massaging is required before you feed your\nimages to the ML model. \n Image processing could involve simple tasks like image resizing. Later, you will learn\nthat in order to feed a dataset of images to a convolutional network, the images all have\nto be the same size. Other processing tasks can take place, like geometric and color\ntransformation, converting color to grayscale, and many more. We will cover various\nimage-processing techniques throughout the chapters of this book and in the projects.\n The acquired data is usually messy and comes from different sources. To feed it to\nthe ML model (or neural network), it needs to be standardized and cleaned up. Pre-\nprocessing is used to conduct steps that will reduce the complexity and increase the\naccuracy of the applied algorithm. We can’t write a unique algorithm for each of the\nconditions in which an image is taken; thus, when we acquire an image, we convert it\ninto a form that would allow a general algorithm to solve it. The following subsections\ndescribe some data-preprocessing techniques.\n1.5.1 Converting color images to grayscale to reduce \ncomputation complexity\nSometimes you will find it useful to remove unnecessary information from your\nimages to reduce space or computational complexity. For example, suppose you want\nto convert your colored images to grayscale, because for many objects, color is notHow do computers see color?\nComputers see an image as matrices. Grayscale images have one channel (gray);\nthus, we can represent grayscale images in a 2D matrix, where each element rep-\nresents the intensity of brightness in that particular pixel. Remember, 0 means black\nand 255 means white. Grayscale images have one channel, whereas color images\nhave three channels: red, green, and blue. We can represent color images in a 3D\nmatrix where the depth is three.\nWe’ve also seen how images can be treated as functions of space. This concept\nallows us to operate on images mathematically and change or extract information\nfrom them. Treating images as functions is the basis of many image-processing tech-\nniques, such as converting color to grayscale or scaling an image. Each of these\nsteps is just operating mathematical equations to transform an image pixel by pixel.\n\uf0a1Grayscale: f(x, y) gives the intensity at position ( x, y)\n\uf0a1Color image: f(x, y) = [ red ( x, y), green ( x, y), blue ( x, y) ]' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 43}
|
42
|
page_content='24 CHAPTER 1 Welcome to computer vision\nnecessary to recognize and interpret an image. Grayscale can be good enough for recognizing certain objects. Since color images contain more information than black-and-white images, they can add unnecessary complexity and take up more space in memory. Remember that color images are represented in three channels, which means that converting them to grayscale reduces the amount of pixel data that needs to be processed, from three values per pixel to one (figure 1.20). A short code sketch at the end of this section shows one way to do the conversion.\nIn this example, you can see how patterns of brightness and darkness (intensity) can be used to define the shape and characteristics of many objects. However, in other applications, color is important to define certain objects, like skin cancer detection, which relies heavily on skin color (red rashes).\n•Standardizing images—As you will see in chapter 3, one important constraint that exists in some ML algorithms, such as CNNs, is the need to resize the images in your dataset to unified dimensions. This implies that your images must be preprocessed and scaled to have identical widths and heights before being fed to the learning algorithm.\n•Data augmentation—Another common preprocessing technique involves augmenting the existing dataset with modified versions of the existing images. Scaling, rotations, and other affine transformations are typically used to enlarge your dataset and expose the neural network to a wide variety of variations of\nFigure 1.20 Converting color images to grayscale reduces the amount of pixel data that needs to be processed. This can be a good approach for applications that do not rely heavily on the color information lost in the conversion. The labeled objects in the figure (bicycle, clouds, pedestrian) are still recognizable in grayscale.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 44}
|
43
|
page_content='25 Image preprocessing\nyour images. This makes it more likely that your model will recognize objects when they appear in any form and shape (the code sketch at the end of this section also shows a few simple augmentations). Figure 1.21 shows an example of image augmentation applied to a butterfly image.\n•Other techniques—Many more preprocessing techniques are available to get your images ready for training an ML model. In some projects, you might need to remove the background color from your images to reduce noise. Other projects might require that you brighten or darken your images. In short, any adjustments that you need to apply to your dataset are part of preprocessing. You will select\nWhen is color important?\nConverting an image to grayscale might not be a good decision for some problems. There are a number of applications for which color is very important: for example, building a diagnostic system to identify red skin rashes in medical images. This application relies heavily on the intensity of the red color in the skin. Removing colors from the image will make it harder to solve this problem. In general, color images provide very helpful information in many medical applications.\nAnother example of the importance of color in images is lane-detection applications in a self-driving car, where the car has to identify the difference between yellow and white lines, because they are treated differently. Grayscale images do not provide enough information to distinguish between the yellow and white lines.\nThe rule of thumb to identify the importance of colors in your problem is to look at the image with the human eye. If you are able to identify the object you are looking for in a gray image, then you probably have enough information to feed to your model. If not, then you definitely need more information (colors) for your model. The same rule can be applied for most other preprocessing techniques that we will discuss.\nYellow and white lane lines: a grayscale-based image processor cannot differentiate between them.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 45}
|
44
|
page_content='26 CHAPTER 1 Welcome to computer vision\nthe appropriate processing techniques based on the dataset at hand and the problem you are solving. You will see many image-processing techniques throughout this book, helping you build your intuition of which ones you need when working on your own projects.\nNo free lunch theorem\nThis is a phrase that was introduced by David Wolpert and William Macready in “No Free Lunch Theorems for Optimization” (IEEE Transactions on Evolutionary Computation 1, no. 1 [1997]: 67–82). You will often hear this said when a team is working on an ML project. It means that no one prescribed recipe fits all models. When working on ML projects, you will need to make many choices like building your neural network architecture, tuning hyperparameters, and applying the appropriate data preprocessing techniques. While there are some rule-of-thumb approaches to tackle certain problems, there is really no single recipe that is guaranteed to work well in all situations.\nFigure 1.21 Image-augmentation techniques (de-texturized, de-colorized, edge enhanced, salient edge map, flip/rotate) create modified versions of the original input image to provide more examples for the ML model to learn from.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 46}
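As promised above, here is a short sketch of the two preprocessing steps just discussed. The grayscale conversion uses the common luminance weights 0.299, 0.587, 0.114 (an assumption; the book does not prescribe a formula), and the augmentations are simple NumPy flips and rotations rather than any particular library's augmentation API.

import numpy as np

def to_grayscale(rgb_image):
    # Weighted sum of the three channels; weights are the common luminance coefficients.
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb_image[..., :3] @ weights).astype(np.uint8)

def augment(image):
    # Return a few modified copies of the input image.
    return [
        np.fliplr(image),   # horizontal flip
        np.flipud(image),   # vertical flip
        np.rot90(image),    # 90-degree rotation
    ]

rgb = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
gray = to_grayscale(rgb)
print(gray.shape)            # (64, 64): one channel instead of three
print(len(augment(gray)))    # 3 extra training examples from one image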
|
45
|
page_content='27 Feature extraction\n1.6 Feature extraction\nFeature extraction is a core component of the CV pipeline. In fact, the entire DL model\nworks around the idea of extracting useful features that clearly define the objects in\nthe image. So we’ll spend a little more time here, because it is important that you\nunderstand what a feature is, what a vector of features is, and why we extract features.\nDEFINITION A feature in machine learning is an individual measurable prop-\nerty or characteristic of an observed phenomenon. Features are the input that\nyou feed to your ML model to output a prediction or classification. Suppose\nyou want to predict the price of a house: your input features (properties)\nmight include square_foot , number_of_rooms , bathrooms , and so on, and\nthe model will output the predicted price based on the values of your fea-\ntures. Selecting good features that clearly distinguish your objects increases\nthe predictive power of ML algorithms.\n1.6.1 What is a feature in computer vision?\nIn CV, a feature is a measurable piece of data in your image that is unique to that spe-\ncific object. It may be a distinct color or a specific shape such as a line, edge, or image\nsegment. A good feature is used to distinguish objects from one another. For example,\nif I give you a feature like a wheel and ask you to guess whether an object is a motorcy-\ncle or a dog, what would your guess be? A motorcycle. Correct! In this case, the wheel\nis a strong feature that clearly distinguishes between motorcycles and dogs. However,\nif I give you the same feature (a wheel) and ask you to guess whether an object is a\nbicycle or a motorcycle, this feature is not strong enough to distinguish between those\nobjects. You need to look for more features like a mirror, license plate, or maybe a\npedal, that collectively describe an object. In ML projects, we want to transform the\nraw data (image) into a feature vector to show to our learning algorithm, which can\nlearn the characteristics of the object (figure 1.22).\n In the figure, we feed the raw input image of a motorcycle into a feature extraction\nalgorithm. Let’s treat the feature extraction algorithm as a black box for now, and we\nwill come back to it. For now, we need to know that the extraction algorithm produces\na vector that contains a list of features. This feature vector is a 1D array that makes a\nrobust representation of the object.You must make certain assumptions about the dataset and the problem you are try-\ning to solve. For some datasets, it is best to convert the colored images to grayscale,\nwhile for other datasets, you might need to keep or adjust the color images.\nThe good news is that, unlike traditional machine learning, DL algorithms require min-\nimum data preprocessing because, as you will see soon, neural networks do most of\nthe heavy lifting in processing an image and extracting features.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 47}
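To make the definition of a feature concrete, here is a tiny sketch of the house-price example from the definition above: the feature vector holds square_foot, number_of_rooms, and bathrooms, and a hypothetical linear model maps those features to a predicted price. The weights and bias are made up purely for illustration; real values would come from training.

import numpy as np

# Feature vector: [square_foot, number_of_rooms, bathrooms]
features = np.array([1500.0, 3.0, 2.0])

# Hypothetical learned weights and bias (illustrative only).
weights = np.array([200.0, 10000.0, 5000.0])
bias = 50000.0

predicted_price = np.dot(weights, features) + bias
print(predicted_price)   # 300000 + 30000 + 10000 + 50000 = 390000.0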
|
46
|
page_content='28 CHAPTER 1Welcome to computer vision\n1.6.2 What makes a good (useful) feature?\nMachine learning models are only as good as the features you provide. That means\ncoming up with good features is an important job in building ML models. But what\nmakes a good feature? And how can you tell? Feature generalizability \nIt is important to point out that figure 1.22 reflects features extracted from just one\nmotorcycle. A very important characteristic of a feature is repeatability . The feature\nshould be able to detect motorcycles in general, not just this specific one. So in real-\nworld problems, a feature is not an exact copy of a piece of the input image.\nIf we take the wheel feature, for example, the feature doesn’t look exactly like the\nwheel of one particular motorcycle. Instead, it looks like a circular shape with some\npatterns that identify wheels in all images in the training dataset. When the feature\nextractor sees thousands of images of motorcycles, it recognizes patterns that define\nwheels in general, regardless of where they appear in the image and what type of\nmotorcycle they are part of. Input data Features\nFeature extraction\nalgorithm\nFigure 1.22 Example input image fed to a feature-extraction algorithm to find \npatterns within the image and create the feature vector\nFeature after looking\nat thousands of images\nFeature after looking\nat one image\nFeatures need to detect general patterns.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 48}
|
47
|
page_content='29 Feature extraction\n Let’s discuss this with an example. Suppose we want to build a classifier to tell the dif-\nference between two types of dogs: Greyhound and Labrador. Let’s take two features—\nthe dogs’ height and their eye color—and evaluate them (figure 1.23).\nLet’s begin with height. How useful do you think this feature is? Well, on average,\nGreyhounds tend to be a couple of inches taller than Labradors, but not always. There\nis a lot of variation in the dog world. So let’s evaluate this feature across different val-\nues in both breeds’ populations. Let’s visualize the height distribution on a toy exam-\nple in the histogram in figure 1.24.\nFrom the histogram, we can see that if the dog’s height is 20 inches or less, there is\nmore than an 80% probability that the dog is a Labrador. On the other side of the his-\ntogram, if we look at dogs that are taller than 30 inches, we can be pretty confident\nGreyhound Labrador\nFigure 1.23 Example of Greyhound \nand Labrador dogs\n300\n250\n200\n150\n100\n50\n0Number of dogs\n10 15 20 25 30 35 40\nHeightLabrador\nGreyhound\nFigure 1.24 A visualization of the height distribution on a toy dogs dataset' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 49}
|
48
|
page_content='30 CHAPTER 1Welcome to computer vision\nthe dog is a Greyhound. Now, what about the data in the middle of the histogram\n(heights from 20 to 30 inches)? We can see that the probability of each type of dog is\npretty close. The thought process in this case is as follows:\nif height ≤ 20:\n return higher probability to Labrador\nif height ≥ 30:\n return higher probability to Greyhound\nif 20 < height < 30:\n look for other features to classify the object\nSo the height of the dog in this case is a useful feature because it helps (adds informa-\ntion) in distinguishing between both dog types. We can keep it. But it doesn’t distin-\nguish between Greyhounds and Labradors in all cases, which is fine. In ML projects,\nthere is usually no one feature that can classify all objects on its own. That’s why, in\nmachine learning, we almost always need multiple features, where each feature cap-\ntures a different type of information. If only one feature would do the job, we could\njust write if-else statements instead of bothering with training a classifier. \nTIP Similar to what we did earlier with color conversion (color versus gray-\nscale), to figure out which features you should use for a specific problem, do a\nthought experiment. Pretend you are the classifier. If you want to differentiate\nbetween Greyhounds and Labradors, what information do you need to know?\nYou might ask about the hair length, the body size, the color, and so on. \nFor another quick example of a non-useful feature to drive this idea home, let’s look\nat dog eye color. For this toy example, imagine that we have only two eye colors, blue\nand brown. Figure 1.25 shows what a histogram might look like for this example.\nBlue eyes Brown eyesLabrador\nGreyhound\nFigure 1.25 A visualization of \nthe eye color distribution in a toy \ndogs dataset' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 50}
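The height-based thought process above can be written as a small Python function. The probability values returned here are illustrative placeholders (roughly matching the toy histogram), not numbers from the book.

def classify_by_height(height_inches):
    # Height alone separates the easy cases; the middle range needs more features.
    if height_inches <= 20:
        return {"labrador": 0.8, "greyhound": 0.2}
    if height_inches >= 30:
        return {"labrador": 0.1, "greyhound": 0.9}
    return "look for other features to classify the object"

print(classify_by_height(18))   # mostly Labrador
print(classify_by_height(33))   # mostly Greyhound
print(classify_by_height(25))   # ambiguous: need another feature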
|
49
|
page_content='31 Feature extraction\nIt is clear that for most values, the distribution is about 50/50 for both types. So practi-\ncally, this feature tells us nothing, because it doesn’t correlate with the type of dog.\nHence, it doesn’t distinguish between Greyhounds and Labradors.\n1.6.3 Extracting features (handcrafted vs. automatic extracting)\nThis is a large topic in machine learning that could take up an entire book. It’s typi-\ncally described in the context of a topic called feature engineering. In this book, we are\nonly concerned with extracting features in images. So I’ll touch on the idea very\nquickly in this chapter and build on it in later chapters.\nTRADITIONAL MACHINE LEARNING USING HANDCRAFTED FEATURES\nIn traditional ML problems, we spend a good amount of time in manual feature selec-\ntion and engineering. In this process, we rely on our domain knowledge (or partner\nwith domain experts) to create features that make ML algorithms work better. We\nthen feed the produced features to a classifier like a support vector machine (SVM) or\nAdaBoost to predict the output (figure 1.26). Some of the handcrafted feature sets\nare these: \n\uf0a1Histogram of oriented gradients (HOG)\n\uf0a1Haar Cascades \n\uf0a1Scale-invariant feature transform (SIFT)\n\uf0a1Speeded-Up Robust Feature (SURF)What makes a good feature for object recognition?\nA good feature will help us recognize an object in all the ways it may appear. Charac-\nteristics of a good feature follow:\n\uf0a1Identifiable\n\uf0a1Easily tracked and compared\n\uf0a1Consistent across different scales, lighting conditions, and viewing angles\n\uf0a1Still visible in noisy images or when only part of an object is visible\nInputFeature extraction\n(handcrafted)Learning algorithm\nSVM or AdaBoost Output\nCar\nNot a car\nFigure 1.26 Traditional machine learning algorithms require handcrafted feature \nextraction.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 51}
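The handcrafted pipeline in figure 1.26 might look like the following sketch, assuming scikit-image and scikit-learn are installed. The tiny random "dataset" and alternating labels are placeholders so the snippet runs; in practice X_images and labels would be real photos and their classes.

import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

# Placeholder data: 20 random 64 x 64 grayscale "images" with alternating labels (car / not a car).
X_images = np.random.rand(20, 64, 64)
labels = np.array([0, 1] * 10)

# Handcrafted feature extraction: one HOG feature vector per image.
X_features = np.array([
    hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    for img in X_images
])

# Classical learning algorithm on top of the handcrafted features.
classifier = SVC(kernel="linear")
classifier.fit(X_features, labels)
print(classifier.predict(X_features[:3]))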
|
50
|
page_content='32 CHAPTER 1Welcome to computer vision\nDEEP LEARNING USING AUTOMATICALLY EXTRACTED FEATURES\nIn DL, however, we do not need to manually extract features from the image. The net-\nwork extracts features automatically and learns their importance on the output by\napplying weights to its connections. You just feed the raw image to the network, and\nwhile it passes through the network layers, the network identifies patterns within the\nimage with which to create features (figure 1.27). Neural networks can be thought of\nas feature extractors plus classifiers that are end-to-end trainable, as opposed to tradi-\ntional ML models that use handcrafted features.\nHow do neural networks distinguish useful features from non-useful features?\nYou might get the impression that neural networks only understand the most useful\nfeatures, but that’s not entirely true. Neural networks scoop up all the features avail-\nable and give them random weights. During the training process, the neural network\nadjusts these weights to reflect their importance and how they should impact the out-\nput prediction. The patterns with the highest appearance frequency will have higher\nweights and are considered more useful features. Features with the lowest weights\nwill have very little impact on the output. This learning process will be discussed in\ndeeper detail in the next chapter.Input Feature extraction and classification Output\nCar\nNot a car\nFigure 1.27 A deep neural network passes the input image through its layers to automatically \nextract features and classify the object. No handcrafted features are needed. \nOutputFeaturesW\nW\nWW\n2\n3\n41Weights\nNeuronX2\nX3X1\n...\nXn\nWeighting different features to reflect their importance in identifying the object' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 52}
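As a contrast with the handcrafted pipeline, the following is a minimal sketch of an end-to-end trainable network, assuming TensorFlow/Keras is available (the book has not introduced a specific framework at this point). The layer sizes and the two-class output are arbitrary choices for illustration; the point is only that raw images go in and class scores come out, with no separate feature-extraction step.

import tensorflow as tf

# Raw images in, class probabilities out: the network learns its own features.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),   # early layers learn simple patterns
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),   # later layers learn patterns of patterns
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),     # car / not a car
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()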
|
51
|
page_content='33 Classifier learning algorithm\nWHY USE FEATURES ?\nThe input image has too much extra information that is not necessary for classifica-\ntion. Therefore, the first step after preprocessing the image is to simplify it by extract-\ning the important information and throwing away nonessential information. By\nextracting important colors or image segments, we can transform complex and large\nimage data into smaller sets of features. This makes the task of classifying images\nbased on their features simpler and faster. \n Consider the following example. Suppose we have a dataset of 10,000 images of\nmotorcycles, each of 1,000 width by 1,000 height. Some images have solid backgrounds,\nand others have busy backgrounds of unnecessary data. When these thousands of\nimages are fed to the feature extraction algorithms, we lose all the unnecessary data that\nis not important to identify motorcycles, and we only keep a consolidated list of useful\nfeatures that can be fed directly to the classifier (figure 1.28). This process is a lot sim-\npler than having the classifier look at the raw dataset of 10,000 images to learn the\nproperties of motorcycles.\n1.7 Classifier learning algorithm\nHere is what we have discussed so far regarding the classifier pipeline: \n\uf0a1Input image —We’ve seen how images are represented as functions, and that com-\nputers see images as a 2D matrix for grayscale images and a 3D matrix (three\nchannels) for colored images. \n\uf0a1Image preprocessing —We discussed some image-preprocessing techniques to clean\nup our dataset and make it ready as input to the ML algorithm. \n\uf0a1Feature extraction —We converted our large dataset of images into a vector of use-\nful features that uniquely describe the objects in the image.Feature\nextractionFeatures vectorImages dataset of 10,000 images\n... ... ...Classifier\nalgorithm\nFigure 1.28 Extracting and consolidating features from thousands of images in one feature vector \nto be fed to the classifier' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 53}
|
52
|
page_content='34 CHAPTER 1Welcome to computer vision\nNow it is time to feed the extracted feature vector to the classifier to output a class\nlabel for the images (for example, motorcycle or otherwise). \n As we discussed in the previous section, the classification task is done one of these\nways: traditional ML algorithms like SVMs, or deep neural network algorithms like\nCNNs. While traditional ML algorithms might get decent results for some problems,\nCNNs truly shine in processing and classifying images in the most complex problems.\n In this book, we will discuss neural networks and how they work in detail. For now,\nI want you to know that neural networks automatically extract useful features from your\ndataset, and they act as a classifier to output class labels for your images. Input images\npass through the layers of the neural network to learn their features layer by layer\n(figure 1.29). The deeper your network is (the more layers), the more it will learn the\nfeatures of the dataset: hence the name deep learning . More layers come with some\ntrade-offs that we will discuss in the next two chapters. The last layer of the neural net-\nwork usually acts as the classifier that outputs the class label. \nSummary\n\uf0a1Both human and machine vision systems contain two basic components: a sens-\ning device and an interpreting device. \n\uf0a1The interpreting process consists of four steps: input the data, preprocess it, do\nfeature extraction, and produce a machine learning model.Deep learning classifier\nNetwork layers Input image\n...MotorcycleOutput\nNot motorcycle... ... ...... ...\nFeature extraction layers\n(The input image flows through the\nnetwork layers to learn its features.\nEarly layers detect patterns in the\nimage, then later layers detect\npatterns within patterns, and so on,\nuntil it creates the feature vector.)Classification layer\n(Looks at the feature vector\nextracted by the previous layer\nand fires the upper node if it sees\nthe features of a motorcycle or\nthe lower node if it doesn’t.)\nFigure 1.29 Input images pass through the layers of a neural network so it can learn features \nlayer by layer.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 54}
|
53
|
page_content='35 Summary\n\uf0a1An image can be represented as a function of x and y. Computers see an image\nas a matrix of pixel values: one channel for grayscale images and three channels\nfor color images.\n\uf0a1Image-processing techniques vary for each problem and dataset. Some of these\ntechniques are converting images to grayscale to reduce complexity, resizing\nimages to a uniform size to fit your neural network, and data augmentation. \n\uf0a1Features are unique properties in the image that are used to classify its objects.\nTraditional ML algorithms use several feature-extraction methods.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 55}
|
54
|
page_content='36Deep learning\nand neural networks\nIn the last chapter, we discussed the computer vision (CV) pipeline components:\nthe input image, preprocessing, extracting features, and the learning algorithm\n(classifier). We also discussed that in traditional ML algorithms, we manually\nextract features that produce a vector of features to be classified by the learning\nalgorithm, whereas in deep learning (DL), neural networks act as both the feature\nextractor and the classifier. A neural network automatically recognizes patterns and\nextracts features from the image and classifies them into labels (figure 2.1).\n In this chapter, we will take a short pause from the CV context to open the DL\nalgorithm box from figure 2.1. We will dive deeper into how neural networks\nlearn features and make predictions. Then, in the next chapter, we will comeThis chapter covers\n\uf0a1Understanding perceptrons and multilayer \nperceptrons\n\uf0a1Working with the different types of activation \nfunctions\n\uf0a1Training networks with feedforward, error \nfunctions, and error optimization\n\uf0a1Performing backpropagation' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 56}
|
55
|
page_content='37 Understanding perceptrons\nback to CV applications with one of the most popular DL architectures: convolutional\nneural networks. \n The high-level layout of this chapter is as follows:\n\uf0a1We will begin with the most basic component of the neural network: the perceptron ,\na neural network that contains only one neuron.\n\uf0a1Then we will move on to a more complex neural network architecture that con-\ntains hundreds of neurons to solve more complex problems. This network is\ncalled a multilayer perceptron (MLP), where neurons are stacked in hidden layers .\nHere, you will learn the main components of the neural network architecture:\nthe input layer, hidden layers, weight connections, and output layer.\n\uf0a1You will learn that the network training process consists of three main steps:\n1Feedforward operation\n2Calculating the error\n3Error optimization: using backpropagation and gradient descent to select\nthe most optimum parameters that minimize the error function\nWe will dive deep into each of these steps. You will see that building a neural network\nrequires making necessary design decisions: choosing an optimizer, cost function, and\nactivation functions, as well as designing the architecture of the network, including\nhow many layers should be connected to each other and how many neurons should be\nin each layer. Ready? Let’s get started!\n2.1 Understanding perceptrons\nLet’s take a look at the artificial neural network (ANN) diagram from chapter 1 (fig-\nure 2.2). You can see that ANNs consist of many neurons that are structured in layers\nto perform some kind of calculations and predict an output. This architecture can beFeature\nextractorFeatures vectorTraditional machine learning flow\nTraditional ML\nalgorithmOutput Input\nDeep learning algorithmDeep learning flow\nOutput Input\nFigure 2.1 Traditional ML algorithms require manual feature extraction. A deep neural network \nautomatically extracts features by passing the input image through its layers.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 57}
|
56
|
page_content='38 CHAPTER 2Deep learning and neural networks\nalso called a multilayer perceptron , which is more intuitive because it implies that the net-\nwork consists of perceptrons structured in multiple layers. Both terms, MLP and ANN,\nare used interchangeably to describe this neural network architecture.\n In the MLP diagram in figure 2.2, each node is called a neuron . We will discuss how\nMLP networks work soon, but first let’s zoom in on the most basic component of the\nneural network: the perceptron. Once you understand how a single perceptron works,\nit will become more intuitive to understand how multiple perceptrons work together\nto learn data features.\n2.1.1 What is a perceptron?\nThe most simple neural network is the perceptron, which consists of a single neuron.\nConceptually, the perceptron functions in a manner similar to a biological neuron\n(figure 2.3). A biological neuron receives electrical signals from its dendrites , modu-\nlates the electrical signals in various amounts, and then fires an output signal through\nits synapses only when the total strength of the input signals exceeds a certain thresh-\nold. The output is then fed to another neuron, and so forth.\n To model the biological neuron phenomenon, the artificial neuron performs two\nconsecutive functions: it calculates the weighted sum of the inputs to represent the total\nstrength of the input signals, and it applies a step function to the result to determine\nwhether to fire the output 1 if the signal exceeds a certain threshold or 0 if the signal\ndoesn’t exceed the threshold. \n As we discussed in chapter 1, not all input features are equally useful or important.\nTo represent that, each input node is assigned a weight value, called its connection\nweight , to reflect its importance.InputArtificial neural network (ANN)\nLayers of neuronsOutput\nFigure 2.2 An artificial neural network consists of layers of nodes, or neurons connected with edges.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 58}
|
57
|
page_content='39 Understanding perceptrons\nIn the perceptron diagram in figure 2.4, you can see the following:\n\uf0a1Input vector —The feature vector that is fed to the neuron. It is usually denoted\nwith an uppercase X to represent a vector of inputs ( x1, x2, . . ., xn).\n\uf0a1Weights vector —Each x1 is assigned a weight value w1 that represents its impor-\ntance to distinguish between different input datapoints.Connection weights\nNot all input features are equally important (or useful) features. Each input feature\n(x1) is assigned its own weight ( w1) that reflects its importance in the decision-making\nprocess. Inputs assigned greater weight have a greater effect on the output. If the\nweight is high, it amplifies the input signal; and if the weight is low, it diminishes the\ninput signal. In common representations of neural networks, the weights are repre-\nsented by lines or edges from the input node to the perceptron. \nFor example, if you are predicting a house price based on a set of features like size,\nneighborhood, and number of rooms, there are three input features ( x1, x2, and x3).\nEach of these inputs will have a different weight value that represents its effect on\nthe final decision. For example, if the size of the house has double the effect on the\nprice compared with the neighborhood, and the neighborhood has double the effect\ncompared with the number of rooms, you will see weights something like 8, 4, and\n2, respectively. \nHow the connection values are assigned and how the learning happens is the core\nof the neural network training process. This is what we will discuss for the rest of this\nchapter. Biological neuron Artificial neuron\nNeuron\nFlow of\ninformationDendrites\n(information coming\nfrom other neurons)\nSynapses\n(information output\nto other neurons)fx( ) Output\nxx\nn2\n...x1Input Neuron\nFigure 2.3 Artificial neurons were inspired by biological neurons. Different neurons are connected \nto each other by synapses that carry information.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 59}
|
58
|
page_content='40 CHAPTER 2 Deep learning and neural networks\n\uf0a1Neuron functions—The calculations performed within the neuron to modulate the input signals: the weighted sum and step activation function.\n\uf0a1Output—Controlled by the type of activation function you choose for your network. There are different activation functions, as we will discuss in detail in this chapter. For a step function, the output is either 0 or 1. Other activation functions produce probability output or float numbers. The output node represents the perceptron prediction.\nLet’s take a deeper look at the weighted sum and step function calculations that happen inside the neuron.\nWEIGHTED SUM FUNCTION\nAlso known as a linear combination, the weighted sum function is the sum of all inputs multiplied by their weights, and then added to a bias term. This function produces a straight line, represented in the following equation:\nz = Σ xi · wi + b (where b is the bias)\nz = x1 · w1 + x2 · w2 + x3 · w3 + … + xn · wn + b\nHere is how we implement the weighted sum in Python:\nz = np.dot(w.T, X) + b\nX is the input vector (uppercase X), w is the weights vector, and b is the y-intercept (bias).\nFigure 2.4 Input vectors are fed to the neuron, with weights assigned to represent importance. The calculations performed within the neuron are the weighted sum and the activation function.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 60}
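Here is the weighted sum as a complete, runnable snippet. The three inputs and their weights are made-up numbers; with X and w stored as column vectors, np.dot(w.T, X) reduces to the same sum x1·w1 + x2·w2 + x3·w3.

import numpy as np

# Three inputs and their weights, as column vectors (made-up values).
X = np.array([[0.5], [0.3], [0.2]])   # input vector
w = np.array([[0.4], [0.7], [0.1]])   # weights vector
b = 0.05                              # bias (y-intercept)

# Weighted sum z = x1*w1 + x2*w2 + x3*w3 + b
z = np.dot(w.T, X) + b
print(z)   # [[0.48]] -> 0.5*0.4 + 0.3*0.7 + 0.2*0.1 + 0.05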
|
59
|
page_content='41 Understanding perceptrons\nWhat is a bias in the perceptron, and why do we add it?\nLet’s brush up our memory on some linear algebra concepts to help understand what’s happening under the hood. Here is the function of a straight line:\ny = mx + b   (the equation of a straight line, where b is the y-intercept)\nThe function of a straight line is represented by the equation (y = mx + b), where b is the y-intercept. To be able to define a line, you need two things: the slope of the line and a point on the line. The bias is that point on the y-axis. Bias allows you to move the line up and down on the y-axis to better fit the prediction with the data. Without the bias (b), the line always has to go through the origin point (0, 0), and you will get a poorer fit. To visualize the importance of bias, look at the graph in the figure and try to separate the circles from the star using a line that passes through the origin (0, 0). It is not possible.\nThe input layer can be given biases by introducing an extra input node that always has a value of 1, as you can see in the next figure. In neural networks, the value of the bias (b) is treated as an extra weight and is learned and adjusted by the neuron to minimize the cost function, as we will learn in the following sections of this chapter.\nThe input layer can be given biases by introducing an extra input that always has a value of 1: the extra input (fixed at 1) has its own weight w0 and feeds the net input function alongside the inputs x1 … xm and their weights w1 … wm.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 61}
|
60
|
page_content="42 CHAPTER 2Deep learning and neural networks\nSTEP ACTIVATION FUNCTION\nIn both artificial and biological neural networks, a neuron does not just output the\nbare input it receives. Instead, there is one more step, called an activation function ; this\nis the decision-making unit of the brain. In ANNs, the activation function takes the\nsame weighted sum input from before ( z = Σxi · wi + b) and activates (fires) the neuron\nif the weighted sum is higher than a certain threshold. This activation happens based\non the activation function calculations. Later in this chapter, we’ll review the different\ntypes of activation functions and their general purpose in the broader context of neu-\nral networks. The simplest activation function used by the perceptron algorithm is the\nstep function that produces a binary output (0 or 1). It basically says that if the\nsummed input ≥ 0, it “fires” (output = 1); else (summed input < 0), it doesn’t fire (out-\nput = 0) (figure 2.5).\nThis is how the step function looks in Python:\ndef step_function(z): \n if z <= 0:\n return 0\n else:\n return 11.0\n0.8\n0.6\n0.4\n0.2\n0.0\n–4 –3 –2 –1 0 1 2 3 4\nZStep function\nxi i•wb+ y = g x g z (), where is an activation function and is the weighted sum =Output =0 If\n1 Ifwx b ≤\nw x b >•\n•+0\n+0\nFigure 2.5 The step function produces a binary output (0 or 1). If the summed input ≥ 0, it “fires” \n(output = 1); else (summed input < 0) it doesn't fire (output = 0).\nz is the weighted \nsum = Σxi · wi + b" metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 62}
|
61
|
page_content='43 Understanding perceptrons\n2.1.2 How does the perceptron learn?\nThe perceptron uses trial and error to learn from its mistakes. It uses the weights as knobs, tuning their values up and down until the network is trained (figure 2.6). The perceptron’s learning logic goes like this:\n1 The neuron calculates the weighted sum and applies the activation function to make a prediction ŷ. This is called the feedforward process:\nŷ = activation(Σ xi · wi + b)\n2 It compares the output prediction with the correct label to calculate the error:\nerror = y – ŷ\n3 It then updates the weight. If the prediction is too high, it adjusts the weight to make a lower prediction the next time, and vice versa.\n4 Repeat!\nThis process is repeated many times, and the neuron continues to update the weights to improve its predictions until step 2 produces a very small error (close to zero), which means the neuron’s prediction is very close to the correct value. At this point, we can stop the training and save the weight values that yielded the best results to apply to future cases where the outcome is unknown. (A short runnable sketch of this loop appears a little later, after figure 2.8.)\n2.1.3 Is one neuron enough to solve complex problems?\nThe short answer is no, but let’s see why. The perceptron is a linear function. This means the trained neuron will produce a straight line that separates our data.\nSuppose we want to train a perceptron to predict whether a player will be accepted into the college squad. We collect all the data from previous years and train the\nFigure 2.6 Weights are tuned up and down during the learning process to optimize the value of the loss function.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 63}
|
62
|
page_content='44 CHAPTER 2 Deep learning and neural networks\nperceptron to predict whether players will be accepted based on only two features (height and age). The trained perceptron will find the best weights and bias values to produce the straight line that best separates the accepted from the non-accepted (best fit). The line has this equation:\nz = height · w1 + age · w2 + b\nAfter the training is complete on the training data, we can start using the perceptron to predict with new players. When we get a player who is 150 cm in height and 12 years old, we compute the previous equation with the values (150, 12). When plotted in a graph (figure 2.7), you can see that it falls below the line: the neuron is predicting that this player will not be accepted. If it falls above the line, then the player will be accepted.\nIn figure 2.7, the single perceptron works fine because our data was linearly separable. This means the training data can be separated by a straight line. But life isn’t always that simple. What happens when we have a more complex dataset that cannot be separated by a straight line (a nonlinear dataset)?\nAs you can see in figure 2.8, a single straight line will not separate our training data. We say that it does not fit our data. We need a more complex network for more complex data like this. What if we built a network with two perceptrons? This would produce two lines. Would that help us separate the data better?\nOkay, this is definitely better than the straight line. But I still see some color mispredictions. Can we add more neurons to make the function fit better? Now you are getting it. Conceptually, the more neurons we add, the better the network will fit our\nFigure 2.7 Linearly separable data (height in cm on the y-axis, age on the x-axis) can be separated by a straight line.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 64}
|
63
|
page_content='45 Multilayer perceptrons\ntraining data. In fact, if we add too many neurons, this will make the network overfit the training data (not good). But we will talk about this later. The general rule here is that the more complex our network is, the better it learns the features of our data.\n2.2 Multilayer perceptrons\nWe saw that a single perceptron works great with simple datasets that can be separated by a line. But, as you can imagine, the real world is much more complex than that. This is where neural networks can show their full potential.\nLinear vs. nonlinear problems\n•Linear datasets—The data can be split with a single straight line.\n•Nonlinear datasets—The data cannot be split with a single straight line. We need more than one line to form a shape that splits the data.\nLook at this 2D data. In the linear problem, the stars and dots can be easily classified by drawing a single straight line. In nonlinear data, a single line will not separate both shapes.\nExamples of linear data (can be split by one straight line) and nonlinear data (need more than one line to split the data)\nFigure 2.8 In a nonlinear dataset, a single straight line cannot separate the training data. A network with two perceptrons (neuron 1 and neuron 2) can produce two lines and help separate the data further in this example. The axes are height (cm) and age, as in figure 2.7.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 65}
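Tying together the learning steps from section 2.1.2 and the college-squad example from section 2.1.3, here is a minimal perceptron trained with the simple "adjust the weights by the error" rule. The (height, age) data points, learning rate, epoch count, and feature scaling are all illustrative assumptions, not values from the book.

import numpy as np

def step(z):
    # Fires (returns 1) when the weighted sum is >= 0, as in figure 2.5.
    return 1 if z >= 0 else 0

# Made-up training data: [height_cm, age] and labels (1 = accepted, 0 = not accepted).
X = np.array([[195.0, 17.0], [190.0, 16.0], [200.0, 18.0],
              [150.0, 12.0], [160.0, 13.0], [155.0, 12.0]])
y = np.array([1, 1, 1, 0, 0, 0])

X = X / X.max(axis=0)     # scale each feature to [0, 1] so the updates behave
w = np.zeros(2)           # the two connection weights (height, age)
b = 0.0                   # the bias
learning_rate = 0.1

for epoch in range(100):
    mistakes = 0
    for xi, target in zip(X, y):
        prediction = step(np.dot(w, xi) + b)   # 1. feedforward
        error = target - prediction            # 2. compare with the correct label
        if error != 0:                          # 3. nudge the weights up or down
            w += learning_rate * error * xi
            b += learning_rate * error
            mistakes += 1
    if mistakes == 0:                           # 4. repeat until the error reaches zero
        break

new_player = np.array([150.0, 12.0]) / np.array([200.0, 18.0])  # same scaling as training
print(step(np.dot(w, new_player) + b))   # expected to print 0, i.e., "not accepted"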
|
64
|
page_content='46 CHAPTER 2Deep learning and neural networks\nTo split a nonlinear dataset, we need more than one line. This means we need to\ncome up with an architecture to use tens and hundreds of neurons in our neural net-\nwork. Let’s look at the example in figure 2.9. Remember that a perceptron is a linear\nfunction that produces a straight line. So in order to fit this data, we try to create a\ntriangle-like shape that splits the dark dots. It looks like three lines would do the job.\nFigure 2.9 is an example of a small neural network that is used to model nonlinear data.\nIn this network, we used three neurons stacked together in one layer called a hidden layer ,\nso called because we don’t see the output of these layers during the training process. \n2.2.1 Multilayer perceptron architecture \nWe’ve seen how a neural network can be designed to have more than one neuron.\nLet’s expand on this idea with a more complex dataset. The diagram in figure 2.10 is\nfrom the Tensorflow playground website ( https:/ /playground.tensorflow.org ). We try\nto model a spiral dataset to distinguish between two classes. In order to fit this dataset,\nwe need to build a neural network that contains tens of neurons. A very common neu-\nral network architecture is to stack the neurons in layers on top of each other, called\nhidden layers . Each layer has n number of neurons. Layers are connected to each other\nby weight connections. This leads to the multilayer perceptron (MLP) architecture in\nthe figure.\n The main components of the neural network architecture are as follows:\n\uf0a1Input layer —Contains the feature vector.\n\uf0a1Hidden layers —The neurons are stacked on top of each other in hidden layers.\nThey are called “hidden” layers because we don’t see or control the input going\ninto these layers or the output. All we do is feed the feature vector to the input\nlayer and see the output coming out of the output layer.\n\uf0a1Weight connections (edges) —Weights are assigned to each connection between the\nnodes to reflect the importance of their influence on the final output predic-\ntion. In graph network terms, these are called edges connecting the nodes .Input features Output Hidden layer\nx\nx1\n2Figure 2.9 A perceptron is a linear \nfunction that produces a straight line. \nSo to fit this data, we need three \nperceptrons to create a triangle-like \nshape that splits the dark dots.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 66}
|
65
|
page_content='■ Output layer—We get the answer or prediction from our model from the output layer. Depending on the setup of the neural network, the final output may be a real-valued output (regression problem) or a set of probabilities (classification problem). This is determined by the type of activation function we use in the neurons in the output layer. We’ll discuss the different types of activation functions in the next section.\nWe discussed the input layer, weights, and output layer. The next area of this architecture is the hidden layers.\n2.2.2 What are hidden layers?\nThis is where the core of the feature-learning process takes place. When you look at the hidden layer nodes in figure 2.10, you see that the early layers detect simple patterns to learn low-level features (straight lines). Later layers detect patterns within patterns to learn more complex features and shapes, then patterns within patterns within patterns, and so on. This concept will come in handy when we discuss convolutional networks in later chapters. For now, know that, in neural networks, we stack hidden layers to learn complex features from each other until we fit our data. So when you are designing your neural network, if your network is not fitting the data, the solution could be adding more hidden layers.\n[Figure: input features x1, x2 feed six hidden layers of 6, 6, 6, 6, 6, and 2 neurons; the nodes in each layer are the new features learned after that layer]\nFigure 2.10 Tensorflow playground example representation of the feature learning in a deep neural network\n2.2.3 How many layers, and how many nodes in each layer?\nAs a machine learning engineer, you will mostly be designing your network and tuning its hyperparameters. While there is no single prescribed recipe that fits all models, we will try throughout this book to build your hyperparameter tuning intuition, as' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 67}
|
66
|
page_content='well as recommend some starting points. The number of layers and the number of neurons in each layer are among the important hyperparameters you will be designing when working with neural networks.\nA network can have one or more hidden layers (technically, as many as you want). Each layer has one or more neurons (again, as many as you want). Your main job, as a machine learning engineer, is to design these layers. Usually, when we have two or more hidden layers, we call this a deep neural network. The general rule is this: the deeper your network is, the more it will fit the training data. But too much depth is not a good thing, because the network can fit the training data so much that it fails to generalize when you show it new data (overfitting); also, it becomes more computationally expensive. So your job is to build a network that is not too simple (one neuron) and not too complex for your data. It is recommended that you read about different neural network architectures that are successfully implemented by others to build an intuition about what is too simple for your problem. Start from that point, maybe three to five layers (if you are training on a CPU), and observe the network performance. If it is performing poorly (underfitting), add more layers. If you see signs of overfitting (discussed later), then decrease the number of layers. To build a sense of how neural networks perform when you add more layers, play around with the Tensorflow playground (https://playground.tensorflow.org).\nFully connected layers\nIt is important to call out that the layers in classical MLP network architectures are fully connected to the next hidden layer. In the following figure, notice that each node in a layer is connected to all nodes in the previous layer. This is called a fully connected network. These edges are the weights that represent the importance of this node to the output value.\n[Figure: input features → hidden layer 1 (n_units) → hidden layer 2 (n_units) → output layer (n_out), with every node connected to all nodes in the previous layer]\nA fully connected network' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 68}
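As a quick, hands-on way to try the advice above about starting small and adding layers when the network underfits, here is a minimal Keras sketch (not from the book). The two-feature input, the layer width of 6, and the two-class softmax output are illustrative assumptions loosely mirroring the Tensorflow playground example, not values taken from the text.

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    def build_mlp(n_hidden_layers=3, n_units=6):
        # Build a small fully connected MLP; add hidden layers if the model underfits.
        model = Sequential()
        model.add(Dense(n_units, activation='relu', input_dim=2))  # first hidden layer
        for _ in range(n_hidden_layers - 1):
            model.add(Dense(n_units, activation='relu'))           # additional hidden layers
        model.add(Dense(2, activation='softmax'))                  # output layer: two classes
        return model

    build_mlp(n_hidden_layers=3).summary()
    build_mlp(n_hidden_layers=5).summary()   # try a deeper network if the data is underfit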
|
67
|
page_content='In later chapters, we will discuss other variations of neural network architecture (like convolutional and recurrent networks). For now, know that this is the most basic neural network architecture, and it can be referred to by any of these names: ANN, MLP, fully connected network, or feedforward network.\nLet’s do a quick exercise to find out how many edges we have in our example. Suppose that we designed an MLP network with two hidden layers, and each has five neurons:\n■ Weights_0_1: (4 nodes in the input layer) × (5 nodes in layer 1) + 5 biases [1 bias per neuron] = 25 edges\n■ Weights_1_2: 5 × 5 nodes + 5 biases = 30 edges\n■ Weights_2_output: 5 × 3 nodes + 3 biases = 18 edges\n■ Total edges (weights) in this network = 73\nWe have a total of 73 weights in this very simple network. The values of these weights are randomly initialized, and then the network performs feedforward and backpropagation to learn the values of the weights that best fit our model to the training data.\nTo see the number of weights in this network, try to build this simple network in Keras as follows:\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense\n\nmodel = Sequential([\n    Dense(5, input_dim=4),\n    Dense(5),\n    Dense(3)\n])\nAnd print the model summary:\nmodel.summary()\nThe output will be as follows:\n_________________________________________________________________\nLayer (type)                 Output Shape              Param #\n=================================================================\ndense (Dense)                (None, 5)                 25\n_________________________________________________________________\ndense_1 (Dense)              (None, 5)                 30\n_________________________________________________________________\ndense_2 (Dense)              (None, 3)                 18\n=================================================================\nTotal params: 73\nTrainable params: 73\nNon-trainable params: 0' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 69}
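If you want to verify these numbers without running Keras, the arithmetic can be scripted directly. The following is a small helper sketch (not from the book); layer_sizes is simply the 4-5-5-3 architecture described above.

    # Each Dense layer has (inputs × units) weights plus one bias per unit.
    def dense_param_count(n_inputs, n_units):
        return n_inputs * n_units + n_units

    layer_sizes = [4, 5, 5, 3]   # input, hidden 1, hidden 2, output
    total = 0
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        params = dense_param_count(n_in, n_out)
        total += params
        print(f"{n_in} -> {n_out}: {params} parameters")
    print("Total:", total)       # 25 + 30 + 18 = 73, matching model.summary()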
|
68
|
page_content='50 CHAPTER 2Deep learning and neural networks\n2.2.4 Some takeaways from this section \nLet’s recap what we’ve discussed so far:\n\uf0a1We talked about the analogy between biological and artificial neurons: both\nhave inputs and a neuron that does some calculations to modulate the input\nsignals and create output.\n\uf0a1We zoomed in on the artificial neuron’s calculations to explore its two main\nfunctions: weighted sum and the activation function. \n\uf0a1We saw that the network assigns random weights to all the edges. These weight\nparameters reflect the usefulness (or importance) of these features on the out-\nput prediction.\n\uf0a1Finally, we saw that perceptrons contain a single neuron. They are linear func-\ntions that produce a straight line to split linear data. In order to split more com-\nplex data (nonlinear), we need to apply more than one neuron in our network\nto form a multilayer perceptron.\n\uf0a1The MLP architecture contains input features, connection weights, hidden lay-\ners, and an output layer.\n\uf0a1We discussed the high-level process of how the perceptron learns. The learning\nprocess is a repetition of three main steps: feedforward calculations to produce\na prediction (weighted sum and activation), calculating the error, and back-\npropagating the error and updating the weights to minimize the error. \nWe should also keep in mind some of the important points about neural network\nhyperparameters:\n\uf0a1Number of hidden layers —You can have as many layers as you want, each with as\nmany neurons as you want. The general idea is that the more neurons you have,\nthe better your network will learn the training data. But if you have too many\nneurons, this might lead to a phenomenon called overfitting : the network\nlearned the training set so much that it memorized it instead of learning its fea-\ntures. Thus, it will fail to generalize. To get the appropriate number of layers,\nstart with a small network, and observe the network performance. Then start\nadding layers until you get satisfying results.\n\uf0a1Activation function —There are many types of activation functions, the most pop-\nular being ReLU and softmax. It is recommended that you use ReLU activation\nin the hidden layers and Softmax for the output layer (you will see how this is\nimplemented in most projects in this book).\n\uf0a1Error function —Measures how far the network’s prediction is from the true\nlabel. Mean square error is common for regression problems, and cross-entropy\nis common for classification problems.\n\uf0a1Optimizer —Optimization algorithms are used to find the optimum weight values\nthat minimize the error. There are several optimizer types to choose from. In\nthis chapter, we discuss batch gradient descent, stochastic gradient descent, and' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 70}
|
69
|
page_content='51 Activation functions\nmini-batch gradient descent. Adam and RMSprop are two other popular opti-\nmizers that we don’t discuss. \n\uf0a1Batch size —Mini-batch size is the number of sub-samples given to the network,\nafter which parameter update happens. Bigger batch sizes learn faster but\nrequire more memory space. A good default for batch size might be 32. Also try\n64, 128, 256, and so on.\n\uf0a1Number of epochs —The number of times the entire training dataset is shown to the\nnetwork while training. Increase the number of epochs until the validation accu-\nracy starts decreasing even when training accuracy is increasing (overfitting).\n\uf0a1Learning rate —One of the optimizer’s input parameters that we tune. Theoreti-\ncally, a learning rate that is too small is guaranteed to reach the minimum error\n(if you train for infinity time). A learning rate that is too big speeds up the\nlearning but is not guaranteed to find the minimum error. The default lr value\nof the optimizer in most DL libraries is a reasonable start to get decent results.\nFrom there, go down or up by one order of magnitude. We will discuss the\nlearning rate in detail in chapter 4.\n2.3 Activation functions\nWhen you are building your neural network, one of the design decisions that you will\nneed to make is what activation function to use for your neurons’ calculations. Activa-\ntion functions are also referred to as transfer functions or nonlinearities because they\ntransform the linear combination of a weighted sum into a nonlinear model. An acti-\nvation function is placed at the end of each perceptron to decide whether to activate\nthis neuron. More on hyperparameters\nOther hyperparameters that we have not discussed yet include dropout and regular-\nization. We will discuss hyperparameter tuning in detail in chapter 4, after we cover\nconvolutional neural networks in chapter 3. \nIn general, the best way to tune hyperparameters is by trial and error. By getting your\nhands dirty with your own projects as well as learning from other existing neural net-\nwork architectures, you will start to develop intuition about good starting points for\nyour hyperparameters. \nLearn to analyze your network’s performance and understand which hyperparameter\nyou need to tune for each symptom. And this is what we are going to do in this book.\nBy understanding the reasoning behind these hyperparameters and observing the\nnetwork performance in the projects at the end of the chapters, you will develop a\nfeel for which hyperparameter to tune for a particular effect. For example, if you see\nthat your error value is not decreasing and keeps oscillating, then you might fix that\nby reducing the learning rate. Or, if you see that the network is performing poorly in\nlearning the training data, this might mean that the network is underfitting and you\nneed to build a more complex model by adding more neurons and hidden layers.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 71}
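To show where the hyperparameters listed above plug in, here is a minimal, hedged Keras sketch (not from the book). The network shape, the randomly generated x_train/y_train arrays, and the specific values (Adam with learning rate 0.001, batch size 32, 20 epochs) are illustrative choices, not recommendations from the text beyond the defaults it mentions.

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense
    from tensorflow.keras.optimizers import Adam

    # Illustrative stand-in data: 600 samples, 10 features, 3 one-hot-encoded classes.
    x_train = np.random.rand(600, 10)
    y_train = np.eye(3)[np.random.randint(0, 3, 600)]

    model = Sequential([
        Dense(64, activation='relu', input_dim=10),   # hidden layer with ReLU
        Dense(3, activation='softmax')                # output layer for 3 classes
    ])

    model.compile(
        optimizer=Adam(learning_rate=0.001),   # learning rate: move up/down by orders of magnitude
        loss='categorical_crossentropy',       # error function for multiclass classification
        metrics=['accuracy'])

    model.fit(x_train, y_train,
              batch_size=32,                   # mini-batch size: 32 is a common default
              epochs=20,                       # watch validation accuracy for overfitting
              validation_split=0.2)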
|
70
|
page_content='Why use activation functions at all? Why not just calculate the weighted sum of our network and propagate that through the hidden layers to produce an output?\nThe purpose of the activation function is to introduce nonlinearity into the network. Without it, a multilayer perceptron will perform just like a single perceptron no matter how many layers we add. Activation functions are also needed to restrict the output to a finite range. Let’s revisit the example of predicting whether a player gets accepted (figure 2.11).\nFirst, the model calculates the weighted sum and produces the linear function (z):\nz = height · w1 + age · w2 + b\nThe output of this function has no bound; z could literally be any number. We use an activation function to map the prediction to a finite value. In this example, we use a step function: if z > 0, the point is above the line (accepted), and if z < 0, it is below the line (rejected). So without the activation function, we just have a linear function that produces a number, but no decision is made in this perceptron. The activation function is what decides whether to fire this perceptron.\nThere are infinitely many possible activation functions. In fact, the last few years have seen a lot of progress in the creation of state-of-the-art activations. However, there are still relatively few activations that account for the vast majority of activation needs. Let’s dive deeper into some of the most common types of activation functions.\nFigure 2.11 This example revisits the prediction of whether a player gets accepted from section 2.1.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 72}
|
71
|
page_content="53 Activation functions\n2.3.1 Linear transfer function \nA linear transfer function , also called an identity function , indicates that the function\npasses a signal through unchanged. In practical terms, the output will be equal to the\ninput, which means we don’t actually have an activation function. So no matter how\nmany layers our neural network has, all it is doing is computing a linear activation\nfunction or, at most, scaling the weighted average coming in. But it doesn’t transform\ninput into a nonlinear function.\nactivation( z) = z = wx + b\nThe composition of two linear functions is a linear function, so unless you throw a\nnonlinear activation function in your neural network, you are not computing any\ninteresting functions no matter how deep you make your network. No learning here!\n To understand why, let’s calculate the derivative of the activation z(x) = w · x + b,\nwhere w = 4 and b = 0. When we plot this function, it looks like figure 2.12. Then the\nderivative of z(x) = 4 x is z'(x) = 4 (figure 2.13).\nThe derivative of a linear function is constant: it does not depend on the input value\nx. This means that every time we do a backpropagation, the gradient will be the same.\nAnd this is a big problem: we are not really improving the error, since the gradient is\npretty much the same. This will be clearer when we discuss backpropagation later in\nthis chapter.fx x( ) = 4\n4y\n3\n2\n1\n–1\n–4–31 2 3 4x –4 –3 –2 –1\n–2\nFigure 2.12 The plot for the \nactivation function f(x) = 4 x" metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 73}
|
72
|
page_content="54 CHAPTER 2Deep learning and neural networks\n2.3.2 Heaviside step function (binary classifier)\nThe step function produces a binary output. It basically says that if the input x > 0, it\nfires (output y = 1); else (input < 0), it doesn’t fire (output y = 0). It is mainly used in\nbinary classification problems like true or false, spam or not spam, and pass or fail\n(figure 2.14).f' x g x( ) = ( ) = 4y\n3\n2\n1\n–1\n–2\n–4–31 2 3 4 x –4 –3 –2 –14\nFigure 2.13 The plot for the \nderivative of z(x) = 4x is z'(x) = 4.\n1.0\n0.8\n0.6\n0.4\n0.2\n0.0\n–4 –3 –2 –1 0 1 2 3 4\nZStep function\nOutput =0 If\n1 Ifwx b ≤\nw x b >•\n•+0\n+0\nFigure 2.14 Step functions are commonly used in binary classification problems because they \ntransform the input into zero or one." metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 74}
|
73
|
page_content='2.3.3 Sigmoid/logistic function\nThis is one of the most common activation functions. It is often used in binary classifiers to predict the probability of a class when you have two classes. The sigmoid squishes all the values to a probability between 0 and 1, which reduces extreme values or outliers in the data without removing them. Sigmoid or logistic functions convert infinite continuous variables (ranging from –∞ to +∞) into simple probabilities between 0 and 1. It is also called the S-shape curve because when plotted on a graph, it produces an S-shaped curve. While the step function is used to produce a discrete answer (pass or fail), sigmoid is used to produce the probability of passing and the probability of failing (figure 2.15):\nσ(z) = 1 / (1 + e^(–z))\nHere is how sigmoid is implemented in Python:\nimport numpy as np\n\ndef sigmoid(x):\n    # sigmoid activation function\n    return 1 / (1 + np.exp(-x))\nFigure 2.15 While the step function is used to produce a discrete answer (pass or fail), sigmoid is used to produce the probability of passing or failing.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 75}
|
74
|
page_content='Just-in-time linear algebra (optional)\nLet’s take a deeper dive into the math side of the sigmoid function to understand the problem it helps solve and how the sigmoid function equation is derived. Suppose that we are trying to predict whether patients have diabetes based on only one feature: their age. When we plot the data we have about our patients, we get the linear model shown in the figure:\nz = β0 + β1 · age\nIn this plot, you can observe the balance of probabilities that should go from 0 to 1. Note that when patients are below the age of 25, the predicted probabilities are negative; meanwhile, they are higher than 1 (100%) when patients are older than 43 years old. This is a clear example of why linear functions do not work in most cases. Now, how do we fix this to give us probabilities within the range 0 < probability < 1?\nFirst, we need to do something to eliminate all the negative probability values. The exponential function is a great solution for this problem because the exponent of anything (and I mean anything) is always going to be positive. So let’s apply that to our linear equation to calculate the probability (p):\np = exp(z) = exp(β0 + β1 · age)\nThis equation ensures that we always get probabilities greater than 0. Now, what about the values that are higher than 1? We need to do something about them. With proportions, any given number divided by a number that is greater than it will give us a number smaller than 1. Let’s do exactly that to the previous equation. We divide the equation by its value plus a small value: either 1 or a (in some cases very small) value—let’s call it epsilon (ε):\np = exp(z) / (exp(z) + ε)\n[Figure: the linear model we get when we plot our data about our patients]' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 76}
|
75
|
page_content='If you divide the numerator and the denominator by exp(z), you get\np = 1 / (1 + exp(–z))\nWhen we plot the probability of this equation, we get the S shape of the sigmoid function, where the probability is no longer below 0 or above 1. In fact, as patients’ ages grow, the probability asymptotically gets closer to 1; and as the weights move down, the function asymptotically gets closer to 0 but is never outside the 0 < p < 1 range. This is the plot of the sigmoid function and logistic regression.\n[Figure: as patients get older, the probability asymptotically gets closer to 1; this is the plot of the sigmoid function and logistic regression]\n2.3.4 Softmax function\nThe softmax function is a generalization of the sigmoid function. It is used to obtain classification probabilities when we have more than two classes. It forces the outputs of a neural network to sum to 1, with each output between 0 and 1. A very common use case in deep learning problems is to predict a single class out of many options (more than two).\nThe softmax equation is as follows:\nσ(xj) = e^(xj) / Σi e^(xi)\nFigure 2.16 shows an example of the softmax function.\nFigure 2.16 The softmax function transforms the input values (for example, 1.2, 0.9, 0.4) to probability values (0.46, 0.34, 0.20) between 0 and 1.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 77}
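Here is a short NumPy sketch (not from the book) of the softmax equation above; subtracting the maximum before exponentiating is a common numerical-stability trick that the text does not mention but that does not change the result.

    import numpy as np

    def softmax(x):
        exps = np.exp(x - np.max(x))   # subtract the max for numerical stability
        return exps / np.sum(exps)

    print(softmax(np.array([1.2, 0.9, 0.4])))
    # ~[0.46, 0.34, 0.21], matching the example in figure 2.16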
|
76
|
page_content='TIP Softmax is the go-to function that you will often use at the output layer of a classifier when you are working on a problem where you need to predict a class from among more than two classes. Softmax works fine if you are classifying two classes as well; it will basically work like a sigmoid function. By the end of this section, I’ll give you my recommendations about when to use each activation function.\n2.3.5 Hyperbolic tangent function (tanh)\nThe hyperbolic tangent function is a shifted version of the sigmoid function. Instead of squeezing the signal values between 0 and 1, tanh squishes all values into the range –1 to 1. Tanh almost always works better than the sigmoid function in hidden layers because it has the effect of centering your data so that the mean of the data is close to zero rather than 0.5, which makes learning for the next layer a little bit easier:\ntanh(x) = sinh(x) / cosh(x) = (e^x – e^(–x)) / (e^x + e^(–x))\nOne of the downsides of both sigmoid and tanh functions is that if (z) is very large or very small, then the gradient (or derivative, or slope) of this function becomes very small (close to zero), which will slow down gradient descent (figure 2.17). This is when the ReLU activation function (explained next) provides a solution.\n2.3.6 Rectified linear unit\nThe rectified linear unit (ReLU) activation function activates a node only if the input is above zero. If the input is below zero, the output is always zero. But when the input is higher than zero, it has a linear relationship with the output variable. The ReLU function is represented as follows:\nf(x) = max(0, x)\nFigure 2.17 If (z) is very large or very small, then the gradient (or derivative or slope) of this function becomes very small (close to zero).' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 78}
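As a quick numeric illustration (not from the book) of the saturation problem of sigmoid and tanh described above, the derivative of tanh collapses toward zero for large |z|:

    import numpy as np

    def tanh_grad(z):
        return 1.0 - np.tanh(z) ** 2   # derivative of tanh

    for z in [0.0, 2.0, 5.0, 10.0]:
        print(z, round(tanh_grad(z), 6))
    # the gradient drops from 1.0 toward ~0 as z grows, slowing gradient descent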
|
77
|
page_content='At the time of writing, ReLU is considered the state-of-the-art activation function because it works well in many different situations, and it tends to train better than sigmoid and tanh in hidden layers (figure 2.18).\nHere is how ReLU is implemented in Python:\ndef relu(x):\n    # ReLU activation function\n    if x < 0:\n        return 0\n    else:\n        return x\n2.3.7 Leaky ReLU\nOne disadvantage of ReLU activation is that the derivative is equal to zero when (x) is negative. Leaky ReLU is a ReLU variation that tries to mitigate this issue. Instead of having the function be zero when x < 0, leaky ReLU introduces a small negative slope (around 0.01) when (x) is negative. It usually works better than the ReLU function, although it’s not used as much in practice. Take a look at the leaky ReLU graph in figure 2.19; can you see the leak?\nf(x) = max(0.01x, x)\nWhy 0.01? Some people like to use this as another hyperparameter to tune, but that would be overkill, since you already have other, bigger problems to worry about. Feel free to try different values (0.1, 0.01, 0.002) in your model and see how they work.\nFigure 2.18 The ReLU function eliminates all negative values of the input by transforming them into zeros.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 79}
|
78
|
page_content='Here is how leaky ReLU is implemented in Python:\ndef leaky_relu(x):\n    # leaky ReLU activation function with a 0.01 leak\n    if x < 0:\n        return x * 0.01\n    else:\n        return x\nFigure 2.19 Instead of having the function be zero when x < 0, leaky ReLU introduces a small negative slope (around 0.01) when (x) is negative.\nTable 2.1 summarizes the various activation functions we’ve discussed in this section.\nTable 2.1 A cheat sheet of the most common activation functions\n■ Linear transfer function (identity function)—The signal passes through it unchanged. It remains a linear function. Almost never used. Equation: f(x) = x\n■ Heaviside step function (binary classifier)—Produces a binary output of 0 or 1. Mainly used in binary classification to give a discrete value. Equation: output = 0 if w · x + b ≤ 0; output = 1 if w · x + b > 0' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 80}
|
79
|
page_content='Table 2.1 A cheat sheet of the most common activation functions (continued)\n■ Sigmoid/logistic function—Squishes all the values to a probability between 0 and 1, which reduces extreme values or outliers in the data. Usually used to classify two classes. Equation: σ(z) = 1 / (1 + e^(–z))\n■ Softmax function—A generalization of the sigmoid function. Used to obtain classification probabilities when we have more than two classes. Equation: σ(xj) = e^(xj) / Σi e^(xi)\n■ Hyperbolic tangent function (tanh)—Squishes all values to the range of –1 to 1. Tanh almost always works better than the sigmoid function in hidden layers. Equation: tanh(x) = sinh(x) / cosh(x) = (e^x – e^(–x)) / (e^x + e^(–x))\n■ Rectified linear unit (ReLU)—Activates a node only if the input is above zero. Always recommended for hidden layers. Better than tanh. Equation: f(x) = max(0, x)\n■ Leaky ReLU—Instead of having the function be zero when x < 0, leaky ReLU introduces a small negative slope (around 0.01) when (x) is negative. Equation: f(x) = max(0.01x, x)' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 81}
|
80
|
page_content='Hyperparameter alert\nDue to the number of activation functions, it may appear to be an overwhelming task to select the appropriate activation function for your network. While it is important to select a good activation function, I promise this is not going to be a challenging task when you design your network. There are some rules of thumb that you can start with, and then you can tune the model as needed. If you are not sure what to use, here are my two cents about choosing an activation function:\n■ For hidden layers—In most cases, you can use the ReLU activation function (or leaky ReLU) in hidden layers, as you will see in the projects that we will build throughout this book. It is increasingly becoming the default choice because it is a bit faster to compute than other activation functions. More importantly, it reduces the likelihood of the gradient vanishing because it does not saturate for large input values—as opposed to the sigmoid and tanh activation functions, which saturate at ~1. Remember, the gradient is the slope. When the function plateaus, this will lead to no slope; hence, the gradient starts to vanish. This makes it harder to descend to the minimum error (we will talk more about this phenomenon, called vanishing/exploding gradients, in later chapters).\n■ For the output layer—The softmax activation function is generally a good choice for most classification problems when the classes are mutually exclusive. The sigmoid function serves the same purpose when you are doing binary classification. For regression problems, you can simply use no activation function at all, since the weighted sum node produces the continuous output that you need: for example, if you want to predict house pricing based on the prices of other houses in the same neighborhood.\n2.4 The feedforward process\nNow that you understand how to stack perceptrons in layers, connect them with weights/edges, perform a weighted sum function, and apply activation functions, let’s implement the complete forward-pass calculations to produce a prediction output. The process of computing the linear combination and applying the activation function is called feedforward. We briefly discussed feedforward several times in the previous sections; let’s take a deeper look at what happens in this process.\nThe term feedforward is used to imply the forward direction in which the information flows from the input layer through the hidden layers, all the way to the output layer. This process happens through the implementation of two consecutive functions: the weighted sum and the activation function. In short, the forward pass is the calculations through the layers to make a prediction.\nLet’s take a look at the simple three-layer neural network in figure 2.20 and explore each of its components:\n■ Layers—This network consists of an input layer with three input features, and three hidden layers with 3, 4, and 1 neurons, respectively.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 82}
|
81
|
page_content='■ Weights and biases (w, b)—The edges between nodes are assigned random weights denoted as Wab(n), where (n) indicates the layer number and (ab) indicates the weighted edge connecting the ath neuron in layer (n) to the bth neuron in the previous layer (n – 1). For example, W23(2) is the weight that connects the second node in layer 2 to the third node in layer 1 (a22 to a31). (Note that you may see different notations for Wab(n) in other DL literature, which is fine as long as you follow one convention for your entire network.)\nThe biases are treated similarly to weights because they are randomly initialized, and their values are learned during the training process. So, for convenience, from this point forward we are going to represent the biases with the same notation that we gave for the weights (w). In DL literature, you will mostly find all weights and biases represented as (w) for simplicity.\n■ Activation functions σ(x)—In this example, we are using the sigmoid function σ(x) as the activation function.\n■ Node values (a)—We will calculate the weighted sum, apply the activation function, and assign this value to the node amn, where n is the layer number and m is the node index in the layer. For example, a23 means node number 2 in layer 3.\n[Figure: input layer (3 nodes) → layer 1 (3 nodes) → layer 2 (4 nodes) → layer 3 (1 node), with weights Wab(n) labeling the edges between layers]\nFigure 2.20 A simple three-layer neural network' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 83}
|
82
|
page_content='2.4.1 Feedforward calculations\nWe have all we need to start the feedforward calculations:\na1(1) = σ(w11(1)x1 + w21(1)x2 + w31(1)x3)\na2(1) = σ(w12(1)x1 + w22(1)x2 + w32(1)x3)\na3(1) = σ(w13(1)x1 + w23(1)x2 + w33(1)x3)\nThen we do the same calculations for layer 2 (a1(2), a2(2), a3(2), and a4(2)), all the way to the output prediction in layer 3:\nŷ = a1(3) = σ(w11(3)a1(2) + w12(3)a2(2) + w13(3)a3(2) + w14(3)a4(2))\nAnd there you have it! You just calculated the feedforward pass of a three-layer neural network. Let’s take a moment to reflect on what we just did. Take a look at how many equations we need to solve for such a small network. What happens when we have a more complex problem with hundreds of nodes in the input layer and hundreds more in the hidden layers? It is more efficient to use matrices to pass through multiple inputs at once. Doing this allows for big computational speedups, especially when using tools like NumPy, where we can implement this with one line of code.\nLet’s see how the matrices computation looks (figure 2.21). All we did here is simply stack the inputs and weights in matrices and multiply them together. The intuitive way to read this equation is from the right to the left. Start at the far right and follow with me:\n■ We stack all the inputs together in one vector (row, column), in this case (3, 1).\n■ We multiply the input vector by the weights matrix from layer 1 (W(1)) and then apply the sigmoid function.\n■ We multiply the result by the weights matrix from layer 2 ⇒ σ · W(2), and then by layer 3 ⇒ σ · W(3).\n■ If we have a fourth layer, we multiply the result from step 3 by σ · W(4), and so on, until we get the final prediction output ŷ!\nHere is a simplified representation of this matrices formula:\nŷ = σ(W(3) · σ(W(2) · σ(W(1) · x)))' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 84}
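Here is a minimal NumPy sketch (not from the book) of the feedforward pass just described. The weight matrices are random stand-ins for illustration only; their shapes follow the 3-3-4-1 network of figure 2.20.

    import numpy as np

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(3, 3))   # layer 1: 3 inputs  -> 3 neurons
    W2 = rng.normal(size=(4, 3))   # layer 2: 3 neurons -> 4 neurons
    W3 = rng.normal(size=(1, 4))   # layer 3: 4 neurons -> 1 output

    x = np.array([0.5, 0.2, 0.9])  # input feature vector

    a1 = sigmoid(W1 @ x)           # activations of layer 1
    a2 = sigmoid(W2 @ a1)          # activations of layer 2
    y_hat = sigmoid(W3 @ a2)       # prediction: y_hat = σ(W3 · σ(W2 · σ(W1 · x)))
    print(y_hat)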
|
83
|
page_content='2.4.2 Feature learning\nThe nodes in the hidden layers (ai) are the new features that are learned after each layer. For example, if you look at figure 2.20, you see that we have three feature inputs (x1, x2, and x3). After computing the forward pass in the first layer, the network learns patterns, and these features are transformed to three new features with different values (a1(1), a2(1), a3(1)). Then, in the next layer, the network learns patterns within the patterns and produces new features (a1(2), a2(2), a3(2), and a4(2)), and so forth. The produced features after each layer are not totally understood, and we don’t see them, nor do we have much control over them. It is part of the neural network magic. That’s why they are called hidden layers. What we do is this: we look at the final output prediction and keep tuning some parameters until we are satisfied by the network’s performance.\nTo reiterate, let’s see this in a small example. In figure 2.22, you see a small neural network to estimate the price of a house based on three features: how many bedrooms it has, how big it is, and which neighborhood it is in. You can see that the original input feature values 3, 2,000, and 1 were transformed into new feature values (a1, a2, a3, and a4) after performing the feedforward process in the first layer. Then they were transformed again to a prediction output value (ŷ). When training a neural network, we see the prediction output and compare it with the true price to calculate the error and repeat the process until we get the minimum error.\nTo help visualize the feature-learning process, let’s take another look at figure 2.10 (repeated here in figure 2.23) from the Tensorflow playground. You can see that the first layer learns basic features like lines and edges. The second layer begins to learn more complex features like corners. The process continues until the last layers of the network learn even more complex feature shapes like circles and spirals that fit the dataset.\n[Figure: the matrix form ŷ = σ(W(3) · σ(W(2) · σ(W(1) · x))), where the input vector x is (3 × 1) and the weight matrices W(1), W(2), and W(3) are (3 × 3), (4 × 3), and (1 × 4)]\nFigure 2.21 Reading from left to right, we stack the inputs together in one vector, multiply the input vector by the weights matrix from layer 1, apply the sigmoid function, and multiply the result.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 85}
|
84
|
page_content='That is how a neural network learns new features: via the network’s hidden layers. First, they recognize patterns in the data. Then, they recognize patterns within patterns; then patterns within patterns within patterns, and so on. The deeper the network is, the more it learns about the training data.\n[Figure: input features (bedrooms = 3, square feet = 2,000, neighborhood mapped to ID 1) feed a hidden layer that produces new features a1–a4, which feed the output prediction ŷ, the final price estimate]\nFigure 2.22 A small neural network to estimate the price of a house based on three features: how many bedrooms it has, how big it is, and which neighborhood it is in\n[Figure: input features x1, x2 feed six hidden layers of 6, 6, 6, 6, 6, and 2 neurons; the nodes in each layer are the new features learned after that layer]\nFigure 2.23 Learning features in multiple hidden layers' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 86}
|
85
|
page_content='Vectors and matrices refresher\nIf you understood the matrix calculations we just did in the feedforward discussion, feel free to skip this sidebar. If you are still not convinced, hang tight: this sidebar is for you.\nThe feedforward calculations are a set of matrix multiplications. While you will not do these calculations by hand, because there are a lot of great DL libraries that do them for you with just one line of code, it is valuable to understand the mathematics that happens under the hood so you can debug your network. Especially because this is very trivial and interesting, let’s quickly review matrix calculations.\nLet’s start with some basic definitions of matrix dimensions:\n■ A scalar is a single number.\n■ A vector is an array of numbers.\n■ A matrix is a 2D array.\n■ A tensor is an n-dimensional array with n > 2.\nWe will follow the conventions used in most mathematical literature:\n■ Scalars are written in lowercase and italics: for instance, n.\n■ Vectors are written in lowercase, italics, and bold type: for instance, x.\n■ Matrices are written in uppercase, italics, and bold: for instance, X.\n■ Matrix dimensions are written as follows: (row × column).\nMultiplication:\n■ Scalar multiplication—Simply multiply the scalar number by all the numbers in the matrix. Note that scalar multiplication doesn’t change the matrix dimensions. For example:\n2 · [10 6; 4 3] = [2 · 10  2 · 6; 2 · 4  2 · 3] = [20 12; 8 6]\n■ Matrix multiplication—When multiplying two matrices, such as in the case of (row1 × column1) · (row2 × column2), column1 and row2 must be equal to each other, and the product will have the dimensions (row1 × column2). For example, a (1 × 3) vector times a (3 × 3) matrix gives a (1 × 3) product:\n[3 4 2] · [13 9 7; 8 7 4; 6 4 0] = [x y z]' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 87}
|
86
|
page_content='where x = 3 · 13 + 4 · 8 + 2 · 6 = 83, and the same for y = 63 and z = 37.\nNow that you know the matrix multiplication rules, pull out a piece of paper and work through the dimensions of the matrices in the earlier neural network example. The matrix equation from the main text (figure 2.21) can be used to work through the matrix dimensions.\nThe last thing I want you to understand about matrices is transposition. With transposition, you can convert a row vector to a column vector and vice versa, where the shape (m × n) is inverted and becomes (n × m). The superscript (A^T) is used for transposed matrices:\nA = [2 8], A^T = [2; 8]\nA = [1 2 3; 4 5 6; 7 8 9], A^T = [1 4 7; 2 5 8; 3 6 9]\n2.5 Error functions\nSo far, you have learned how to implement the forward pass in neural networks to produce a prediction that consists of the weighted sum plus activation operations. Now, how do we evaluate the prediction that the network just produced? More importantly, how do we know how far this prediction is from the correct answer (the label)? The answer is this: measure the error. The selection of an error function is another important aspect of the design of a neural network. Error functions can also be referred to as cost functions or loss functions, and these terms are used interchangeably in DL literature.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 88}
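As a quick check of the sidebar's worked example above, here is a NumPy sketch (not from the book); the vector and matrix are the ones used in the multiplication example (so the product reproduces x = 83, y = 63, z = 37), and the transpose call illustrates the (m × n) → (n × m) shape flip.

    import numpy as np

    v = np.array([[3, 4, 2]])          # (1 × 3) row vector
    M = np.array([[13, 9, 7],
                  [8, 7, 4],
                  [6, 4, 0]])          # (3 × 3) matrix
    print(v @ M)                       # [[83 63 37]] -> the x, y, z values above
    print(v.shape, v.T.shape)          # (1, 3) (3, 1): transposition flips rows and columns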
|
87
|
page_content='69 Error functions\n2.5.1 What is the error function? \nThe error function is a measure of how “wrong” the neural network prediction is with\nrespect to the expected output (the label). It quantifies how far we are from the cor-\nrect solution. For example, if we have a high loss, then our model is not doing a good\njob. The smaller the loss, the better the job the model is doing. The larger the loss,\nthe more our model needs to be trained to increase its accuracy.\n2.5.2 Why do we need an error function?\nCalculating error is an optimization problem, something all machine learning engi-\nneers love (mathematicians, too). Optimization problems focus on defining an error\nfunction and trying to optimize its parameters to get the minimum error (more on\noptimization in the next section). But for now, know that, in general, when we are\nworking on an optimization problem, if we are able to define the error function for\nthe problem, we have a very good shot at solving it by running optimization algo-\nrithms to minimize the error function.\n In optimization problems, our ultimate goal is to find the optimum variables\n(weights) that would minimize the error function as much as we can. If we don’t know\nhow far from the target we are, how will we know what to change in the next iteration?\nThe process of minimizing this error is called error function optimization . We will review\nseveral optimization methods in the next section. But for now, all we need to know\nfrom the error function is how far we are from the correct prediction, or how much\nwe missed the desired degree of performance. \n2.5.3 Error is always positive\nConsider this scenario: suppose we have two data points that we are trying to get our\nnetwork to predict correctly. If the first gives an error of 10 and the second gives an\nerror of –10, then our average error is zero! This is misleading because “error = 0”\nmeans our network is producing perfect predictions, when, in fact, it missed by 10\ntwice. We don’t want that. We want the error of each prediction to be positive, so the\nerrors don’t cancel each other when we take the average error. Think of an archer\naiming at a target and missing by 1 inch. We are not really concerned about which\ndirection they missed; all we need to know is how far each shot is from the target.\n A visualization of loss functions of two separate models plotted over time is shown\nin figure 2.24. You can see that model #1 is doing a better job of minimizing error,\nwhereas model #2 starts off better until epoch 6 and then plateaus.\n Different loss functions will give different errors for the same prediction, and thus\nhave a considerable effect on the performance of the model. A thorough discussion\nof loss functions is outside the scope of this book. Instead, we will focus on the two\nmost commonly used loss functions: mean squared error (and its variations), usually\nused for regression problems, and cross-entropy, used for classification problems.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 89}
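A tiny sketch (not from the book) of the cancellation problem described above: raw positive and negative errors average to zero, while absolute or squared errors cannot cancel each other out.

    import numpy as np

    errors = np.array([10.0, -10.0])
    print(np.mean(errors))           # 0.0   -> misleading "perfect" average error
    print(np.mean(np.abs(errors)))   # 10.0  -> average absolute error
    print(np.mean(errors ** 2))      # 100.0 -> average squared error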
|
88
|
page_content='Figure 2.24 A visualization of the loss functions of two separate models plotted over time\n2.5.4 Mean square error\nMean squared error (MSE) is commonly used in regression problems that require the output to be a real value (like house pricing). Instead of just comparing the prediction output with the label (ŷi – yi), the error is squared and averaged over the number of data points, as you see in this equation:\nE(W, b) = (1/N) Σ(i=1 to N) (ŷi – yi)²\nMSE is a good choice for a few reasons. The square ensures the error is always positive, and larger errors are penalized more than smaller errors. Also, it makes the math nice, which is always a plus. The notations in the formula are listed in table 2.2.\nMSE is quite sensitive to outliers, since it squares the error value. This might not be an issue for the specific problem that you are solving. In fact, this sensitivity to outliers might be beneficial in some cases. For example, if you are predicting a stock price, you would want to take outliers into account, and sensitivity to outliers would be a good thing. In other scenarios, you wouldn’t want to build a model that is skewed by outliers, such as when predicting a house price in a city. In that case, you are more interested in the median and less in the mean. A variation of MSE called' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 90}
|
89
|
page_content='mean absolute error (MAE) was developed just for this purpose. It averages the absolute error over the entire dataset without taking the square of the error:\nE(W, b) = (1/N) Σ(i=1 to N) |ŷi – yi|\nTable 2.2 Meanings of notation used in regression problems\n■ E(W, b)—The loss function. Also annotated as J(W, b) in other literature.\n■ W—Weights matrix. In some literature, the weights are denoted by the theta sign (θ).\n■ b—Biases vector.\n■ N—Number of training examples.\n■ ŷi—Prediction output. Also notated as hw,b(X) in some DL literature.\n■ yi—The correct output (the label).\n■ (ŷi – yi)—Usually called the residual.\n2.5.5 Cross-entropy\nCross-entropy is commonly used in classification problems because it quantifies the difference between two probability distributions. For example, suppose that for a specific training instance, we are trying to classify a dog image out of three possible classes (dogs, cats, fish). The true distribution for this training instance is as follows:\nP(cat) = 0.0, P(dog) = 1.0, P(fish) = 0.0\nWe can interpret this “true” distribution to mean that the training instance has 0% probability of being a cat, 100% probability of being a dog, and 0% probability of being a fish. Now, suppose our machine learning algorithm predicts the following probability distribution:\nP(cat) = 0.2, P(dog) = 0.3, P(fish) = 0.5\nHow close is the predicted distribution to the true distribution? That is what the cross-entropy loss function determines. We can use this formula:\nE(W, b) = – Σ(i=1 to m) yi log(pi)' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 91}
|
90
|
page_content='where (y) is the target probability, (p) is the predicted probability, and (m) is the number of classes. The sum is over the three classes: cat, dog, and fish. In this case, the loss is 1.2:\nE = –(0.0 · log(0.2) + 1.0 · log(0.3) + 0.0 · log(0.5)) = 1.2\nSo that is how “wrong” or “far away” our prediction is from the true distribution.\nLet’s do this one more time, just to show how the loss changes when the network makes better predictions. In the previous example, we showed the network an image of a dog, and it predicted that the image was 30% likely to be a dog, which was very far from the target prediction. In later iterations, the network learns some patterns and gets the predictions a little better, up to 50%:\nP(cat) = 0.3, P(dog) = 0.5, P(fish) = 0.2\nThen we calculate the loss again:\nE = –(0.0 · log(0.3) + 1.0 · log(0.5) + 0.0 · log(0.2)) = 0.69\nYou see that when the network makes a better prediction (dog is up to 50% from 30%), the loss decreases from 1.2 to 0.69. In the ideal case, when the network predicts that the image is 100% likely to be a dog, the cross-entropy loss will be 0 (feel free to try the math).\nTo calculate the cross-entropy error across all the training examples (n), we use this general formula:\nE(W, b) = – Σ(i=1 to n) Σ(j=1 to m) yij log(pij)\nNOTE It is important to note that you will not be doing these calculations by hand. Understanding how things work under the hood gives you better intuition when you are designing your neural network. In DL projects, we usually use libraries like Tensorflow, PyTorch, and Keras, where the error function is generally a parameter choice.\n2.5.6 A final note on errors and weights\nAs mentioned before, in order for the neural network to learn, it needs to minimize the error function as much as possible (0 is ideal). The lower the error, the higher the accuracy of the model in predicting values. How do we minimize the error?\nLet’s look at the following perceptron example with a single input to understand the relationship between the weight and the error:\n[Figure: a single-input perceptron: input X, weight W, function f(x), output Y]' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 92}
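Here is a brief NumPy sketch (not from the book) that reproduces the two cross-entropy values computed above:

    import numpy as np

    def cross_entropy(y_true, y_pred):
        return -np.sum(y_true * np.log(y_pred))

    y_true = np.array([0.0, 1.0, 0.0])                         # the "true" distribution (dog)
    print(cross_entropy(y_true, np.array([0.2, 0.3, 0.5])))    # ~1.20
    print(cross_entropy(y_true, np.array([0.3, 0.5, 0.2])))    # ~0.69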
|
91
|
page_content='Suppose the input x = 0.3, and its label (goal prediction) y = 0.8. The prediction output (ŷ) of this perceptron is calculated as follows:\nŷ = w · x = w · 0.3\nAnd the error, in its simplest form, is calculated by comparing the prediction ŷ and the label y:\nerror = |ŷ – y| = |(w · x) – y| = |w · 0.3 – 0.8|\nIf you look at this error function, you will notice that the input (x) and the goal prediction (y) are fixed values. They will never change for these specific data points. The only two variables that we can change in this equation are the error and the weight. Now, if we want to get to the minimum error, which variable can we play with? Correct: the weight! The weight acts as a knob that the network needs to adjust up and down until it gets the minimum error. This is how the network learns: by adjusting the weight. When we plot the error function with respect to the weight, we get the graph shown in figure 2.25.\nAs mentioned before, we initialize the network with random weights. The weight lies somewhere on this curve, and our mission is to make it descend this curve to its optimal value with the minimum error. The process of finding the goal weights of the neural network happens by adjusting the weight values in an iterative process using an optimization algorithm.\nFigure 2.25 The network learns by adjusting the weight. When we plot the error function with respect to the weight, we get this type of graph (the cost function J(w), with a slope leading from the starting weight down to the goal weight).' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 93}
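To see the "knob" idea in code, here is a tiny sketch (not from the book) that sweeps the weight for the single-input perceptron above and reports the value with the smallest error:

    import numpy as np

    x, y = 0.3, 0.8
    weights = np.linspace(0, 5, 501)        # candidate weight values
    errors = np.abs(weights * x - y)        # |w·0.3 - 0.8| for each candidate
    best = weights[np.argmin(errors)]
    print(best)                             # ~2.67, since 2.67 * 0.3 ≈ 0.8 minimizes the error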
|
92
|
page_content='74 CHAPTER 2Deep learning and neural networks\n2.6 Optimization algorithms\nTraining a neural network involves showing the network many examples (a training\ndataset); the network makes predictions through feedforward calculations and com-\npares them with the correct labels to calculate the error. Finally, the neural network\nneeds to adjust the weights (on all edges) until it gets the minimum error value, which\nmeans maximum accuracy. Now, all we need to do is build algorithms that can find\nthe optimum weights for us.\n2.6.1 What is optimization?\nAhh, optimization! A topic that is dear to my heart, and dear to every machine learn-\ning engineer (mathematicians too). Optimization is a way of framing a problem to\nmaximize or minimize some value. The best thing about computing an error function\nis that we turn the neural network into an optimization problem where our goal is to\nminimize the error . \n Suppose you want to optimize your commute from home to work. First, you need\nto define the metric that you are optimizing (the error function). Maybe you want to\noptimize the cost of the commute, or the time, or the distance. Then, based on that\nspecific loss function, you work on minimizing its value by changing some parameters.\nChanging the parameters to minimize (or maximize) a value is called optimization . If\nyou choose the loss function to be the cost, maybe you will choose a longer commute\nthat will take two hours, or (hypothetically) you might walk for five hours to minimize\nthe cost. On the other hand, if you want to optimize the time spent commuting,\nmaybe you will spend $50 to take a cab that will decrease the commute time to 20 min-\nutes. Based on the loss function you defined, you can start changing your parameters\nto get the results you want.\nTIP In neural networks, optimizing the error function means updating the\nweights and biases until we find the optimal weights , or the best values for the\nweights to produce the minimum error.\nLet’s look at the space that we are trying to optimize:\nIn a neural network of the simplest form, a perceptron with one input, we have only\none weight. We can easily plot the error (that we are trying to minimize) with respect\nto this weight, represented by the 2D curve in figure 2.26 (repeated from earlier).\n But what if we have two weights? If we graph all the possible values of the two\nweights, we get a 3D plane of the error (figure 2.27).\n What about more than two weights? Your network will probably have hundreds or\nthousands of weights (because each edge in your network has its own weight value).X YW\nf(x)' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 94}
|
93
|
page_content='Since we humans are only equipped to understand a maximum of 3 dimensions, it is impossible for us to visualize error graphs when we have 10 weights, not to mention hundreds or thousands of weight parameters. So, from this point on, we will study the error function using the 2D or 3D plane of the error. In order to optimize the model, our goal is to search this space to find the best weights that will achieve the lowest possible error.\nWhy do we need an optimization algorithm? Can’t we just brute-force through a lot of weight values until we get the minimum error?\nSuppose we used a brute-force approach where we just tried a lot of different possible weights (say 1,000 values) and found the weight that produced the minimum error. Could that work? Well, theoretically, yes. This approach might work when we\nFigure 2.26 The error function with respect to its weight for a single perceptron is a 2D curve.\nFigure 2.27 Graphing all possible values of two weights gives a 3D error plane.' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 95}
|
94
|
page_content='76 CHAPTER 2 Deep learning and neural networks\nhave very few inputs and only one or two neurons in our network. Let me try to convince you that this approach wouldn’t scale. Let’s take a look at a scenario where we have a very simple neural network. Suppose we want to predict house prices based on only four features (inputs) and one hidden layer of five neurons (see figure 2.28).\nAs you can see, we have 20 edges (weights) from the input to the hidden layer, plus 5 weights from the hidden layer to the output prediction, totaling 25 weight variables that need to be adjusted for optimum values. To brute-force our way through a simple neural network of this size, if we are trying 1,000 different values for each weight, then we will have a total of 10^75 combinations:\n1,000 × 1,000 × . . . × 1,000 = 1,000^25 = 10^75 combinations\nLet’s say we were able to get our hands on the fastest supercomputer in the world: Sunway TaihuLight, which operates at a speed of 93 petaflops ⇒ 93 × 10^15 floating-point\n[Figure 2.28 If we want to predict house prices based on only four features (inputs) and one hidden layer of five neurons, we’ll have 20 edges (weights) from the input to the hidden layer, plus 5 weights from the hidden layer to the output prediction. The inputs x1 to x4 are area (feet^2), bedrooms, distance to city (miles), and age; the output y is the price.]' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 96}
|
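As a quick check of the arithmetic in this passage (an illustrative sketch, not part of the book excerpt), the following Python snippet counts the weights in the 4-input, 5-hidden-neuron network and the number of brute-force combinations when each weight is tried with 1,000 candidate values:

```python
# 4 inputs fully connected to 5 hidden neurons, plus 5 hidden-to-output weights.
n_weights = 4 * 5 + 5              # 25 weight variables
candidates_per_weight = 1_000

combinations = candidates_per_weight ** n_weights
print(n_weights)                   # 25
print(combinations == 10 ** 75)    # True: 1,000^25 = 10^75 combinations
```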
95
|
page_content='77 Optimization algorithms\noperations per second (FLOPs). In the best-case scenario, this supercomputer would need\n10^75 / (93 × 10^15) = 1.08 × 10^58 seconds = 3.42 × 10^50 years\nThat is a huge number: it’s longer than the universe has existed. Who has that kind of time to wait for the network to train? Remember that this is a very simple neural network that usually takes a few minutes to train using smart optimization algorithms. In the real world, you will be building more complex networks that have thousands of inputs and tens of hidden layers, and you will be required to train them in a matter of hours (or days, or sometimes weeks). So we have to come up with a different approach to find the optimal weights.\nHopefully I have convinced you that brute-forcing through the optimization process is not the answer. Now, let’s study the most popular optimization algorithm for neural networks: gradient descent. Gradient descent has several variations: batch gradient descent (BGD), stochastic gradient descent (SGD), and mini-batch GD (MB-GD).\n2.6.2 Batch gradient descent\nThe general definition of a gradient (also known as a derivative) is that it is the function that tells you the slope or rate of change of the line that is tangent to the curve at any given point. It is just a fancy term for the slope or steepness of the curve (figure 2.29).\nGradient descent simply means updating the weights iteratively to descend the slope of the error curve until we get to the point with minimum error. Let’s take a look at the error function that we introduced earlier with respect to the weights. At the initial weight point, we calculate the derivative of the error function to get the slope (direction) of the next step. We keep repeating this process to take steps down the curve until we reach the minimum error (figure 2.30).\n[Figure 2.29 A gradient is the function that describes the rate of change of the line that is tangent to a curve at any given point. The figure shows the slope of the tangent line at points a, b, c, d, e, and f along a curve.]' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 97}
|
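The running-time estimate can be reproduced the same way (again an illustrative sketch; the 93-petaflop figure and the one-operation-per-combination simplification come from the passage):

```python
combinations = 10 ** 75            # brute-force weight combinations from the previous page
flops = 93e15                      # Sunway TaihuLight: 93 petaflops = 93 x 10^15 FLOPs

seconds = combinations / flops
years = seconds / (365 * 24 * 3600)
print(f"{seconds:.2e} seconds")    # ~1.08e+58 seconds
print(f"{years:.2e} years")        # ~3.4e+50 years, matching the book's estimate
```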
96
|
page_content='78 CHAPTER 2 Deep learning and neural networks\nHOW DOES GRADIENT DESCENT WORK?\nTo visualize how gradient descent works, let’s plot the error function in a 3D graph (figure 2.31) and go through the process step by step. The random initial weight (starting weight) is at point A, and our goal is to descend this error mountain to the goal w1 and w2 weight values, which produce the minimum error value. The way we do that is by taking a series of steps down the curve until we get the minimum error. In order to descend the error mountain, we need to determine two things for each step:\n- The step direction (gradient)\n- The step size (learning rate)\n[Figure 2.30 Gradient descent takes incremental steps to descend the error function. The plot shows cost against weight, with the initial weight, the derivative of cost (gradient) at that point, the incremental steps, and the minimum cost.]\n[Figure 2.31 The random initial weight (starting weight) is at point A. We descend the error mountain to the w1 and w2 weight values that produce the minimum error value.]' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 98}
|
97
|
page_content='79 Optimization algorithms\nTHE DIRECTION (GRADIENT)\nSuppose you are standing on the top of the error mountain at point A. To get to the bottom, you need to determine the step direction that results in the deepest descent (has the steepest slope). And what is the slope, again? It is the derivative of the curve. So if you are standing on top of that mountain, you need to look at all the directions around you and find out which direction will result in the deepest descent (1, 2, 3, or 4, for example). Let’s say it is direction 3; we choose that way. This brings us to point B, and we restart the process (calculate feedforward and error) and find the direction of deepest descent, and so forth, until we get to the bottom of the mountain.\nThis process is called gradient descent. By taking the derivative of the error with respect to the weight (dE/dw), we get the direction that we should take. Now there’s one thing left. The gradient only determines the direction. How large should the size of the step be? It could be a 1-foot step or a 100-foot jump. This is what we need to determine next.\nTHE STEP SIZE (LEARNING RATE α)\nThe learning rate is the size of each step the network takes when it descends the error mountain, and it is usually denoted by the Greek letter alpha (α). It is one of the most important hyperparameters that you tune when you train your neural network (more on that later). A larger learning rate means the network will learn faster (since it is descending the mountain with larger steps), and smaller steps mean slower learning. Well, this sounds simple enough. Let’s use large learning rates and complete the neural network training in minutes instead of waiting for hours. Right? Not quite. Let’s take a look at what could happen if we set a very large learning rate value.\nIn figure 2.32, you are starting at point A. When you take a large step in the direction of the arrow, instead of descending the error mountain, you end up at point B, on the other side. Then another large step takes you to C, and so forth. The error will keep oscillating and will never descend. We will talk more later about tuning the learning rate and how to determine if the error is oscillating. But for now, you need to know this: if you use a very small learning rate, the network will eventually descend the\n[Figure 2.32 Setting a very large learning rate causes the error to oscillate and never descend.]' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 99}
|
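The oscillation described above is easy to reproduce numerically. The sketch below (illustrative only, not from the book) runs gradient descent on an assumed bowl-shaped error E(w) = (w - 20)^2 with a small and a deliberately oversized learning rate; the small rate creeps toward the goal weight while the large one overshoots back and forth and the error grows:

```python
def error(w):
    return (w - 20) ** 2          # bowl-shaped error with its minimum at w = 20

def gradient(w):
    return 2 * (w - 20)           # dE/dw

def descend(learning_rate, w=0.0, steps=5):
    trace = []
    for _ in range(steps):
        w = w - learning_rate * gradient(w)
        trace.append(round(error(w), 1))
    return trace

print(descend(learning_rate=0.1))   # error shrinks steadily toward 0
print(descend(learning_rate=1.1))   # error oscillates and grows: every step overshoots
```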
98
|
page_content='80 CHAPTER 2 Deep learning and neural networks\nmountain and will get to the minimum error. But this training will take longer (maybe weeks or months). On the other hand, if you use a very large learning rate, the network might keep oscillating and never train. So we usually initialize the learning rate value to 0.1 or 0.01 and see how the network performs, and then tune it further.\nPUTTING DIRECTION AND STEP TOGETHER\nBy multiplying the direction (derivative) by the step size (learning rate), we get the change of the weight for each step:\nΔw_i = –α (dE/dw_i)\nWe add the minus sign because the derivative always calculates the slope in the upward direction. Since we need to descend the mountain, we go in the opposite direction of the slope:\nw_next-step = w_current + Δw\nCalculus refresher: Calculating the partial derivative\nThe derivative is the study of change. It measures the steepness of a curve at a particular point on a graph.\nIt looks like mathematics has given us just what we are looking for. On the error graph, we want to find the steepness of the curve at the exact weight point. Thank you, math!\n[Figure: the curve f(x) = x^2 with its gradient (tangent line) at x = 2. We want to find the steepness of the curve at the exact weight point.]' metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 100}
|
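Putting the direction and the step size together takes one line per step. This is a minimal sketch of the update rule Δw = -α(dE/dw), assuming the same toy one-weight error curve used above rather than a real network:

```python
learning_rate = 0.1               # alpha: the step size
w = 0.0                           # starting weight

def dE_dw(w):
    return 2 * (w - 20)           # derivative of the toy error E(w) = (w - 20)^2

for step in range(30):
    delta_w = -learning_rate * dE_dw(w)   # direction times step size, with the minus sign
    w = w + delta_w                       # w_next = w_current + delta_w
print(round(w, 3))                # ends up very close to the goal weight w = 20
```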
99
|
page_content="81 Optimization algorithms\nOther terms for derivative are slope and rate of change. If the error function is denoted as E(x), then the derivative of the error function with respect to the weight is denoted as\nd/dw E(x), or just dE/dw\nThis formula shows how much the total error will change when we change the weight. Luckily, mathematicians created some rules for us to calculate the derivative. Since this is not a mathematics book, we will not discuss the proof of the rules. Instead, we will start applying these rules at this point to calculate our gradient. Here are the basic derivative rules:\n- Constant rule: d/dx (c) = 0\n- Constant multiple rule: d/dx [c f(x)] = c f'(x)\n- Power rule: d/dx (x^n) = n x^(n-1)\n- Sum rule: d/dx [f(x) + g(x)] = f'(x) + g'(x)\n- Difference rule: d/dx [f(x) - g(x)] = f'(x) - g'(x)\n- Product rule: d/dx [f(x) g(x)] = f(x) g'(x) + g(x) f'(x)\n- Quotient rule: d/dx [f(x) / g(x)] = [g(x) f'(x) - f(x) g'(x)] / [g(x)]^2\n- Chain rule: d/dx f(g(x)) = f'(g(x)) g'(x)\nLet's take a look at a simple function to apply the derivative rules:\nf(x) = 10x^5 + 4x^7 + 12x\nWe can apply the power, constant, and sum rules to get df/dx, also denoted as f'(x):\nf'(x) = 50x^4 + 28x^6 + 12\nTo get an intuition of what this means, let's plot f(x):\n[Figure: Using a simple function to apply derivative rules. The plot shows f(x) with tangent lines at x = 2 and x = 6; to get the slope at any point, we can compute f'(x) at that point.]" metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 101}
|
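The derivative rules can be sanity-checked numerically. The sketch below (illustrative, not from the book) compares the analytic derivative f'(x) = 50x^4 + 28x^6 + 12 against a central finite-difference approximation of f(x) = 10x^5 + 4x^7 + 12x at the two points highlighted in the figure:

```python
def f(x):
    return 10 * x**5 + 4 * x**7 + 12 * x

def f_prime(x):
    return 50 * x**4 + 28 * x**6 + 12       # power, constant-multiple, and sum rules

def finite_difference(func, x, h=1e-5):
    return (func(x + h) - func(x - h)) / (2 * h)   # central difference approximation

for x in (2, 6):
    print(x, f_prime(x), round(finite_difference(f, x), 1))
# The analytic slope and the numerical estimate agree at both points.
```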
100
|
page_content="82 CHAPTER 2 Deep learning and neural networks\n(continued)\nIf we want to get the slope at any point, we can compute f'(x) at that point. So f'(2) gives us the slope of the line on the left, and f'(6) gives the slope of the second line. Get it?\nFor a last example of derivatives, let's apply the power rule to calculate the derivative of the sigmoid function:\nd/dx σ(x) = d/dx [1 / (1 + e^-x)]\n= d/dx (1 + e^-x)^-1\n= -(1 + e^-x)^-2 (-e^-x)   (power rule)\n= e^-x / (1 + e^-x)^2\n= [1 / (1 + e^-x)] · [e^-x / (1 + e^-x)]\n= σ(x) · (1 - σ(x))\nIf you want to write out the derivative of the sigmoid activation function in code, it will look like this:\ndef sigmoid(x):\n    return 1/(1+np.exp(-x))\ndef sigmoid_derivative(x):\n    return sigmoid(x) * (1 - sigmoid(x))\nNote that you don't need to memorize the derivative rules, nor do you need to calculate the derivatives of the functions yourself. Thanks to the awesome DL community, we have great libraries that will compute these functions for you in just one line of code. But it is valuable to understand how things are happening under the hood.\nPITFALLS OF BATCH GRADIENT DESCENT\nGradient descent is a very powerful algorithm to get to the minimum error. But it has two major pitfalls.\nFirst, not all cost functions look like the simple bowls we saw earlier. There may be holes, ridges, and all sorts of irregular terrain that make reaching the minimum error very difficult. Consider figure 2.33, where the error function is a little more complex and has ups and downs.\n[Figure 2.33 Complex error functions are represented by more complex curves with many local minima values. Our goal is to reach the global minimum value. The curve shows a starting point, a local minimum, and the global minimum.]" metadata={'source': '/content/pdf-books/deep-learning-for-vision-systems-1nbsped-1617296198-9781617296192.pdf', 'page': 102}
|
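The sigmoid code from the excerpt can be exercised directly. This small usage sketch (not part of the book) adds the numpy import the snippet assumes and checks the well-known value σ'(0) = 0.25 against a finite-difference estimate:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    return sigmoid(x) * (1 - sigmoid(x))

print(sigmoid_derivative(0.0))                       # 0.25, the maximum slope of the sigmoid
h = 1e-6
numeric = (sigmoid(0.0 + h) - sigmoid(0.0 - h)) / (2 * h)
print(round(numeric, 6))                             # ~0.25, matching the analytic derivative
```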
Deep Learning Books Dataset
Dataset Information
Features:
- page_no: Integer (int64) - Page number in the book.
- page_content: String - Text content of the page.
Splits:
train: Training split.
- Number of examples: 474
- Number of bytes: 1,030,431
Download Size: 509,839 bytes
Dataset Size: 1,030,431 bytes
Dataset Application
This dataset, "deep_learning_books_dataset", contains text extracted from the pages of books related to deep learning. It can be used for natural language processing (NLP) tasks such as text classification, language modeling, and text generation.
Using Python and Hugging Face's Datasets Library
To use this dataset for NLP text generation and language modeling tasks, you can follow these steps:
- Install the required libraries:
pip install datasets
- Load the dataset:
from datasets import load_dataset
dataset = load_dataset("Falah/deep_learning_books_dataset")
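Once loaded, you can inspect the split and pull individual pages. The snippet below is a usage sketch based on the standard datasets API; the exact page text it prints depends on the dataset contents.

```python
from datasets import load_dataset

# Load the dataset (as shown above) and grab the training split.
dataset = load_dataset("Falah/deep_learning_books_dataset")
train = dataset["train"]

print(train.num_rows)                   # 474 examples
print(train.features)                   # page_no (int64) and page_content (string)
print(train[0]["page_no"])              # page number of the first example
print(train[0]["page_content"][:200])   # first 200 characters of that page's text
```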
Citation
Please use the following citation when referencing this dataset:
@dataset{deep_learning_books_dataset,
author = {Falah.G.Salieh},
title = {Deep Learning Books Dataset},
year = {2023},
publisher = {HuggingFace Hub},
version = {1.0},
location = {Online},
url = {https://huggingface.co/datasets/Falah/deep_learning_books_dataset}
}
Apache License:
The "Deep Learning Books Dataset" is distributed under the Apache License 2.0. The specific licensing and usage terms can be found in the dataset repository or documentation; please review and comply with them before downloading and using the dataset.