https://pyimagesearch.com/2020/03/30/autoencoders-for-content-based-image-retrieval-with-keras-and-tensorflow/
Furthermore, the following reconstruction plot shows that our autoencoder is doing a fantastic job of reconstructing our input digits. Figure 3: Visualizing reconstructed data from an autoencoder trained on MNIST using TensorFlow and Keras for image search engine purposes. The fact that our autoencoder is doing such a good job also implies that our latent-space representation vectors are doing a good job compressing, quantifying, and representing the input image — having such a representation is a requirement when building an image retrieval system. If the feature vectors cannot capture and quantify the contents of the image, then there is no way that the CBIR system will be able to return relevant images. If you find that your autoencoder is failing to properly reconstruct your images, then it’s unlikely your autoencoder will perform well for image retrieval. Take the proper care to train an accurate autoencoder — doing so will help ensure your image retrieval system returns similar images. Implementing image indexer using the trained autoencoder With our autoencoder successfully trained (Phase #1), we can move on to the feature extraction/indexing phase of the image retrieval pipeline (Phase #2). This phase, at a bare minimum, requires us to use our trained autoencoder (specifically the “encoder” portion) to accept an input image, perform a forward pass, and then take the output of the encoder portion of the network to generate our index of feature vectors. These feature vectors are meant to quantify the contents of each image. Optionally, we may also use specialized data structures such as VP-Trees and Random Projection Trees to improve the query speed of our image retrieval system.
Open up the index_images.py file in your directory structure and we’ll get started: # import the necessary packages from tensorflow.keras.models import Model from tensorflow.keras.models import load_model from tensorflow.keras.datasets import mnist import numpy as np import argparse import pickle # construct the argument parse and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-m", "--model", type=str, required=True, help="path to trained autoencoder") ap.add_argument("-i", "--index", type=str, required=True, help="path to output features index file") args = vars(ap.parse_args()) We begin with imports. Our tf.keras imports include (1) Model so we can construct our encoder, (2) load_model so we can load our autoencoder model we trained in the previous step, and (3) our mnist dataset. Our feature vector index will be serialized as a Python pickle file. We have two required command line arguments: --model: The trained autoencoder input path from the previous step --index: The path to the output features index file in .pickle format From here, we’ll load and preprocess our MNIST digit data: # load the MNIST dataset print("[INFO] loading MNIST training split...") ((trainX, _), (testX, _)) = mnist.load_data() # add a channel dimension to every image in the training split, then # scale the pixel intensities to the range [0, 1] trainX = np.expand_dims(trainX, axis=-1) trainX = trainX.astype("float32") / 255.0 Notice that the preprocessing steps are identical to that of our training procedure. We’ll then load our autoencoder: # load our autoencoder from disk print("[INFO] loading autoencoder model...") autoencoder = load_model(args["model"]) # create the encoder model which consists of *just* the encoder # portion of the autoencoder encoder = Model(inputs=autoencoder.input, outputs=autoencoder.get_layer("encoded").output) # quantify the contents of our input images using the encoder print("[INFO] encoding images...") features = encoder.predict(trainX) Line 28 loads our autoencoder (trained in the previous step) from disk. Then, using the autoencoder’s input, we create a Model while only accessing the encoder portion of the network (i.e., the latent-space feature vector) as the output (Lines 32 and 33). We then pass the MNIST digit image data through the encoder to compute our feature vectors (features) on Line 37. Finally, we construct a dictionary map of our feature data: # construct a dictionary that maps the index of the MNIST training # image to its corresponding latent-space representation indexes = list(range(0, trainX.shape[0])) data = {"indexes": indexes, "features": features} # write the data dictionary to disk print("[INFO] saving index...") f = open(args["index"], "wb") f.write(pickle.dumps(data)) f.close() Line 42 builds a data dictionary consisting of two components: indexes: Integer indices of each MNIST digit image in the dataset features: The corresponding feature vector for each image in the dataset To close out, Lines 46-48 serialize the data to disk in Python’s pickle format. Indexing our image dataset for image retrieval We are now ready to quantify our image dataset using the autoencoder, specifically using the latent-space output of the encoder portion of the network.
To quantify our image dataset using the trained autoencoder, make sure you use the “Downloads” section of this tutorial to download the source code and pre-trained model. From there, open up a terminal and execute the following command: $ python index_images.py --model output/autoencoder.h5 \ --index output/index.pickle [INFO] loading MNIST training split... [INFO] loading autoencoder model... [INFO] encoding images... [INFO] saving index... If you check the contents of your output directory, you should now see your index.pickle file: $ ls output/*.pickle output/index.pickle Implementing the image search and retrieval script using Keras and TensorFlow Our final script, our image searcher, puts all the pieces together and allows us to complete our autoencoder image retrieval project (Phase #3). Again, we’ll be using Keras and TensorFlow for this implementation. Open up the search.py script, and insert the following contents: # import the necessary packages from tensorflow.keras.models import Model from tensorflow.keras.models import load_model from tensorflow.keras.datasets import mnist from imutils import build_montages import numpy as np import argparse import pickle import cv2 As you can see, this script needs the same tf.keras imports as our indexer. Additionally, we’ll use my build_montages convenience script in my imutils package to display our autoencoder CBIR results. Let’s define a function to compute the similarity between two feature vectors: def euclidean(a, b): # compute and return the euclidean distance between two vectors return np.linalg.norm(a - b) Here we’re using the Euclidean distance to calculate the similarity between two feature vectors, a and b. There are multiple ways to compute distances — the cosine distance can be a good alternative for many CBIR applications. I also cover other distance algorithms inside the PyImageSearch Gurus course. Next, we’ll define our searching function: def perform_search(queryFeatures, index, maxResults=64): # initialize our list of results results = [] # loop over our index for i in range(0, len(index["features"])): # compute the euclidean distance between our query features # and the features for the current image in our index, then # update our results list with a 2-tuple consisting of the # computed distance and the index of the image d = euclidean(queryFeatures, index["features"][i]) results.append((d, i)) # sort the results and grab the top ones results = sorted(results)[:maxResults] # return the list of results return results Our perform_search function is responsible for comparing all feature vectors for similarity and returning the results. This function accepts both the queryFeatures, a feature vector for the query image, and the index of all features to search through. Our results will contain the top maxResults (in our case 64 is the default but we will soon override it to 225).
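As an aside, if you would like to experiment with the cosine distance mentioned above, a minimal sketch of a drop-in replacement for the euclidean helper might look like the following (this snippet is not part of the tutorial's downloadable code, and it assumes the feature vectors are 1-D NumPy arrays, which is what the encoder produces in this tutorial):

import numpy as np

def cosine(a, b):
    # compute the cosine similarity between the two vectors, then
    # convert it to a distance so that smaller values still mean
    # "more similar" and perform_search can keep sorting ascending
    similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - similarity

You could then call cosine instead of euclidean inside perform_search without changing anything else in the script.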
Line 17 initializes our list of results, which the loop beginning on Line 20 then populates. Here, we loop over all entries in our index, computing the Euclidean distance between our queryFeatures and the current feature vector in the index. When it comes to the distance: The smaller the distance, the more similar the two images are The larger the distance, the less similar they are We sort and grab the top results such that images that are more similar to the query are at the front of the list via Line 29. Finally, we return the search results to the calling function (Line 32). With both our distance metric and searching utility defined, we’re now ready to parse command line arguments: # construct the argument parse and parse the arguments ap = argparse.ArgumentParser() ap.add_argument("-m", "--model", type=str, required=True, help="path to trained autoencoder") ap.add_argument("-i", "--index", type=str, required=True, help="path to features index file") ap.add_argument("-s", "--sample", type=int, default=10, help="# of testing queries to perform") args = vars(ap.parse_args()) Our script accepts three command line arguments: --model: The path to the trained autoencoder from the “Training the autoencoder” section --index: Our index of features to search through (i.e., the serialized index from the “Indexing our image dataset for image retrieval” section) --sample: The number of testing queries to perform with a default of 10 Now, let’s load and preprocess our digit data: # load the MNIST dataset print("[INFO] loading MNIST dataset...") ((trainX, _), (testX, _)) = mnist.load_data() # add a channel dimension to every image in the dataset, then scale # the pixel intensities to the range [0, 1] trainX = np.expand_dims(trainX, axis=-1) testX = np.expand_dims(testX, axis=-1) trainX = trainX.astype("float32") / 255.0 testX = testX.astype("float32") / 255.0 And then we’ll load our autoencoder and index: # load the autoencoder model and index from disk print("[INFO] loading autoencoder and index...") autoencoder = load_model(args["model"]) index = pickle.loads(open(args["index"], "rb").read()) # create the encoder model which consists of *just* the encoder # portion of the autoencoder encoder = Model(inputs=autoencoder.input, outputs=autoencoder.get_layer("encoded").output) # quantify the contents of our input testing images using the encoder print("[INFO] encoding testing images...") features = encoder.predict(testX) Here, Line 57 loads our trained autoencoder from disk, while Line 58 loads our pickled index from disk. We then build a Model that will accept our images as an input and the output of our encoder layer (i.e., feature vector) as our model’s output (Lines 62 and 63). Given our encoder, Line 67 performs a forward-pass of our set of testing images through the network, generating a list of features to quantify them.
We’ll now take a random sample of images, marking them as queries: # randomly sample a set of testing query image indexes queryIdxs = list(range(0, testX.shape[0])) queryIdxs = np.random.choice(queryIdxs, size=args["sample"], replace=False) # loop over the testing indexes for i in queryIdxs: # take the features for the current image, find all similar # images in our dataset, and then initialize our list of result # images queryFeatures = features[i] results = perform_search(queryFeatures, index, maxResults=225) images = [] # loop over the results for (d, j) in results: # grab the result image, convert it back to the range # [0, 255], and then update the images list image = (trainX[j] * 255).astype("uint8") image = np.dstack([image] * 3) images.append(image) # display the query image query = (testX[i] * 255).astype("uint8") cv2.imshow("Query", query) # build a montage from the results and display it montage = build_montages(images, (28, 28), (15, 15))[0] cv2.imshow("Results", montage) cv2.waitKey(0) Lines 70-72 sample a set of testing image indices, marking them as our search engine queries. We then loop over the queries beginning on Line 75.
Inside, we: Grab the queryFeatures, and perform the search (Lines 79 and 80) Initialize a list to hold our result images (Line 81) Loop over the results, scaling the image back to the range [0, 255], creating an RGB representation from the grayscale image for display, and then adding it to our images results (Lines 84-89) Display the query image in its own OpenCV window (Lines 92 and 93) Display a montage of search engine results (Lines 96 and 97) When the user presses a key, we repeat the process (Line 98) with a different query image; you should continue to press a key as you inspect results until all of our query samples have been searched. To recap our searching script, first we loaded our autoencoder and index. We then grabbed the encoder portion of the autoencoder and used it to quantify our images (i.e., create feature vectors). From there, we created a sample of random query images to test our searching method which is based on the Euclidean distance computation. Smaller distances indicate similar images — the similar images will be shown first because our results are sorted (Line 29). We searched our index for each query, showing only a maximum of maxResults in each montage. In the next section, we’ll get the chance to visually validate how our autoencoder-based search engine works. Image retrieval results using autoencoders, Keras, and TensorFlow We are now ready to see our autoencoder image retrieval system in action! Start by making sure you have: Used the “Downloads” section of this tutorial to download the source code Executed the train_autoencoder.py file to train the convolutional autoencoder Run the index_images.py script to quantify each image in our dataset From there, you can execute the search.py script to perform a search: $ python search.py --model output/autoencoder.h5 \ --index output/index.pickle [INFO] loading MNIST dataset... [INFO] loading autoencoder and index... [INFO] encoding testing images... Below is an example providing a query image containing the digit 9 (top) along with the search results from our autoencoder image retrieval system (bottom): Figure 4: Top: MNIST query image. Bottom: Autoencoder-based image search engine results. We learn how to use Keras, TensorFlow, and OpenCV to build a Content-based Image Retrieval (CBIR) system.
Here, you can see that our system has returned search results also containing nines. Let’s now use a 2 as our query image: Figure 5: Content-based Image Retrieval (CBIR) is used with an autoencoder to find images of handwritten 2s in our dataset. Sure enough, our CBIR system returns digits containing twos, implying that our latent-space representation has correctly quantified what a 2 looks like. Here’s an example of using a 4 as a query image: Figure 6: Content-based Image Retrieval (CBIR) is used with an autoencoder to find images of handwritten 4s in our dataset. Again, our autoencoder image retrieval system returns all fours as the search results. Let’s look at one final example, this time using a 0 as a query image: Figure 7: No image search engine is perfect. Here, there are mistakes in our results from searching MNIST for handwritten 0s using an autoencoder-based image search engine built with TensorFlow, Keras, and OpenCV. This result is more interesting — note the two highlighted results in the screenshot. The first highlighted result is likely a 5, but the tail of the five seems to connect to the middle part, creating a digit that looks like a cross between a 0 and an 8. We then have what I think is an 8 near the bottom of the search results (also highlighted in red).
Again, we can appreciate how our image retrieval system may see that 8 as visually similar to a 0. Tips to improve autoencoder image retrieval accuracy and speed In this tutorial, we performed image retrieval on the MNIST dataset to demonstrate how autoencoders can be used to build image search engines. However, you will more than likely want to use your own image dataset rather than the MNIST dataset. Swapping in your own dataset is as simple as replacing the MNIST dataset loader helper function with your own dataset loader — you can then train an autoencoder on your dataset. However, make sure your autoencoder accuracy is sufficient. If your autoencoder cannot reasonably reconstruct your input data, then: The autoencoder is failing to capture the patterns in your dataset The latent-space vector will not properly quantify your images And without proper quantification, your image retrieval system will return irrelevant results Therefore, nearly the entire accuracy of your CBIR system hinges on your autoencoder — take the time to ensure it is properly trained. Once your autoencoder is performing well, you can then move on to optimizing the speed of your search procedure. You should also consider the scalability of your CBIR system. Our implementation here is an example of a linear search with O(N) complexity, meaning that it will not scale well. To improve the speed of the retrieval system, you should use Approximate Nearest Neighbor algorithms and specialized data structures such as VP-Trees, Random Projection Trees, etc., which can reduce the computational complexity to O(log N). To learn more about these techniques, refer to my article on Building an Image Hashing Search Engine with VP-Trees and OpenCV.
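As a quick illustration of what such an index buys you, here is a minimal sketch using SciPy's cKDTree — a KD-tree rather than the VP-Tree covered in that article, but the core idea is the same: build a spatial index once, then answer each query with a fast tree traversal instead of an O(N) scan. This snippet is not part of the tutorial's code; it assumes the index.pickle file produced by index_images.py and that SciPy is installed:

import pickle
import numpy as np
from scipy.spatial import cKDTree

# load the index built by index_images.py ({"indexes": ..., "features": ...})
index = pickle.loads(open("output/index.pickle", "rb").read())
features = np.array(index["features"])

# build the spatial index once up front...
tree = cKDTree(features)

# ...then each query becomes a fast tree traversal rather than a full scan
queryFeatures = features[0]
(distances, neighbors) = tree.query(queryFeatures, k=64)
print(neighbors[:10])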
What's next? We recommend PyImageSearch University. Course information: 84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024 ★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled I strongly believe that if you had the right teacher you could master computer vision and deep learning. Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science? That’s not the case. All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms.
And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught. If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery. Inside PyImageSearch University you'll find: ✓ 84 courses on essential computer vision, deep learning, and OpenCV topics ✓ 84 Certificates of Completion ✓ 114+ hours of on-demand video ✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques ✓ Pre-configured Jupyter Notebooks in Google Colab ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!) ✓ Access to centralized code repos for all 536+ tutorials on PyImageSearch ✓ Easy one-click downloads for code, datasets, pre-trained models, etc. ✓ Access on mobile, laptop, desktop, etc. Click here to join PyImageSearch University Summary In this tutorial, you learned how to use convolutional autoencoders for image retrieval using TensorFlow and Keras. To create our image retrieval system, we: Trained a convolutional autoencoder on our image dataset Used the trained autoencoder to compute the latent-space representation of each image in our dataset — this representation serves as our feature vector that quantifies the contents of the image Compared the feature vector from our query image to all feature vectors in our dataset using a distance function (in this case, the Euclidean distance, but cosine distance would also work well here).
The smaller the distance between the vectors, the more similar our images were. We then sorted our results based on the computed distance and displayed our results to the user. Autoencoders can be extremely useful for CBIR applications — the downside is that they require a lot of training data, which you may or may not have. More advanced deep learning image retrieval systems rely on siamese networks and triplet loss to embed vectors for images such that more similar images lie closer together in a Euclidean space, while less similar images are farther away — I’ll be covering these types of network architectures and techniques at a future date. To download the source code to this post (including the pre-trained autoencoder), just enter your email address in the form below!
https://pyimagesearch.com/2020/04/06/blur-and-anonymize-faces-with-opencv-and-python/
Click here to download the source code to this post In this tutorial, you will learn how to blur and anonymize faces using OpenCV and Python. Today’s blog post is inspired by an email I received last week from PyImageSearch reader, Li Wei: Hi Adrian, I’m working on a research project for my university. I’m in charge of creating the dataset but my professor has asked me to “anonymize” each image by detecting faces and then blurring them to ensure privacy is protected and that no face can be recognized (apparently this is a requirement at my institution before we publicly distribute the dataset). Do you have any tutorials on face anonymization? How can I blur faces using OpenCV? Thanks, Li Wei Li asks a great question — we often utilize face detection in our projects, typically as the first step in a face recognition pipeline. But what if we wanted to do the “opposite” of face recognition? What if we instead wanted to anonymize the face by blurring it, thereby making it impossible to identify the face? Practical applications of face blurring and anonymization include: Privacy and identity protection in public/private areas Protecting children online (i.e., blur faces of minors in uploaded photos) Photo journalism and news reporting (e.g., blur faces of people who did not sign a waiver form) Dataset curation and distribution (e.g., anonymize individuals in dataset) … and more! To learn how to blur and anonymize faces with OpenCV and Python, just keep reading!
Looking for the source code to this post? Jump Right To The Downloads Section Blur and anonymize faces with OpenCV and Python In the first part of this tutorial, we’ll briefly discuss what face blurring is and how we can use OpenCV to anonymize faces in images and video streams. From there, we’ll discuss the four-step method to blur faces with OpenCV and Python. We’ll then review our project structure and implement two methods for face blurring with OpenCV: Using a Gaussian blur to anonymize faces in images and video streams Applying a “pixelated blur” effect to anonymize faces in images and video Given our two implementations, we’ll create Python driver scripts to apply these face blurring methods to both images and video. We’ll then review the results of our face blurring and anonymization methods. What is face blurring, and how can it be used for face anonymization? Figure 1: In this tutorial, we will learn how to blur faces with OpenCV and Python, similar to the face in this example (image source). Face blurring is a computer vision method used to anonymize faces in images and video. An example of face blurring and anonymization can be seen in Figure 1 above — notice how the face is blurred, and the identity of the person is indiscernible. We use face blurring to help protect the identity of a person in an image.
4 Steps to perform face blurring and anonymization Figure 2: Face blurring with OpenCV and Python can be broken down into four steps. Applying face blurring with OpenCV and computer vision is a four-step process. Step #1 is to perform face detection. Figure 3: The first step for face blurring with OpenCV and Python is to detect all faces in an image/video (image source). Any face detector can be used here, provided that it can produce the bounding box coordinates of a face in an image or video stream. Typical face detectors that you may use include Haar cascades, HOG + Linear SVM, and deep learning-based face detectors. You can refer to this face detection guide for more information on how to detect faces in an image. Once you have detected a face, Step #2 is to extract the Region of Interest (ROI): Figure 4: The second step for blurring faces with Python and OpenCV is to extract the face region of interest (ROI). Your face detector will give you the bounding box (x, y)-coordinates of a face in an image. These coordinates typically represent: The starting x-coordinate of the face bounding box The ending x-coordinate of the face The starting y-coordinate of the face location The ending y-coordinate of the face You can then use this information to extract the face ROI itself, as shown in Figure 4 above.
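To make Step #2 concrete, the ROI extraction itself is just NumPy array slicing once you have those coordinates. The snippet below is only an illustration — the coordinates are made up, and in the actual scripts later in this tutorial they come from the face detector:

import cv2

# load an example image and pretend a face detector returned this box
image = cv2.imread("examples/adrian.jpg")
(startX, startY, endX, endY) = (120, 80, 280, 260)  # hypothetical coordinates

# the face ROI is simply a slice of the image array
face = image[startY:endY, startX:endX]
print(face.shape)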
Given the face ROI, Step #3 is to actually blur/anonymize the face: Figure 5: The third step for our face blurring method using OpenCV is to apply your blurring algorithm. In this tutorial, we learn two such blurring algorithms — Gaussian blur and pixelation. Typically, you’ll apply a Gaussian blur to anonymize the face. You may also apply methods to pixelate the face if you find the end result more aesthetically pleasing. Exactly how you “blur” the image is up to you — the important part is that the face is anonymized. With the face blurred and anonymized, Step #4 is to store the blurred face back in the original image: Figure 6: The fourth and final step for face blurring with Python and OpenCV is to replace the original face ROI with the blurred face ROI. Using the original (x, y)-coordinates from the face detection (i.e., Step #2), we can take the blurred/anonymized face and then store it back in the original image (if you’re utilizing OpenCV and Python, this step is performed using NumPy array slicing). The face in the original image has been blurred and anonymized — at this point the face anonymization pipeline is complete. Let’s see how we can implement face blurring and anonymization with OpenCV in the remainder of this tutorial. How to install OpenCV for face blurring To follow my face blurring tutorial, you will need OpenCV installed on your system.
I recommend installing OpenCV 4 using one of my tutorials: pip install opencv — the easiest and fastest method How to install OpenCV 4 on Ubuntu Install OpenCV 4 on macOS I recommend the pip installation method for 99% of readers — it’s also how I typically install OpenCV for quick projects like face blurring. If you think you might need the full install of OpenCV with patented algorithms, you should consider either the second or third bullet depending on your operating system. Both of these guides require compiling from source, which takes considerably longer as well, but can (1) give you the full OpenCV install and (2) allow you to optimize OpenCV for your operating system and system architecture. Once you have OpenCV installed, you can move on with the rest of the tutorial. Note: I don’t support the Windows OS here at PyImageSearch. See my FAQ page. Project structure Go ahead and use the “Downloads” section of this tutorial to download the source code, example images, and pre-trained face detector model. From there, let’s inspect the contents: $ tree --dirsfirst . ├── examples │   ├── adrian.jpg │   ├── chris_evans.png │   ├── robert_downey_jr.png │   ├── scarlett_johansson.png │   └── tom_king.jpg ├── face_detector │   ├── deploy.prototxt │   └── res10_300x300_ssd_iter_140000.caffemodel ├── pyimagesearch │   ├── __init__.py │   └── face_blurring.py ├── blur_face.py └── blur_face_video.py 3 directories, 11 files The first step of face blurring is to perform face detection to localize faces in an image/frame. We’ll use a deep learning-based Caffe model as shown in the face_detector/ directory.
Our two Python driver scripts, blur_face.py and blur_face_video.py, first detect faces and then perform face blurring in images and video streams. We will step through both scripts so that you can adapt them for your own projects. First, we’ll review face blurring helper functions inside the face_blurring.py file. Blurring faces with a Gaussian blur and OpenCV Figure 7: Gaussian face blurring with OpenCV and Python (image source). We’ll be implementing two helper functions to aid us in face blurring and anonymity: anonymize_face_simple: Performs a simple Gaussian blur on the face ROI (such as in Figure 7 above) anonymize_face_pixelate: Creates a pixelated blur-like effect (which we’ll cover in the next section) Let’s take a look at the implementation of anonymize_face_simple — open up the face_blurring.py file in the pyimagesearch module, and insert the following code: # import the necessary packages import numpy as np import cv2 def anonymize_face_simple(image, factor=3.0): # automatically determine the size of the blurring kernel based # on the spatial dimensions of the input image (h, w) = image.shape[:2] kW = int(w / factor) kH = int(h / factor) # ensure the width of the kernel is odd if kW % 2 == 0: kW -= 1 # ensure the height of the kernel is odd if kH % 2 == 0: kH -= 1 # apply a Gaussian blur to the input image using our computed # kernel size return cv2.GaussianBlur(image, (kW, kH), 0) Our face blurring utilities require NumPy and OpenCV imports as shown on Lines 2 and 3. Beginning on Line 5, we define our anonymize_face_simple function, which accepts an input face image and blurring kernel scale factor. Lines 8-18 derive the blurring kernel’s width and height as a function of the input image dimensions: The larger the kernel size, the more blurred the output face will be The smaller the kernel size, the less blurred the output face will be Increasing the factor will therefore increase the amount of blur applied to the face. When applying a blur, our kernel dimensions must be odd integers such that the kernel can be placed at a central (x, y)-coordinate of the input image (see my tutorial on convolutions with OpenCV for more information on why kernels must be odd integers). Once we have our kernel dimensions, kW and kH, Line 22 applies a Gaussian blur kernel to the face image and returns the blurred face to the calling function. In the next section, we’ll cover an alternative anonymity method: pixelated blurring.
Creating a pixelated face blur with OpenCV Figure 8: Creating a pixelated face effect on an image with OpenCV and Python (image source). The second method we’ll be implementing for face blurring and anonymization creates a pixelated blur-like effect — an example of such a method can be seen in Figure 8. Notice how we have pixelated the image and made the identity of the person indiscernible. This pixelated type of face blurring is typically what most people think of when they hear “face blurring” — it’s the same type of face blurring you’ll see on the evening news, mainly because it’s a bit more “aesthetically pleasing” to the eye than a Gaussian blur (which is indeed a bit “jarring”). Let’s learn how to implement this pixelated face blurring method with OpenCV — open up the face_blurring.py file (the same file we used in the previous section), and append the following code: def anonymize_face_pixelate(image, blocks=3): # divide the input image into NxN blocks (h, w) = image.shape[:2] xSteps = np.linspace(0, w, blocks + 1, dtype="int") ySteps = np.linspace(0, h, blocks + 1, dtype="int") # loop over the blocks in both the x and y direction for i in range(1, len(ySteps)): for j in range(1, len(xSteps)): # compute the starting and ending (x, y)-coordinates # for the current block startX = xSteps[j - 1] startY = ySteps[i - 1] endX = xSteps[j] endY = ySteps[i] # extract the ROI using NumPy array slicing, compute the # mean of the ROI, and then draw a rectangle with the # mean RGB values over the ROI in the original image roi = image[startY:endY, startX:endX] (B, G, R) = [int(x) for x in cv2.mean(roi)[:3]] cv2.rectangle(image, (startX, startY), (endX, endY), (B, G, R), -1) # return the pixelated blurred image return image Beginning on Line 24, we define our anonymize_face_pixelate function and parameters. This function accepts a face image and the number of pixel blocks. Lines 26-28 grab our face image dimensions and divide it into NxN blocks. From there, we proceed to loop over the blocks in both the x and y directions (Lines 31 and 32). In order to compute the starting and ending bounding coordinates for the current block, we use our step indices, i and j (Lines 35-38). Subsequently, we extract the current block ROI and compute the mean RGB pixel intensities for the ROI (Lines 43 and 44).
We then annotate a rectangle on the block using the computed mean RGB values, thereby creating the “pixelated”-like effect (Lines 45 and 46). Note: To learn more about OpenCV drawing functions, be sure to spend some time on my OpenCV Tutorial. Finally, Line 49 returns our pixelated face image to the caller. Implementing face blurring in images with OpenCV Now that we have our two face blurring methods implemented, let’s learn how we can apply them to blur a face in an image using OpenCV and Python. Open up the blur_face.py file in your project structure, and insert the following code: # import the necessary packages from pyimagesearch.face_blurring import anonymize_face_pixelate from pyimagesearch.face_blurring import anonymize_face_simple import numpy as np import argparse import cv2 import os # construct the argument parse and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-i", "--image", required=True, help="path to input image") ap.add_argument("-f", "--face", required=True, help="path to face detector model directory") ap.add_argument("-m", "--method", type=str, default="simple", choices=["simple", "pixelated"], help="face blurring/anonymizing method") ap.add_argument("-b", "--blocks", type=int, default=20, help="# of blocks for the pixelated blurring method") ap.add_argument("-c", "--confidence", type=float, default=0.5, help="minimum probability to filter weak detections") args = vars(ap.parse_args()) Our most notable imports are both our face pixelation and face blurring functions from the previous two sections (Lines 2 and 3). Our script accepts five command line arguments, the first two of which are required: --image: The path to your input image containing faces --face: The path to your face detector model directory --method: Either the simple blurring or pixelated methods can be chosen with this flag. The simple method is the default --blocks: For pixelated face anonymity, you must provide the number of blocks you want to use, or you can keep the default of 20 --confidence: The minimum probability to filter weak face detections is set to 50% by default Given our command line arguments, we’re now ready to perform face detection: # load our serialized face detector model from disk print("[INFO] loading face detector model...") prototxtPath = os.path.sep.join([args["face"], "deploy.prototxt"]) weightsPath = os.path.sep.join([args["face"], "res10_300x300_ssd_iter_140000.caffemodel"]) net = cv2.dnn.readNet(prototxtPath, weightsPath) # load the input image from disk, clone it, and grab the image spatial # dimensions image = cv2.imread(args["image"]) orig = image.copy() (h, w) = image.shape[:2] # construct a blob from the image blob = cv2.dnn.blobFromImage(image, 1.0, (300, 300), (104.0, 177.0, 123.0)) # pass the blob through the network and obtain the face detections print("[INFO] computing face detections...") net.setInput(blob) detections = net.forward() First, we load the Caffe-based face detector model (Lines 26-29). We then load and preprocess our input --image, generating a blob for inference (Lines 33-39). Read my How OpenCV’s blobFromImage works tutorial to learn the “why” and “how” behind the function call on Lines 38 and 39.
Deep learning face detection inference (Step #1) takes place on Lines 43 and 44. Next, we’ll begin looping over the detections: # loop over the detections for i in range(0, detections.shape[2]): # extract the confidence (i.e., probability) associated with the # detection confidence = detections[0, 0, i, 2] # filter out weak detections by ensuring the confidence is greater # than the minimum confidence if confidence > args["confidence"]: # compute the (x, y)-coordinates of the bounding box for the # object box = detections[0, 0, i, 3:7] * np.array([w, h, w, h]) (startX, startY, endX, endY) = box.astype("int") # extract the face ROI face = image[startY:endY, startX:endX] Here, we loop over detections and check the confidence, ensuring it meets the minimum threshold (Lines 47-54). Assuming so, we then extract the face ROI (Step #2) via Lines 57-61. We’ll then anonymize the face (Step #3): # check to see if we are applying the "simple" face blurring # method if args["method"] == "simple": face = anonymize_face_simple(face, factor=3.0) # otherwise, we must be applying the "pixelated" face # anonymization method else: face = anonymize_face_pixelate(face, blocks=args["blocks"]) # store the blurred face in the output image image[startY:endY, startX:endX] = face Depending on the --method, we’ll perform simple blurring or pixelation to anonymize the face (Lines 65-72). Step #4 entails overwriting the original face ROI in the image with our anonymized face ROI (Line 75). Steps #2-#4 are then repeated for all faces in the input --image until we’re ready to display the result: # display the original image and the output image with the blurred # face(s) side by side output = np.hstack([orig, image]) cv2.imshow("Output", output) cv2.waitKey(0) To wrap up, the original and altered images are displayed side by side until a key is pressed (Lines 79-81). Face blurring and anonymizing in images results Let’s now put our face blurring and anonymization methods to work. Go ahead and use the “Downloads” section of this tutorial to download the source code, example images, and pre-trained OpenCV face detector. From there, open up a terminal, and execute the following command: $ python blur_face.py --image examples/adrian.jpg --face face_detector [INFO] loading face detector model... [INFO] computing face detections... Figure 9: Left: A photograph of me. Right: My face has been blurred with OpenCV and Python using a Gaussian approach.
On the left, you can see the original input image (i.e., me), while the right shows that my face has been blurred using the Gaussian blurring method — without seeing the original image, you would have no idea it was me (other than the tattoos, I suppose). Let’s try another image, this time applying the pixelated blurring technique: $ python blur_face.py --image examples/tom_king.jpg --face face_detector --method pixelated [INFO] loading face detector model... [INFO] computing face detections... Figure 10: Tom King’s face has been pixelated with OpenCV and Python; you can adjust the block settings until you’re comfortable with the level of anonymity. (image source) On the left, we have the original input image of Tom King, one of my favorite comic writers. Then, on the right, we have the output of the pixelated blurring method — without seeing the original image, you would have no idea whose face was in the image. Implementing face blurring in real-time video with OpenCV Our previous example only handled blurring and anonymizing faces in images — but what if we wanted to apply face blurring and anonymization to real-time video streams? Is that possible? You bet it is! Open up the blur_face_video.py file in your project structure, and let’s learn how to blur faces in real-time video with OpenCV: # import the necessary packages from pyimagesearch.face_blurring import anonymize_face_pixelate from pyimagesearch.face_blurring import anonymize_face_simple from imutils.video import VideoStream import numpy as np import argparse import imutils import time import cv2 import os # construct the argument parse and parse the arguments ap = argparse.ArgumentParser() ap.add_argument("-f", "--face", required=True, help="path to face detector model directory") ap.add_argument("-m", "--method", type=str, default="simple", choices=["simple", "pixelated"], help="face blurring/anonymizing method") ap.add_argument("-b", "--blocks", type=int, default=20, help="# of blocks for the pixelated blurring method") ap.add_argument("-c", "--confidence", type=float, default=0.5, help="minimum probability to filter weak detections") args = vars(ap.parse_args()) We begin with our imports on Lines 2-10. For face blurring in real-time video, we’ll use the VideoStream API in my imutils package (Line 4).
Our command line arguments are the same as previously (Lines 13-23). We’ll then load our face detector and initialize our video stream: # load our serialized face detector model from disk print("[INFO] loading face detector model...") prototxtPath = os.path.sep.join([args["face"], "deploy.prototxt"]) weightsPath = os.path.sep.join([args["face"], "res10_300x300_ssd_iter_140000.caffemodel"]) net = cv2.dnn.readNet(prototxtPath, weightsPath) # initialize the video stream and allow the camera sensor to warm up print("[INFO] starting video stream...") vs = VideoStream(src=0).start() time.sleep(2.0) Our video stream accesses our computer’s webcam (Line 34). We’ll then proceed to loop over frames in the stream and perform Step #1 — face detection: # loop over the frames from the video stream while True: # grab the frame from the threaded video stream and resize it # to have a maximum width of 400 pixels frame = vs.read() frame = imutils.resize(frame, width=400) # grab the dimensions of the frame and then construct a blob # from it (h, w) = frame.shape[:2] blob = cv2.dnn.blobFromImage(frame, 1.0, (300, 300), (104.0, 177.0, 123.0)) # pass the blob through the network and obtain the face detections net.setInput(blob) detections = net.forward() Once faces are detected, we’ll ensure they meet the minimum confidence threshold: # loop over the detections for i in range(0, detections.shape[2]): # extract the confidence (i.e., probability) associated with # the detection confidence = detections[0, 0, i, 2] # filter out weak detections by ensuring the confidence is # greater than the minimum confidence if confidence > args["confidence"]: # compute the (x, y)-coordinates of the bounding box for # the object box = detections[0, 0, i, 3:7] * np.array([w, h, w, h]) (startX, startY, endX, endY) = box.astype("int") # extract the face ROI face = frame[startY:endY, startX:endX] # check to see if we are applying the "simple" face # blurring method if args["method"] == "simple": face = anonymize_face_simple(face, factor=3.0) # otherwise, we must be applying the "pixelated" face # anonymization method else: face = anonymize_face_pixelate(face, blocks=args["blocks"]) # store the blurred face in the output image frame[startY:endY, startX:endX] = face Looping over high confidence detections, we extract the face ROI (Step #2) on Lines 55-69. To accomplish Step #3, we apply our chosen anonymity --method via Lines 73-80. And finally, for Step #4, we replace the anonymous face in our camera’s frame (Line 83). To close out our face blurring loop, we display the frame (with blurred out faces) on the screen: # show the output frame cv2.imshow("Frame", frame) key = cv2.waitKey(1) & 0xFF # if the `q` key was pressed, break from the loop if key == ord("q"): break # do a bit of cleanup cv2.destroyAllWindows() vs.stop() If the q key is pressed, we break out of the face blurring loop and perform cleanup. Great job — in the next section, we’ll analyze results! Real-time face blurring OpenCV results We are now ready to apply face blurring with OpenCV to real-time video streams. Start by using the “Downloads” section of this tutorial to download the source code and pre-trained OpenCV face detector. You can then launch the blur_face_video.py using the following command: $ python blur_face_video.py --face face_detector --method simple [INFO] loading face detector model... [INFO] starting video stream... Notice how my face is blurred in the video stream using the Gaussian blurring method.
We can apply the pixelated face blurring method by supplying the --method pixelated flag: $ python blur_face_video.py --face face_detector --method pixelated [INFO] loading face detector model... [INFO] starting video stream... Again, my face is anonymized/blurred using OpenCV, but using the more “aesthetically pleasing” pixelated method. Handling missed face detections and “detection flickering” The face blurring method we’re applying here assumes that a face can be detected in each and every frame of our input video stream. But what happens if our face detector misses a detection, such as in the video at the top of this section? If our face detector misses a face detection, then the face cannot be blurred, thereby defeating the purpose of face blurring and anonymization. So what do we do in those situations? Typically, the easiest method is to take the last known location of the face (i.e., the previous detection location) and then blur that region. Faces don’t tend to move very quickly, so blurring the last known location will help ensure the face is anonymized even when your face detector misses the face. A more advanced option is to use dedicated object trackers similar to what we do in our people/footfall counter guide. Using this method you would: Detect faces in the video stream Create an object tracker for each face Use the object tracker and face detector to correlate the position of the face If the face detector misses a detection, then fall back on the tracker to provide the location of the face This method is more computationally complex than the simple “last known location,” but it’s also far more robust. I’ll leave implementing those methods up to you (although I am tempted to cover them in a future tutorial, as they are pretty fun methods to implement).
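If you want to experiment with the simpler "last known location" idea before reaching for object trackers, a rough sketch might look like the following. This is not part of the tutorial's downloadable code — it reuses the same Caffe face detector files and the anonymize_face_simple helper, and the lastBox variable is introduced here purely for illustration:

from pyimagesearch.face_blurring import anonymize_face_simple
from imutils.video import VideoStream
import numpy as np
import imutils
import time
import cv2

# load the face detector and start the video stream, just as in
# blur_face_video.py
net = cv2.dnn.readNet("face_detector/deploy.prototxt",
    "face_detector/res10_300x300_ssd_iter_140000.caffemodel")
vs = VideoStream(src=0).start()
time.sleep(2.0)

# lastBox holds the most recent successful detection
lastBox = None

while True:
    frame = vs.read()
    frame = imutils.resize(frame, width=400)
    (h, w) = frame.shape[:2]

    # run the face detector on the current frame
    blob = cv2.dnn.blobFromImage(frame, 1.0, (300, 300),
        (104.0, 177.0, 123.0))
    net.setInput(blob)
    detections = net.forward()

    # grab the most confident detection, if it passes the threshold
    box = None
    i = np.argmax(detections[0, 0, :, 2])
    if detections[0, 0, i, 2] > 0.5:
        box = (detections[0, 0, i, 3:7] * np.array([w, h, w, h])).astype("int")

    # if the detector missed the face this frame, fall back on the last
    # known location; otherwise remember the new location
    if box is None:
        box = lastBox
    else:
        lastBox = box

    # blur whatever region we have (if any)
    if box is not None:
        (startX, startY, endX, endY) = box
        face = frame[startY:endY, startX:endX]
        frame[startY:endY, startX:endX] = anonymize_face_simple(face,
            factor=3.0)

    # show the frame and watch for the quit key
    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cv2.destroyAllWindows()
vs.stop()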
What's next? We recommend PyImageSearch University. Course information: 84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024 ★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled I strongly believe that if you had the right teacher you could master computer vision and deep learning. Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science? That’s not the case. All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught.
If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery. Inside PyImageSearch University you'll find: ✓ 84 courses on essential computer vision, deep learning, and OpenCV topics ✓ 84 Certificates of Completion ✓ 114+ hours of on-demand video ✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques ✓ Pre-configured Jupyter Notebooks in Google Colab ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!) ✓ Access to centralized code repos for all 536+ tutorials on PyImageSearch ✓ Easy one-click downloads for code, datasets, pre-trained models, etc. ✓ Access on mobile, laptop, desktop, etc. Click here to join PyImageSearch University Summary In this tutorial, you learned how to blur and anonymize faces in both images and real-time video streams using OpenCV and Python. Face blurring and anonymization is a four-step process: Step #1: Apply a face detector (i.e., Haar cascades, HOG + Linear SVM, deep learning-based face detectors) to detect the presence of a face in an image Step #2: Use the bounding box (x, y)-coordinates to extract the face ROI from the input image Step #3: Blur the face in the image, typically with a Gaussian blur or pixelated blur, thereby anonymizing the face and protecting the identity of the person in the image Step #4: Store the blurred/anonymized face back in the original image We then implemented this entire pipeline using only OpenCV and Python. I hope you’ve found this tutorial helpful! To download the source code to this post (including the example images and pre-trained face detector), just enter your email address in the form below!
https://pyimagesearch.com/2019/08/26/building-an-image-hashing-search-engine-with-vp-trees-and-opencv/
Click here to download the source code to this post In this tutorial, you will learn how to build a scalable image hashing search engine using OpenCV, Python, and VP-Trees. Image hashing algorithms are used to: Uniquely quantify the contents of an image using only a single integer. Find duplicate or near-duplicate images in a dataset of images based on their computed hashes. Back in 2017, I wrote a tutorial on image hashing with OpenCV and Python (which is required reading for this tutorial). That guide showed you how to find identical/duplicate images in a given dataset. However, there was a scalability problem with that original tutorial — namely that it did not scale! To find near-duplicate images, our original image hashing method would require us to perform a linear search, comparing the query hash to each individual image hash in our dataset. In a practical, real-world application that’s far too slow — we need to find a way to reduce that search to sub-linear time complexity. But how can we reduce search time so dramatically? The answer is a specialized data structure called a VP-Tree.
Using a VP-Tree we can reduce our search complexity from O(n) to O(log n), enabling us to obtain our sub-linear goal! In the remainder of this tutorial you will learn how to: Build an image hashing search engine to find both identical and near-identical images in a dataset. Utilize a specialized data structure, called a VP-Tree, that can be used to scale image hashing search engines to millions of images. To learn how to build your first image hashing search engine with OpenCV, just keep reading! Looking for the source code to this post? Jump Right To The Downloads Section Building an Image Hashing Search Engine with VP-Trees and OpenCV In the first part of this tutorial, I’ll review what exactly an image search engine is for newcomers to PyImageSearch. Then, we’ll discuss the concept of image hashing and perceptual hashing, including how they can be used to build an image search engine. We’ll also take a look at problems associated with image hashing search engines, including algorithmic complexity. Note: If you haven’t read my tutorial on Image Hashing with OpenCV and Python, make sure you do so now. That guide is required reading before you continue here.
From there, we’ll briefly review Vantage-point Trees (VP-Trees) which can be used to dramatically improve the efficiency and performance of image hashing search engines. Armed with our knowledge, we’ll implement our own custom image hashing search engine using VP-Trees and then examine the results. What is an image search engine? Figure 1: An example of an image search engine. A query image is presented and the search engine finds similar images in a dataset. In this section, we’ll review the concept of an image search engine and direct you to some additional resources. PyImageSearch has roots in image search engines — that was my main interest when I started the blog back in 2014. This tutorial is a fun one for me to share as I have a soft spot for image search engines as a computer vision topic. Image search engines are a lot like textual search engines, only instead of using text as a query, we instead use an image. When you use a text search engine such as Google, Bing, or DuckDuckGo,
you enter your search query — a word or phrase. Indexed websites of interest are returned to you as results, and ideally, you’ll find what you are looking for. Similarly, for an image search engine, you present a query image (not a textual word/phrase). The image search engine then returns similar image results based solely on the contents of the image. Of course, there is a lot that goes on under the hood in any type of search engine — just keep this key concept of query/results in mind going forward as we build an image search engine today. To learn more about image search engines, I suggest you refer to the following resources: The complete guide to building an image search engine with Python and OpenCV A great guide for those who want to get started with enough knowledge to be dangerous. Image Search Engines Blog Category This category link returns all of my image search engine content on the PyImageSearch blog. PyImageSearch Gurus My flagship computer vision course has 13 modules, one of which is dedicated to Content-Based Image Retrieval (a fancy name for image search engines). Read those guides to obtain a basic understanding of what an image search engine is, then come back to this post to learn about image hash search engines. What is image hashing/perceptual hashing?
Figure 2: An example of an image hashing function. Top-left: An input image. Top-right: An image hashing function. Bottom: The resulting hash value. We will build a basic image hashing search engine with VP-Trees and OpenCV in this tutorial. Image hashing, also called perceptual hashing, is the process of: Examining the contents of an image. Constructing a hash value (i.e., an integer) that uniquely quantifies an input image based on the contents of the image alone. One of the benefits of using image hashing is that the resulting storage used to quantify the image is super small. For example, let’s suppose we have an 800x600px image with 3 channels. If we were to store that entire image in memory using an 8-bit unsigned integer data type, the image would require 1.44MB of RAM.
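If you want to verify that figure, the arithmetic is straightforward (a quick illustrative calculation, not part of the tutorial's code):

# 800 x 600 pixels, 3 channels, 1 byte (8 bits) per channel value
width, height, channels = 800, 600, 3
raw_bytes = width * height * channels
print(raw_bytes)        # 1440000 bytes
print(raw_bytes / 1e6)  # 1.44 (MB)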
Of course, we would rarely, if ever, store the raw image pixels when quantifying an image. Instead, we would use algorithms such as keypoint detectors and local invariant descriptors (i.e., SIFT, SURF, etc.). Applying these methods can typically lead to 100s to 1000s of features per image. If we assume a modest 500 keypoints detected, each resulting in a feature vector of 128-d with a 32-bit floating point data type, we would require a total of 0.256MB to store the quantification of each individual image in our dataset. Image hashing, on the other hand, allows us to quantify an image using only a single 64-bit integer, requiring just 8 bytes of memory! Figure 3: An image hash requires far less disk space in comparison to the original image bitmap size or image features (SIFT, etc.). We will use image hashes as a basis for an image search engine with VP-Trees and OpenCV. Furthermore, image hashes should also be comparable. Let’s suppose we compute image hashes for three input images, two of which are near-identical: Figure 4: Three images with different hashes. The Hamming Distance between the top two hashes is closer than the Hamming distance to the third image.
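As a quick back-of-the-envelope check of those storage figures (an illustrative snippet only, using the same assumptions stated above):

# raw 800x600 RGB image stored as 8-bit unsigned integers (1 byte per value)
raw_bytes = 800 * 600 * 3        # 1,440,000 bytes, i.e., ~1.44MB
# 500 keypoints, each a 128-d feature vector of 32-bit floats (4 bytes each)
feature_bytes = 500 * 128 * 4    # 256,000 bytes, i.e., ~0.256MB
# a single 64-bit difference hash
hash_bytes = 64 // 8             # 8 bytes
print(raw_bytes, feature_bytes, hash_bytes)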
We will use a VP-Tree data structure to make an image hashing search engine. To compare our image hashes we will use the Hamming distance. The Hamming distance, in this context, counts the number of bits that differ between two integers. In practice, this means that we count the number of 1s when taking the XOR between two integers. Therefore, going back to our three input images above, the Hamming distance between our two similar images should be smaller (indicating more similarity) than the Hamming distance to the third, less similar image: Figure 5: The Hamming Distance between image hashes is shown. Take note that the Hamming Distance between the first two images is smaller than that of the first and third (or 2nd and 3rd). The Hamming Distance between image hashes will play a role in our image search engine using VP-Trees and OpenCV. Again, note how the Hamming distance between the first two images is smaller than their distances to the third image: The smaller the Hamming distance is between two hashes, the more similar the images are. And conversely, the larger the Hamming distance is between two hashes, the less similar the images are. Also note how the distances between identical images (i.e., along the diagonal of Figure 5) are all zero — the Hamming distance between two hashes will be zero if the two input images are identical, otherwise the distance will be > 0, with larger values indicating less similarity.
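To make the XOR-and-count operation concrete, here is a tiny illustrative example (the hash values are made up; the actual hamming helper we use is implemented later in this post):

a = 0b101101  # hash of image #1
b = 0b101001  # hash of image #2 (near-identical to #1)
c = 0b010110  # hash of image #3 (very different)

def hamming(x, y):
    # count the number of 1s in the XOR of the two integers
    return bin(x ^ y).count("1")

print(hamming(a, b))  # 1 -> only one bit differs, very similar images
print(hamming(a, c))  # 5 -> five bits differ, dissimilar images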
There are a number of image hashing algorithms, but one of the most popular ones is called the difference hash, which includes four steps: Step #1: Convert the input image to grayscale. Step #2: Resize the image to fixed dimensions, N + 1 x N, ignoring aspect ratio. Typically we set N=8 or N=16. We use N + 1 for the number of columns so that we can compute the difference (hence “difference hash”) between adjacent pixels in each row of the image. Step #3: Compute the difference. If we set N=8 then we have 9 pixels per row and 8 pixels per column. We can then compute the difference between adjacent column pixels, yielding 8 differences per row. 8 rows of 8 differences (i.e., 8×8) results in 64 values. Step #4: Finally, we can build the hash. In practice all we actually need to perform is a “greater than” operation comparing adjacent columns, yielding binary values.
These 64 binary values are compacted into an integer, forming our final hash. Typically, image hashing algorithms are used to find near-duplicate images in a large dataset. I’ve covered image hashing in detail inside this tutorial, so if the concept is new to you, I would suggest reading that guide before continuing here. What is an image hashing search engine? Figure 6: Image search engines consist of images, an indexer, and a searcher. We’ll index all of our images by computing and storing their hashes. We’ll build a VP-Tree of the hashes. The searcher will compute the hash of the query image and search the VP-Tree for similar images and return the closest matches. Using Python, OpenCV, and vptree, we can implement our image hashing search engine. An image hashing search engine consists of two components: Indexing: Taking an input dataset of images, computing the hashes, and storing them in a data structure to facilitate fast, efficient search.
Searching/Querying: Accepting an input query image from the user, computing the hash, and finding all near-identical images in our indexed dataset. A great example of an image hashing search engine is TinEye, which is actually a reverse image search engine. A reverse image search engine: Accepts an input image. Finds all near-duplicates of that image on the web, telling you the website/URL of where the near duplicate can be found. Using this tutorial you will learn how to build your own TinEye! What makes scaling image hashing search engines problematic? One of the biggest issues with building an image hashing search engine is scalability — the more images you have, the longer it can take to perform the search. For example, let’s suppose we have the following scenario: We have a dataset of 1,000,000 images. We have already computed image hashes for each of these 1,000,000 images. A user comes along, presents us with an image, and then asks us to find all near-identical images in that dataset.
How might you go about performing that search? Would you loop over all 1,000,000 image hashes, one by one, and compare them to the hash of the query image? Unfortunately, that’s not going to work. Even if you assume that each Hamming distance comparison takes 0.00001 seconds, with a total of 1,000,000 images, it would take you 10 seconds to complete the search — far too slow for any type of search engine. Instead, to build an image hashing search engine that scales, you need to utilize specialized data structures. What are VP-Trees and how can they help scale image hashing search engines? Figure 7: We’ll use VP-Trees for our image hash search engine using Python and OpenCV. VP-Trees are based on a recursive algorithm that computes vantage points and medians until we reach child nodes containing an individual image hash. Child nodes that are closer together (i.e. smaller Hamming Distances in our case) are assumed to be more similar to each other. (image source) In order to scale our image hashing search engine, we need to use a specialized data structure that: Reduces our search from linear complexity, O(n), down to sub-linear complexity, ideally O(log n).
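For contrast, the naive linear scan we just ruled out amounts to nothing more than the following sketch (purely illustrative; hashes and hamming refer to the dictionary and helper function we build later in this post):

# brute-force O(n) search: compare the query hash against every stored hash
def linear_search(queryHash, hashes, maxDistance=10):
    results = []
    for h in hashes.keys():
        d = hamming(queryHash, h)
        if d <= maxDistance:
            results.append((d, h))
    return sorted(results)

Every query touches all n hashes, so doubling the dataset doubles the search time. A VP-Tree lets us prune the vast majority of those comparisons.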
To accomplish that task we can use Vantage-point Trees (VP-Trees). A VP-Tree is a metric tree that operates in a metric space by selecting a given position in space (i.e., the “vantage point”) and then partitioning the data points into two sets: Points that are near the vantage point Points that are far from the vantage point We then recursively apply this process, partitioning the points into smaller and smaller sets, thus creating a tree where neighbors in the tree have smaller distances. To visualize the process of constructing a VP-Tree, consider the following figure: Figure 8: A visual depiction of the process of building a VP-Tree (vantage point tree). We will use the vptree Python implementation by Richard Sjogren. (image source) First, we select a point in space (denoted as the v in the center of the circle) — we call this point the vantage point. The vantage point is the point furthest from the parent vantage point in the tree. We then compute the median distance, μ, from the vantage point to all other points, X. Once we have μ, we then divide X into two sets, S1 and S2: All points with distance <= μ belong to S1. All points with distance > μ belong to S2. We then recursively apply this process, building a tree as we go, until we are left with a child node. A child node contains only a single data point (in this case, one individual hash).
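To make the recursive partitioning concrete, here is a minimal, simplified sketch of VP-Tree construction and range search. This is illustrative only (random vantage point selection, no balancing); later in the post we rely on the vptree package rather than this toy version:

import random

class VPNode:
    def __init__(self, point, mu=None, near=None, far=None):
        self.point = point  # the vantage point stored at this node
        self.mu = mu        # median distance used to split the remaining points
        self.near = near    # subtree of points with distance <= mu
        self.far = far      # subtree of points with distance > mu

def build_vptree(points, distance):
    # base cases: nothing left, or a single point becomes a leaf (child) node
    if not points:
        return None
    if len(points) == 1:
        return VPNode(points[0])
    # select a vantage point (at random, for simplicity) and measure the
    # distance from it to every remaining point
    points = list(points)
    vp = points.pop(random.randrange(len(points)))
    dists = [distance(vp, p) for p in points]
    # split the remaining points around the median distance, mu
    mu = sorted(dists)[len(dists) // 2]
    near = [p for (p, d) in zip(points, dists) if d <= mu]
    far = [p for (p, d) in zip(points, dists) if d > mu]
    # recursively build the near and far subtrees
    return VPNode(vp, mu, build_vptree(near, distance),
        build_vptree(far, distance))

def search_in_range(node, query, radius, distance, results=None):
    # collect every stored point within `radius` of the query, pruning
    # subtrees that cannot possibly contain a match
    results = [] if results is None else results
    if node is None:
        return results
    d = distance(query, node.point)
    if d <= radius:
        results.append((d, node.point))
    if node.mu is not None:
        if d - radius <= node.mu:
            search_in_range(node.near, query, radius, distance, results)
        if d + radius > node.mu:
            search_in_range(node.far, query, radius, distance, results)
    return results

Running build_vptree on a handful of integer hashes with our Hamming distance function and then calling search_in_range with a small radius mirrors what the vptree package’s VPTree and get_all_in_range do for us below.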
Child nodes that are closer together in the tree thus have: Smaller distances between them. And are therefore assumed to be more similar to each other than the rest of the data points in the tree. After recursively applying the VP-Tree construction method, we end up with a data structure that, as the name suggests, is a tree: Figure 9: An example VP-Tree is depicted. We will use Python to build VP-Trees for use in an image hash search engine. Notice how we recursively split subsets of our dataset into smaller and smaller subsets, until we eventually reach the child nodes. VP-Trees take O(n log n) to build, but once we’ve constructed one, a search takes only O(log n), thus reducing our search time to sub-linear complexity! Later in this tutorial, you’ll learn to utilize VP-Trees with Python to build and scale our image hashing search engine. Note: This section is meant to be a gentle introduction to VP-Trees. If you are interested in learning more about them, I would recommend (1) consulting a data structures textbook, (2) following this guide from Steve Hanov’s blog, or (3) reading this writeup from Ivan Chen. The CALTECH-101 dataset Figure 10: The CALTECH-101 dataset consists of 101 object categories.
Our image hash search engine using VP-Trees, Python, and OpenCV will use the CALTECH-101 dataset for our practical example. The dataset we’ll be working with today is CALTECH-101, which consists of 9,144 total images across 101 categories (with 40 to 800 images per category). The dataset is large enough to be interesting to explore from an introductory image hashing perspective but still small enough that you can run the example Python scripts in this guide without having to wait hours and hours for your system to finish chewing on the images. You can download the CALTECH-101 dataset from its official webpage or you can use the following wget command: $ wget http://www.vision.caltech.edu/Image_Datasets/Caltech101/101_ObjectCategories.tar.gz $ tar xvzf 101_ObjectCategories.tar.gz Project structure Let’s inspect our project structure: $ tree --dirsfirst . ├── pyimagesearch │ ├── __init__.py │ └── hashing.py ├── 101_ObjectCategories [9,144 images] ├── queries │ ├── accordion.jpg │ ├── accordion_modified1.jpg │ ├── accordion_modified2.jpg │ ├── buddha.jpg │ └── dalmation.jpg ├── index_images.py └── search.py The pyimagesearch module contains hashing.py, which includes three hashing functions. We will review the functions in the “Implementing our image hashing utilities” section below. Our dataset, in the 101_ObjectCategories/ folder (CALTECH-101), contains 101 sub-directories with our images. Be sure to read the previous section to learn how to download the dataset. There are five query images in the queries/ directory. We will search for images with similar hashes to these images.
The accordion_modified1.jpg and accordion_modified2.jpg images will present unique challenges to our VP-Trees image hashing search engine. The core of today’s project lies in two Python scripts: index_images.py and search.py: Our indexer will calculate hashes for all 9,144 images and organize the hashes in a VP-Tree. This index will reside in two .pickle files: (1) a dictionary of all computed hashes, and (2) the VP-Tree. The searcher will calculate the hash for a query image and search the VP-Tree for the closest images via Hamming Distance. The results will be returned to the user. If that sounds like a lot, don’t worry! This tutorial will break everything down step-by-step. Configuring your development environment For this blog post, your development environment needs the following packages installed: OpenCV NumPy imutils vptree (a pure Python implementation of the VP-Tree data structure) Luckily for us, everything is pip-installable. My recommendation for you is to follow the first OpenCV link to pip-install OpenCV in a virtual environment on your system. From there you’ll just pip-install everything else in the same environment.
It will look something like this: # setup pip, virtualenv, and virtualenvwrapper (using the "pip install OpenCV" instructions) $ workon <env_name> $ pip install numpy $ pip install opencv-contrib-python $ pip install imutils $ pip install vptree Replace <env_name> with the name of your virtual environment. The workon command will only be available once you set up virtualenv and virtualenvwrapper following these instructions. Implementing our image hashing utilities Before we can build our image hashing search engine, we first need to implement a few helper utilities. Open up the hashing.py file in the project structure and insert the following code: # import the necessary packages import numpy as np import cv2 def dhash(image, hashSize=8): # convert the image to grayscale gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) # resize the grayscale image, adding a single column (width) so we # can compute the horizontal gradient resized = cv2.resize(gray, (hashSize + 1, hashSize)) # compute the (relative) horizontal gradient between adjacent # column pixels diff = resized[:, 1:] > resized[:, :-1] # convert the difference image to a hash return sum([2 ** i for (i, v) in enumerate(diff.flatten()) if v]) We begin by importing OpenCV and NumPy (Lines 2 and 3). The first function we’ll look at, dhash, is used to compute the difference hash for a given input image. Recall from above that our dhash requires four steps: (1) convert to grayscale, (2) resize, (3) compute the difference, and (4) build the hash. Let’s break it down a little further: Line 7 converts the image to grayscale. Line 11 resizes the image to N rows by N + 1 columns (a width of N + 1 and a height of N), ignoring the aspect ratio. This ensures that the resulting image hash will match similar photos regardless of their initial spatial dimensions. Line 15 computes the horizontal gradient difference between adjacent column pixels.
Assuming hashSize=8, diff will be 8 rows of 8 differences (there are 9 columns, allowing for 8 comparisons per row). We will thus have a 64-bit hash, as 8×8=64. Line 18 converts the difference image to a hash. For more details, refer to this blog post. Next, let’s look at the convert_hash function: def convert_hash(h): # convert the hash to NumPy's 64-bit float and then back to # Python's built in int return int(np.array(h, dtype="float64")) When I first wrote the code for this tutorial, I found that the VP-Tree implementation we’re using internally converts points to a NumPy 64-bit float. That would be okay; however, hashes need to be integers and if we convert them to 64-bit floats, they become an unhashable data type. To overcome the limitation of the VP-Tree implementation, I came up with the convert_hash hack: We accept an input hash, h. That hash is then converted to a NumPy 64-bit float. And that NumPy float is then converted back to Python’s built-in integer data type. This hack ensures that hashes are represented consistently throughout the hashing, indexing, and searching process. We then have one final helper method, hamming, which is used to compute the Hamming distance between two integers: def hamming(a, b): # compute and return the Hamming distance between the integers return bin(int(a) ^ int(b)).count("1") The Hamming distance is simply a count of the number of 1s when taking the XOR (^) between two integers (Line 27).
Implementing our image hash indexer Before we can perform a search, we first need to: Loop over our input dataset of images. Compute difference hash for each image. Build a VP-Tree using the hashes. Let’s start that process now. Open up the index_images.py file and insert the following code: # import the necessary packages from pyimagesearch.hashing import convert_hash from pyimagesearch.hashing import hamming from pyimagesearch.hashing import dhash from imutils import paths import argparse import pickle import vptree import cv2 # construct the argument parser and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-i", "--images", required=True, type=str, help="path to input directory of images") ap.add_argument("-t", "--tree", required=True, type=str, help="path to output VP-Tree") ap.add_argument("-a", "--hashes", required=True, type=str, help="path to output hashes dictionary") args = vars(ap.parse_args()) Lines 2-9 import the packages, functions, and modules necessary for this script. In particular Lines 2-4 import our three hashing related functions: convert_hash , hamming , and dhash . Line 8 imports the vptree implementation that we will be using. Next, Lines 12-19 parse our command line arguments: --images : The path to our images which we will be indexing. --tree : The path to the output VP-tree .pickle file which will be serialized to disk.
--hashes : The path to the output hashes dictionary which will be stored in .pickle format. Now let’s compute hashes for all images: # grab the paths to the input images and initialize the dictionary # of hashes imagePaths = list(paths.list_images(args["images"])) hashes = {} # loop over the image paths for (i, imagePath) in enumerate(imagePaths): # load the input image print("[INFO] processing image {}/{}".format(i + 1, len(imagePaths))) image = cv2.imread(imagePath) # compute the hash for the image and convert it h = dhash(image) h = convert_hash(h) # update the hashes dictionary l = hashes.get(h, []) l.append(imagePath) hashes[h] = l Lines 23 and 24 grab image paths and initialize our hashes dictionary. Line 27 then begins a loop over all the imagePaths . Inside the loop, we: Load the image (Line 31). Compute and convert the hash, h (Lines 34 and 35). Grab a list of all image paths, l , with the same hash (Line 38). Add this imagePath to the list, l (Line 39). Update our dictionary with the hash as the key and our list of image paths with the same corresponding hash as the value (Line 40). From here, we build our VP-Tree: # build the VP-Tree print("[INFO] building VP-Tree...") points = list(hashes.keys()) tree = vptree. VPTree(points, hamming) To construct the VP-Tree, Lines 44 and 45 pass in (1) a list of data points (i.e., the hash integer values themselves), and (2) our distance function (the Hamming distance method) to the VPTree constructor.
Internally, the VP-Tree computes Hamming distances between the input points and constructs the tree such that data points with smaller distances (i.e., more similar images) lie closer together in the tree space. Be sure to refer to the “What are VP-Trees and how can they help scale image hashing search engines?” section and Figures 7, 8, and 9. With our hashes dictionary populated and VP-Tree constructed, we’ll now serialize them both to disk as .pickle files: # serialize the VP-Tree to disk print("[INFO] serializing VP-Tree...") f = open(args["tree"], "wb") f.write(pickle.dumps(tree)) f.close() # serialize the hashes dictionary to disk print("[INFO] serializing hashes...") f = open(args["hashes"], "wb") f.write(pickle.dumps(hashes)) f.close() Extracting image hashes and building the VP-Tree Now that we’ve implemented our indexing script, let’s put it to work. Make sure you’ve: Downloaded the CALTECH-101 dataset using the instructions above. Used the “Downloads” section of this tutorial to download the source code and example query images. Extracted the .zip of the source code and changed directory to the project. From there, open up a terminal and issue the following command: $ time python index_images.py --images 101_ObjectCategories \ --tree vptree.pickle --hashes hashes.pickle [INFO] processing image 1/9144 [INFO] processing image 2/9144 [INFO] processing image 3/9144 [INFO] processing image 4/9144 [INFO] processing image 5/9144 ... [INFO] processing image 9140/9144 [INFO] processing image 9141/9144 [INFO] processing image 9142/9144 [INFO] processing image 9143/9144 [INFO] processing image 9144/9144 [INFO] building VP-Tree... [INFO] serializing VP-Tree... [INFO] serializing hashes... real 0m10.947s user 0m9.096s sys 0m1.386s As our output indicates, we were able to hash all 9,144 images in just over 10 seconds. Checking the project directory after running the script, we’ll find two .pickle files: $ ls -l *.pickle -rw-r--r-- 1 adrianrosebrock 796620 Aug 22 07:53 hashes.pickle -rw-r--r-- 1 adrianrosebrock 707926 Aug 22 07:53 vptree.pickle The hashes.pickle (796.62KB) file contains our computed hashes, mapping the hash integer value to file paths with the same hash. The vptree.pickle (707.93KB) file is our constructed VP-Tree.
We’ll be using this VP-Tree to perform queries and searches in the following section. Implementing our image hash searching script The second component of an image hashing search engine is the search script. The search script will: Accept an input query image. Compute the hash for the query image. Search the VP-Tree using the query hash to find all duplicate/near-duplicate images. Let’s implement our image hash searcher now — open up the search.py file and insert the following code: # import the necessary packages from pyimagesearch.hashing import convert_hash from pyimagesearch.hashing import dhash import argparse import pickle import time import cv2 # construct the argument parser and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-t", "--tree", required=True, type=str, help="path to pre-constructed VP-Tree") ap.add_argument("-a", "--hashes", required=True, type=str, help="path to hashes dictionary") ap.add_argument("-q", "--query", required=True, type=str, help="path to input query image") ap.add_argument("-d", "--distance", type=int, default=10, help="maximum hamming distance") args = vars(ap.parse_args()) Lines 2-9 import the necessary components for our searching script. Notice that we need the dhash and convert_hash functions once again as we’ll have to compute the hash for our --query image. Lines 10-19 parse our command line arguments (the first three are required): --tree : The path to our pre-constructed VP-Tree on disk. --hashes : The path to our pre-computed hashes dictionary on disk.
--query: Our query image’s path. --distance: The maximum Hamming distance between hashes, with a default of 10. You may override it if you so choose. It’s important to note that the larger the --distance is, the more hashes the VP-Tree will compare, and thus the searcher will be slower. Try to keep your --distance as small as possible without compromising the quality of your results. Next, we’ll (1) load our VP-Tree + hashes dictionary, and (2) compute the hash for our --query image: # load the VP-Tree and hashes dictionary print("[INFO] loading VP-Tree and hashes...") tree = pickle.loads(open(args["tree"], "rb").read()) hashes = pickle.loads(open(args["hashes"], "rb").read()) # load the input query image image = cv2.imread(args["query"]) cv2.imshow("Query", image) # compute the hash for the query image, then convert it queryHash = dhash(image) queryHash = convert_hash(queryHash) Lines 23 and 24 load the pre-computed index including the VP-Tree and hashes dictionary. From there, we load and display the --query image (Lines 27 and 28). We then take the query image and compute the queryHash (Lines 31 and 32). At this point, it is time to perform a search using our VP-Tree: # perform the search print("[INFO] performing search...") start = time.time() results = tree.get_all_in_range(queryHash, args["distance"]) results = sorted(results) end = time.time() print("[INFO] search took {} seconds".format(end - start)) Lines 37 and 38 perform a search by querying the VP-Tree for all hashes within --distance of the queryHash. The results are sorted so that “more similar” hashes are at the front of the results list.
Both of these lines are sandwiched with timestamps for benchmarking purposes, the results of which are printed via Line 40. Finally, we will loop over results and display each of them: # loop over the results for (d, h) in results: # grab all image paths in our dataset with the same hash resultPaths = hashes.get(h, []) print("[INFO] {} total image(s) with d: {}, h: {}".format( len(resultPaths), d, h)) # loop over the result paths for resultPath in resultPaths: # load the result image and display it to our screen result = cv2.imread(resultPath) cv2.imshow("Result", result) cv2.waitKey(0) Line 43 begins a loop over the results: The resultPaths for the current hash, h, are grabbed from the hashes dictionary (Line 45). Each result image is displayed, waiting for a keypress before moving on to the next (Lines 50-54). Image hashing search engine results We are now ready to test our image search engine! But before we do that, make sure you have: Downloaded the CALTECH-101 dataset using the instructions above. Used the “Downloads” section of this tutorial to download the source code and example query images. Extracted the .zip of the source code and changed directory to the project. Ran the index_images.py file to generate the hashes.pickle and vptree.pickle files. After all the above steps are complete, open up a terminal and execute the following command: $ python search.py --tree vptree.pickle --hashes hashes.pickle \ --query queries/buddha.jpg [INFO] loading VP-Tree and hashes... [INFO] performing search... [INFO] search took 0.015203237533569336 seconds [INFO] 1 total image(s) with d: 0, h: 8.162938100012111e+18 Figure 11: Our Python + OpenCV image hashing search engine found a match in the VP-Tree in just 0.015 seconds! On the left, you can see our input query image of our Buddha.
On the right, you can see that we have found the duplicate image in our indexed dataset. The search itself took only 0.015 seconds. Additionally, note that the distance between the input query image and the hashed image in the dataset is zero, indicating that the two images are identical. Let’s try again, this time with an image of a Dalmatian: $ python search.py --tree vptree.pickle --hashes hashes.pickle \ --query queries/dalmation.jpg [INFO] loading VP-Tree and hashes... [INFO] performing search... [INFO] search took 0.014827728271484375 seconds [INFO] 1 total image(s) with d: 0, h: 6.445556196029652e+18 Figure 12: With a Hamming Distance of 0, the Dalmation query image yielded an identical image in our dataset. We built an OpenCV + Python image hash search engine with VP-Trees successfully. Again, we see that our image hashing search engine has found the identical Dalmatian in our indexed dataset (we know the images are identical due to the Hamming distance of zero). The next example is of an accordion: $ python search.py --tree vptree.pickle --hashes hashes.pickle \ --query queries/accordion.jpg [INFO] loading VP-Tree and hashes... [INFO] performing search... [INFO] search took 0.014187097549438477 seconds [INFO] 1 total image(s) with d: 0, h: 3.380309217342405e+18 Figure 13: An example of providing a query image and finding the best resulting image with an image hash search engine created with Python and OpenCV. We once again find our identical matched image in the indexed dataset. We know our image hashing search engine is working great for identical images… …but what about images that are slightly modified? Will our hashing search engine still perform well?
Let’s give it a try: $ python search.py --tree vptree.pickle --hashes hashes.pickle \ --query queries/accordion_modified1.jpg [INFO] loading VP-Tree and hashes... [INFO] performing search... [INFO] search took 0.014217138290405273 seconds [INFO] 1 total image(s) with d: 4, h: 3.380309217342405e+18 Figure 14: Our image hash search engine was able to find the matching image despite a modification (red square) to the query image. Here I’ve added a small red square in the bottom left corner of the accordion query image. This addition will change the difference hash value! However, if you take a look at the output result, you’ll see that we were still able to detect the near-duplicate image. We were able to find the near-duplicate image by comparing the Hamming distance between the hashes. The difference in hash values is 4, indicating that 4 bits differ between the two hashes. Next, let’s try a second query, this one much more modified than the first: $ python search.py --tree vptree.pickle --hashes hashes.pickle \ --query queries/accordion_modified2.jpg [INFO] loading VP-Tree and hashes... [INFO] performing search... [INFO] search took 0.013727903366088867 seconds [INFO] 1 total image(s) with d: 9, h: 3.380309217342405e+18 Figure 15: On the left is the query image for our image hash search engine with VP-Trees. It has been modified with yellow and purple shapes as well as red text. The image hash search engine returns the correct resulting image (right) from an index of 9,144 in just 0.0137 seconds, proving the robustness of our search engine system. Despite dramatically altering the query by adding in a large blue rectangle, a yellow circle, and text, we’re still able to find the near-duplicate image in our dataset in under 0.014 seconds!
Whenever you need to find duplicate or near-duplicate images in a dataset, definitely consider using image hashing and image searching algorithms — when used correctly, they can be extremely powerful!
Summary In this tutorial, you learned how to build a basic image hashing search engine using OpenCV and Python. To build an image hashing search engine that scales, we needed to utilize VP-Trees, a specialized metric tree data structure that recursively partitions a dataset of points such that nodes of the tree that are closer together are more similar than nodes that are farther away. By using VP-Trees we were able to build an image hashing search engine capable of finding duplicate and near-duplicate images in a dataset in about 0.015 seconds.
Furthermore, we demonstrated that our combination of hashing algorithm and VP-Tree search was capable of finding matches in our dataset, even if our query image was modified, damaged, or altered! If you are ever building a computer vision application that requires quickly finding duplicate or near-duplicate images in a large dataset, definitely give this method a try.
https://pyimagesearch.com/2019/09/23/keras-starting-stopping-and-resuming-training/
In this tutorial, you will learn how to use Keras to train a neural network, stop training, update your learning rate, and then resume training from where you left off using the new learning rate. Using this method you can increase your accuracy while decreasing model loss. Today’s tutorial is inspired by a question I received from PyImageSearch reader, Zhang Min. Zhang Min writes: Hi Adrian, thanks for the PyImageSearch blog. I have two questions: First, I am working on my graduation project and my university is allowing me to share time on their GPU machines. The problem is that I can only access a GPU machine in two hour increments — after my two hours is up I’m automatically booted off the GPU. How can I save my training progress, safely stop training, and then resume training from where I left off? Secondly, my initial experiments aren’t going very well. My model quickly jumps to 80%+ accuracy but then stays there for another 50 epochs. What else can I be doing to improve my model accuracy?
My advisor said I should look into adjusting the learning rate but I’m not really sure how to do that. Thanks Adrian! Learning how to start, stop, and resume training a deep learning model is a super important skill to master — at some point in your deep learning practitioner career you’ll run into a situation similar to Zhang Min’s where: You have limited time on a GPU instance (which can happen on Google Colab or when using Amazon EC2’s cheaper spot instances). Your SSH connection is broken and you forgot to use a terminal multiplexer to save your session (such as screen or tmux). Your deep learning rig locks up and forcibly shuts down. Just imagine spending an entire week to train a state-of-the-art deep neural network…only to have your model lost due to a power failure! Luckily, there’s a solution — but when those situations happen you need to know how to: Take a snapshotted model that was saved/serialized to disk during training. Load the model into memory. Resume training from where you left off. Secondly, starting, stopping, and resuming training is standard practice when manually adjusting the learning rate: Start training your model until loss/accuracy plateau Snapshot your model every N epochs (typically N={1, 5, 10}) Stop training, normally by force exiting via ctrl + c Open your code editor and adjust your learning rate (typically lowering it by an order of magnitude) Go back to your terminal and restart the training script, picking up from the last snapshot of model weights Using this ctrl + c method of training you can boost your model accuracy while simultaneously driving down loss, leading to a more accurate model.
The ability to adjust the learning rate is a critical skill for any deep learning practitioner to master, so take the time now to study and practice it! To learn how to start, stop, and resume training with Keras, just keep reading! Keras: Starting, stopping, and resuming training 2020-06-05 Update: This blog post is now TensorFlow 2+ compatible! In the first part of this blog post, we’ll discuss why we would want to start, stop, and resume training of a deep learning model. We’ll also discuss how stopping training to lower your learning rate can improve your model accuracy (and why a learning rate schedule/decay may not be sufficient). From there we’ll implement a Python script to handle starting, stopping, and resuming training with Keras. I’ll then walk you through the entire training process, including: Starting the initial training script Monitoring loss/accuracy Noticing when loss/accuracy is plateauing Stopping training Lowering your learning rate Resuming training from where you left off with the new, lowered learning rate Using this method of training you’ll often be able to improve your model accuracy. Let’s go ahead and get started! Why do we need to start, stop, and resume training?
There are a number of reasons you may need to start, stop, and resume training of your deep learning model, but the two primary reasons include: Your training session being terminated and training stopping (due to a power outage, GPU session timing out, etc.). Needing to adjust your learning rate to improve model accuracy (typically by lowering the learning rate by an order of magnitude). The second point is especially important — if you go back and read the seminal AlexNet, SqueezeNet, ResNet, etc. papers you’ll find that the authors all say something along the lines of: We started training our model with the SGD optimizer and an initial learning rate of 1e-1. We reduced our learning rate by an order of magnitude on epochs 30 and 50, respectively. Why is the drop in learning rate so important? And how can it lead to a more accurate model? To explore that question, take a look at the following plot of ResNet-18 trained on the CIFAR-10 dataset: Figure 1: Training ResNet-18 on the CIFAR-10 dataset. The characteristic drops in loss and increases in accuracy are evidence of learning rate changes. Here, (1) training was stopped on epochs 30 and 50, (2) the learning rate was lowered, and (3) training was resumed. (image source)
Notice for epochs 1-29 there is a fairly “standard” curve that you come across when training a network: Loss starts off very high but then quickly drops Accuracy starts off very low but then quickly rises Eventually loss and accuracy plateau out But what is going on around epoch 30? Why does the loss drop so dramatically? And why does the accuracy rise so considerably? The reason for this behavior is that: Training was stopped The learning rate was lowered by an order of magnitude And then training was resumed The same goes for epoch 50 — again, training was stopped, the learning rate lowered, and then training resumed. Each time we encounter a characteristic drop in loss and then a small increase in accuracy. As the learning rate becomes smaller, each successive learning rate reduction has less and less impact. Eventually, we run into two issues: The learning rate becomes very small, which in turn makes the weight updates very small, and thus the model cannot make any meaningful progress. We start to overfit due to the small learning rate. The model descends into areas of lower loss in the loss landscape, overfitting to the training data and not generalizing to the validation data. The overfitting behavior is evident past epoch 50 in Figure 1 above.
Notice how validation loss has plateaued and has even started to rise a bit. At the same time, training loss continues to drop, a clear sign of overfitting. Dropping your learning rate is a great way to boost the accuracy of your model during training, just realize there is (1) a point of diminishing returns, and (2) a chance of overfitting if training is not properly monitored. Why not use learning rate schedulers or decay? Figure 2: Learning rate schedulers are great for some training applications; however, starting/stopping Keras training typically leads to more control over your deep learning model. You might be wondering “Why not use a learning rate scheduler?” There are a number of learning rate schedulers available to us, including: Linear and polynomial decay Cyclical Learning Rates (CLRs) Keras’ ReduceLROnPlateau class If the goal is to improve model accuracy by dropping the learning rate, then why not just rely on those respective schedules and classes? Great question. The problem is that you may not have a good idea of: The approximate number of epochs to train for What a proper initial learning rate is What learning rate range to use for CLRs Additionally, one of the benefits of using what I call ctrl + c training is that it gives you more fine-grained control over your model. Being able to manually stop your training at a specific epoch, adjust your learning rate, and then resume training from where you left off (and with the new learning rate) is something most learning rate schedulers will not allow you to do.
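For comparison, here is roughly what handing that decision over to a scheduler looks like; a minimal sketch using Keras’ ReduceLROnPlateau (the model, trainX, trainY, testX, and testY names are placeholders for a compiled model and your data):

from tensorflow.keras.callbacks import ReduceLROnPlateau

# drop the learning rate by a factor of 10 whenever validation loss
# fails to improve for 5 consecutive epochs
reduce_lr = ReduceLROnPlateau(monitor="val_loss", factor=0.1,
    patience=5, min_lr=1e-4, verbose=1)

# the scheduler, not you, decides when to lower the learning rate
model.fit(trainX, trainY, validation_data=(testX, testY),
    batch_size=128, epochs=80, callbacks=[reduce_lr])

With a scheduler, the callback decides when to drop the learning rate; with ctrl + c training, you do.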
Once you’ve run a few experiments with ctrl + c training you’ll have a good idea of what your hyperparameters should be — when that happens, you can start incorporating hardcoded learning rate schedules to boost your accuracy even further. Finally, keep in mind that nearly all seminal CNN papers that were trained on ImageNet used a method to start/stop/resume training. Just because other methods exist doesn’t make them inherently better — as a deep learning practitioner, you need to learn how to use ctrl + c training along with learning rate scheduling (don’t rely strictly on the latter). If you’re interested in learning more about ctrl + c training, along with my tips, suggestions, and best practices when training your own models, be sure to refer to my book, Deep Learning for Computer Vision with Python. Configuring your development environment To configure your system for this tutorial, I first recommend following either of these tutorials: How to install TensorFlow 2.0 on Ubuntu How to install TensorFlow 2.0 on macOS Either tutorial will help you configure your system with all the necessary software for this blog post in a convenient Python virtual environment. Please note that PyImageSearch does not recommend or support Windows for CV/DL projects. Project structure Let’s review our project structure: $ tree --dirsfirst . ├── output │   ├── checkpoints │   └── resnet_fashion_mnist.png ├── pyimagesearch │   ├── callbacks │   │   ├── __init__.py │   │   ├── epochcheckpoint.py │   │   └── trainingmonitor.py │   ├── nn │   │   ├── __init__.py │   │   └── resnet.py │   └── __init__.py └── train.py 5 directories, 8 files Today we will review train.py, our training script. This script trains ResNet on Fashion MNIST. The key to this training script is that it uses two “callbacks”, epochcheckpoint.py and trainingmonitor.py.
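The epochcheckpoint.py implementation itself isn’t reproduced in this post, but the core idea (serialize the full model every N epochs so training can be resumed later) can be sketched as follows. This is an illustrative approximation, not the actual class from the book:

import os
from tensorflow.keras.callbacks import Callback

class SimpleEpochCheckpoint(Callback):
    def __init__(self, outputPath, every=5, startAt=0):
        # store the checkpoint directory, the checkpoint interval, and
        # the epoch number training is (re)starting from
        super().__init__()
        self.outputPath = outputPath
        self.every = every
        self.intEpoch = startAt

    def on_epoch_end(self, epoch, logs=None):
        # serialize the full model (architecture + weights + optimizer
        # state) every `every` epochs so we can resume from it later
        if (self.intEpoch + 1) % self.every == 0:
            p = os.path.sep.join([self.outputPath,
                "epoch_{}.hdf5".format(self.intEpoch + 1)])
            self.model.save(p, overwrite=True)
        # increment the internal epoch counter
        self.intEpoch += 1

Saving the full model (rather than just the weights) is what allows load_model to restore the optimizer state when we resume training later in this tutorial.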
I review these callbacks in detail inside Deep Learning for Computer Vision with Python — they aren’t covered today, but I encourage you to review the code. These two callbacks allow us to (1) save our model at the end of every N-th epoch so we can resume training on demand, and (2) output our training plot at the conclusion of each epoch, ensuring we can easily monitor our model for signs of overfitting. The models are checkpointed (i.e. saved) in the output/checkpoints/ directory. 2020-06-05 Update: There is no longer an accompanying JSON file in the output/ folder for this tutorial. For TensorFlow 2+, it is not necessary and it introduces an error. The training plot is overwritten upon each epoch end as resnet_fashion_mnist.png. We’ll be paying close attention to the training plot to determine when to stop training. Implementing the training script Let’s get started implementing our Python script that will be used for starting, stopping, and resuming training with Keras. This guide is written for intermediate practitioners, even though it teaches an essential skill. If you are new to Keras or deep learning, or maybe you just need to brush up on the basics, definitely check out my Keras Tutorial first.
Open up a new file, name it train.py, and insert the following code: # set the matplotlib backend so figures can be saved in the background import matplotlib matplotlib.use("Agg") # import the necessary packages from pyimagesearch.callbacks.epochcheckpoint import EpochCheckpoint from pyimagesearch.callbacks.trainingmonitor import TrainingMonitor from pyimagesearch.nn.resnet import ResNet from sklearn.preprocessing import LabelBinarizer from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.optimizers import SGD from tensorflow.keras.datasets import fashion_mnist from tensorflow.keras.models import load_model import tensorflow.keras.backend as K import numpy as np import argparse import cv2 import sys import os Lines 2-19 import our required packages, namely our EpochCheckpoint and TrainingMonitor callbacks. We also import our fashion_mnist dataset and ResNet CNN. The tensorflow.keras.backend as K will allow us to retrieve and set our learning rate. Now let’s go ahead and parse command line arguments: # construct the argument parse and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-c", "--checkpoints", required=True, help="path to output checkpoint directory") ap.add_argument("-m", "--model", type=str, help="path to *specific* model checkpoint to load") ap.add_argument("-s", "--start-epoch", type=int, default=0, help="epoch to restart training at") args = vars(ap.parse_args()) Our command line arguments include: --checkpoints : The path to our output checkpoints directory. --model : The optional path to a specific model checkpoint to load when resuming training. --start-epoch : The optional start epoch can be provided if you are resuming training. By default, training starts at epoch 0 . Let’s go ahead and load our dataset: # grab the Fashion MNIST dataset (if this is your first time running # this the dataset will be automatically downloaded) print("[INFO] loading Fashion MNIST...") ((trainX, trainY), (testX, testY)) = fashion_mnist.load_data() # Fashion MNIST images are 28x28 but the network we will be training # is expecting 32x32 images trainX = np.array([cv2.resize(x, (32, 32)) for x in trainX]) testX = np.array([cv2.resize(x, (32, 32)) for x in testX]) # scale data to the range of [0, 1] trainX = trainX.astype("float32") / 255.0 testX = testX.astype("float32") / 255.0 # reshape the data matrices to include a channel dimension (required # for training) trainX = trainX.reshape((trainX.shape[0], 32, 32, 1)) testX = testX.reshape((testX.shape[0], 32, 32, 1)) Line 34 loads Fashion MNIST. Lines 38-48 then preprocess the data including (1) resizing to 32×32, (2) scaling pixel intensities to the range [0, 1], and (3) adding a channel dimension.
From here we’ll (1) binarize our labels, and (2) initialize our data augmentation object: # convert the labels from integers to vectors lb = LabelBinarizer() trainY = lb.fit_transform(trainY) testY = lb.transform(testY) # construct the image generator for data augmentation aug = ImageDataGenerator(width_shift_range=0.1, height_shift_range=0.1, horizontal_flip=True, fill_mode="nearest") And now to the code for loading model checkpoints: # if there is no specific model checkpoint supplied, then initialize # the network (ResNet-56) and compile the model if args["model"] is None: print("[INFO] compiling model...") opt = SGD(lr=1e-1) model = ResNet.build(32, 32, 1, 10, (9, 9, 9), (64, 64, 128, 256), reg=0.0001) model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"]) # otherwise, we're using a checkpoint model else: # load the checkpoint from disk print("[INFO] loading {}...".format(args["model"])) model = load_model(args["model"]) # update the learning rate print("[INFO] old learning rate: {}".format( K.get_value(model.optimizer.lr))) K.set_value(model.optimizer.lr, 1e-2) print("[INFO] new learning rate: {}".format( K.get_value(model.optimizer.lr))) If no model checkpoint is supplied then we need to initialize the model (Lines 62-68). Notice that we specify our initial learning rate as 1e-1 on Line 64. Otherwise, Lines 71-81 load the model checkpoint (i.e. a model that was previously stopped via ctrl + c ) and update the learning rate. Line 79 will be the line you edit whenever you want to update the learning rate. Next, we’ll construct our callbacks: # build the path to the training plot and training history plotPath = os.path.sep.join(["output", "resnet_fashion_mnist.png"]) jsonPath = os.path.sep.join(["output", "resnet_fashion_mnist.json"]) # construct the set of callbacks callbacks = [ EpochCheckpoint(args["checkpoints"], every=5, startAt=args["start_epoch"]), TrainingMonitor(plotPath, jsonPath=jsonPath, startAt=args["start_epoch"])] Lines 84 and 85 specify our plot and JSON paths. Lines 88-93 construct two callbacks , putting them directly into a list: EpochCheckpoint : This callback is responsible for saving our model as it currently stands at the conclusion of every epoch. That way, if we stop training via ctrl + c (or an unforeseeable power failure), we don’t lose our machine’s work — for training complex models on huge datasets, this could quite literally save you days of time. TrainingMonitor : A callback that saves our training accuracy/loss information as a PNG image plot and JSON dictionary. We’ll be able to open our training plot at any time to see our training progress — valuable information to you as the practitioner, especially for multi-day training processes. Again, please review epochcheckpoint.py and trainingmonitor.py on your own time for the details and/or if you need to add functionality.
I cover these callbacks in detail inside Deep Learning for Computer Vision with Python. Finally, we have everything we need to start, stop, and resume training. This last block actually starts or resumes training: # train the network print("[INFO] training network...") model.fit( x=aug.flow(trainX, trainY, batch_size=128), validation_data=(testX, testY), steps_per_epoch=len(trainX) // 128, epochs=80, callbacks=callbacks, verbose=1) 2020-06-05 Update: Formerly, TensorFlow/Keras required use of a method called .fit_generator in order to accomplish data augmentation. Now, the .fit method can handle data augmentation as well, making for more consistent code. This also applies to the migration from .predict_generator to .predict (not used in this example). Be sure to check out my articles about fit and fit_generator as well as data augmentation. Our call to .fit fits/trains our model using our data augmentation generator and our callbacks (Lines 97-103). Be sure to review my tutorial on Keras’ fit method for more details on how the .fit function is used to train our model. I’d like to call your attention to the epochs parameter (Line 101) — when you adjust your learning rate you’ll typically want to update the epochs as well. Typically you should over-estimate the number of epochs as you’ll see in the next three sections.
For a more detailed explanation of starting, stopping, and resuming training (along with the implementations of my EpochCheckpoint and TrainingMonitor classes), be sure to refer to Deep Learning for Computer Vision with Python. Phase #1: 40 epochs at 1e-1 Make sure you’ve used the “Downloads” section of this blog post to download the source code to this tutorial. From there, open up a terminal and execute the following command: $ python train.py --checkpoints output/checkpoints [INFO] loading Fashion MNIST... [INFO] compiling model... [INFO] training network... Epoch 1/40 468/468 [==============================] - 46s 99ms/step - loss: 1.2367 - accuracy: 0.7153 - val_loss: 1.0503 - val_accuracy: 0.7712 Epoch 2/40 468/468 [==============================] - 46s 99ms/step - loss: 0.8753 - accuracy: 0.8427 - val_loss: 0.8914 - val_accuracy: 0.8356 Epoch 3/40 468/468 [==============================] - 45s 97ms/step - loss: 0.7974 - accuracy: 0.8683 - val_loss: 0.8175 - val_accuracy: 0.8636 Epoch 4/40 468/468 [==============================] - 46s 98ms/step - loss: 0.7490 - accuracy: 0.8850 - val_loss: 0.7533 - val_accuracy: 0.8855 Epoch 5/40 468/468 [==============================] - 46s 98ms/step - loss: 0.7232 - accuracy: 0.8922 - val_loss: 0.8021 - val_accuracy: 0.8587 ... Epoch 36/40 468/468 [==============================] - 44s 94ms/step - loss: 0.4111 - accuracy: 0.9466 - val_loss: 0.4719 - val_accuracy: 0.9265 Epoch 37/40 468/468 [==============================] - 44s 94ms/step - loss: 0.4052 - accuracy: 0.9483 - val_loss: 0.4499 - val_accuracy: 0.9343 Epoch 38/40 468/468 [==============================] - 44s 94ms/step - loss: 0.4009 - accuracy: 0.9485 - val_loss: 0.4664 - val_accuracy: 0.9270 Epoch 39/40 468/468 [==============================] - 44s 94ms/step - loss: 0.3951 - accuracy: 0.9495 - val_loss: 0.4685 - val_accuracy: 0.9277 Epoch 40/40 468/468 [==============================] - 44s 95ms/step - loss: 0.3895 - accuracy: 0.9497 - val_loss: 0.4672 - val_accuracy: 0.9254 Figure 3: Phase 1 of training ResNet on the Fashion MNIST dataset with a learning rate of 1e-1 for 40 epochs before we stop via ctrl + c, adjust the learning rate, and resume Keras training. Here I’ve started training ResNet on the Fashion MNIST dataset using the SGD optimizer and an initial learning rate of 1e-1. After every epoch my loss/accuracy plot in Figure 3 updates, enabling me to monitor training in real-time. Past epoch 20 we can see training and validation loss starting to diverge, and by epoch 40 I decided to ctrl + c out of the train.py script. Phase #2: 10 epochs at 1e-2 The next step is to update both: My learning rate The number of epochs to train for For the learning rate, the standard practice is to lower it by an order of magnitude. 
Going back to Line 64 of train.py we can see that my initial learning rate is 1e-1 : # if there is no specific model checkpoint supplied, then initialize # the network (ResNet-56) and compile the model if args["model"] is None: print("[INFO] compiling model...") opt = SGD(lr=1e-1) model = ResNet.build(32, 32, 1, 10, (9, 9, 9), (64, 64, 128, 256), reg=0.0001) model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"]) I’m now going to update my learning rate to be 1e-2 on Line 79: # otherwise, we're using a checkpoint model else: # load the checkpoint from disk print("[INFO] loading {}...".format(args["model"])) model = load_model(args["model"]) # update the learning rate print("[INFO] old learning rate: {}".format( K.get_value(model.optimizer.lr))) K.set_value(model.optimizer.lr, 1e-2) print("[INFO] new learning rate: {}".format( K.get_value(model.optimizer.lr))) So, why am I updating Line 79 and not Line 64? The reason is due to the if/else statement. The else statement handles when we need to load a specific checkpoint from disk — once we have the checkpoint we’ll resume training, thus the learning rate needs to be updated in the else block.
Secondly, I also update my epochs on Line 101. Initially, the epochs value was 80 : # train the network print("[INFO] training network...") model.fit( x=aug.flow(trainX, trainY, batch_size=128), validation_data=(testX, testY), steps_per_epoch=len(trainX) // 128, epochs=80, callbacks=callbacks, verbose=1) I have decided to lower the number of epochs to train for to 40 epochs: # train the network print("[INFO] training network...") model.fit( x=aug.flow(trainX, trainY, batch_size=128), validation_data=(testX, testY), steps_per_epoch=len(trainX) // 128, epochs=40, callbacks=callbacks, verbose=1) Typically you’ll set the epochs value to be much larger than what you think it should actually be. The reason for this is due to the fact that we’re using the EpochCheckpoint class to save model snapshots every 5 epochs — if at any point we decide we’re unhappy with the training progress we can just ctrl + c out of the script and go back to a previous snapshot. Thus, there is no harm in training for longer since we can always resume training from a previous model weight file. After both my learning rate and the number of epochs to train for were updated, I then executed the following command: $ python train.py --checkpoints output/checkpoints \ --model output/checkpoints/epoch_40.hdf5 --start-epoch 40 [INFO] loading Fashion MNIST... [INFO] loading output/checkpoints/epoch_40.hdf5... [INFO] old learning rate: 0.10000000149011612 [INFO] new learning rate: 0.009999999776482582 [INFO] training network... Epoch 1/10 468/468 [==============================] - 45s 97ms/step - loss: 0.3606 - accuracy: 0.9599 - val_loss: 0.4173 - val_accuracy: 0.9412 Epoch 2/10 468/468 [==============================] - 44s 94ms/step - loss: 0.3509 - accuracy: 0.9637 - val_loss: 0.4171 - val_accuracy: 0.9416 Epoch 3/10 468/468 [==============================] - 44s 94ms/step - loss: 0.3484 - accuracy: 0.9647 - val_loss: 0.4144 - val_accuracy: 0.9424 Epoch 4/10 468/468 [==============================] - 44s 94ms/step - loss: 0.3454 - accuracy: 0.9657 - val_loss: 0.4151 - val_accuracy: 0.9412 Epoch 5/10 468/468 [==============================] - 46s 98ms/step - loss: 0.3426 - accuracy: 0.9667 - val_loss: 0.4159 - val_accuracy: 0.9416 Epoch 6/10 468/468 [==============================] - 45s 96ms/step - loss: 0.3406 - accuracy: 0.9663 - val_loss: 0.4160 - val_accuracy: 0.9417 Epoch 7/10 468/468 [==============================] - 45s 96ms/step - loss: 0.3409 - accuracy: 0.9663 - val_loss: 0.4150 - val_accuracy: 0.9418 Epoch 8/10 468/468 [==============================] - 44s 94ms/step - loss: 0.3362 - accuracy: 0.9687 - val_loss: 0.4159 - val_accuracy: 0.9428 Epoch 9/10 468/468 [==============================] - 44s 95ms/step - loss: 0.3341 - accuracy: 0.9686 - val_loss: 0.4175 - val_accuracy: 0.9406 Epoch 10/10 468/468 [==============================] - 44s 95ms/step - loss: 0.3336 - accuracy: 0.9687 - val_loss: 0.4164 - val_accuracy: 0.9420 Figure 4: Phase 2 of Keras start/stop/resume training. The learning rate is dropped from 1e-1 to 1e-2 as is evident in the plot at epoch 40. I continued training for 10 more epochs until I noticed validation metrics plateauing at which point I stopped training via ctrl + c again. Notice how we’ve updated our learning rate from 1e-1 to 1e-2 and then resumed training. We immediately see a drop in both training/validation loss as well as an increase in training/validation accuracy. 
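As mentioned above, the EpochCheckpoint class that snapshots the model every 5 epochs is implemented inside Deep Learning for Computer Vision with Python, so it is not reproduced here. Purely as an illustration of the idea, a minimal callback with similar "save a snapshot every N epochs" behavior might look like the following sketch. The class name, constructor arguments, and file naming convention are my own assumptions, not the book's implementation, and the import path may differ if you are using standalone Keras rather than tf.keras:

# a minimal sketch of a "save a snapshot every N epochs" callback,
# assuming tf.keras (adjust the import for standalone Keras)
import os
from tensorflow.keras.callbacks import Callback

class SimpleEpochCheckpoint(Callback):
    def __init__(self, outputPath, every=5, startAt=0):
        # store the output directory, the snapshot interval, and the
        # epoch number training is (re)starting from
        super(SimpleEpochCheckpoint, self).__init__()
        self.outputPath = outputPath
        self.every = every
        self.intEpoch = startAt

    def on_epoch_end(self, epoch, logs=None):
        # serialize the model to disk every `every` epochs so we can
        # resume training from this snapshot later
        if (self.intEpoch + 1) % self.every == 0:
            p = os.path.sep.join([self.outputPath,
                "epoch_{}.hdf5".format(self.intEpoch + 1)])
            self.model.save(p, overwrite=True)

        # increment the internal epoch counter
        self.intEpoch += 1

An instance of such a callback would simply be appended to the callbacks list passed to model.fit, alongside the training monitor.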
The problem at this point is that our validation metrics have plateaued — there may not be much more gain left without risking overfitting.
Because of this, I only allowed training to continue for another 10 epochs before once again using ctrl + c to exit the script. Phase #3: 5 epochs at 1e-3 For the final phase of training I decided to: Lower my learning rate from 1e-2 to 1e-3. Allow training to continue (but knowing I would likely only be training for a few epochs given the risk of overfitting). After updating my learning rate, I executed the following command: $ python train.py --checkpoints output/checkpoints \ --model output/checkpoints/epoch_50.hdf5 --start-epoch 50 [INFO] loading Fashion MNIST... [INFO] loading output/checkpoints/epoch_50.hdf5... [INFO] old learning rate: 0.009999999776482582 [INFO] new learning rate: 0.0010000000474974513 [INFO] training network... Epoch 1/5 468/468 [==============================] - 45s 97ms/step - loss: 0.3302 - accuracy: 0.9696 - val_loss: 0.4155 - val_accuracy: 0.9414 Epoch 2/5 468/468 [==============================] - 44s 94ms/step - loss: 0.3297 - accuracy: 0.9703 - val_loss: 0.4160 - val_accuracy: 0.9411 Epoch 3/5 468/468 [==============================] - 44s 94ms/step - loss: 0.3302 - accuracy: 0.9694 - val_loss: 0.4157 - val_accuracy: 0.9415 Epoch 4/5 468/468 [==============================] - 44s 94ms/step - loss: 0.3282 - accuracy: 0.9708 - val_loss: 0.4143 - val_accuracy: 0.9421 Epoch 5/5 468/468 [==============================] - 44s 95ms/step - loss: 0.3305 - accuracy: 0.9694 - val_loss: 0.4152 - val_accuracy: 0.9414 Figure 5: Upon resuming Keras training for phase 3, I only let the network train for 5 epochs because there is not significant learning progress being made. Using a start/stop/resume training approach with Keras, we have achieved 94.14% validation accuracy. At this point the learning rate has become so small that the corresponding weight updates are also very small, implying that the model cannot learn much more. I only allowed training to continue for 5 epochs before killing the script. However, looking at my final metrics you can see that we are obtaining 96.94% training accuracy along with 94.14% validation accuracy. We were able to achieve this result by using our start, stop, and resume training method. At this point, we could either continue to tune our learning rate, utilize a learning rate scheduler, apply Cyclical Learning Rates, or try a new model architecture altogether.
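If you would prefer to automate the learning rate drops instead of babysitting the training process, a learning rate scheduler is one option. The snippet below is a minimal sketch of a step-based decay using Keras' built-in LearningRateScheduler callback (the initial rate, drop factor, and drop interval are illustrative values only, not ones tuned for this experiment, and the import path may differ for standalone Keras):

# a minimal step-decay schedule, assuming tf.keras
from tensorflow.keras.callbacks import LearningRateScheduler

def step_decay(epoch):
    # drop the learning rate by an order of magnitude every 40 epochs,
    # starting from 1e-1 (illustrative values only)
    initLR = 1e-1
    factor = 0.1
    dropEvery = 40
    return initLR * (factor ** (epoch // dropEvery))

# append the scheduler to the existing callbacks list in train.py
# before calling model.fit
callbacks.append(LearningRateScheduler(step_decay))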
What's next? We recommend PyImageSearch University. Course information: 84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024 ★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled I strongly believe that if you had the right teacher you could master computer vision and deep learning. Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science? That’s not the case. All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught.
If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery. Inside PyImageSearch University you'll find: ✓ 84 courses on essential computer vision, deep learning, and OpenCV topics ✓ 84 Certificates of Completion ✓ 114+ hours of on-demand video ✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques ✓ Pre-configured Jupyter Notebooks in Google Colab ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!) ✓ Access to centralized code repos for all 536+ tutorials on PyImageSearch ✓ Easy one-click downloads for code, datasets, pre-trained models, etc. ✓ Access on mobile, laptop, desktop, etc. Click here to join PyImageSearch University Summary In this tutorial you learned how to start, stop, and resume training using Keras and Deep Learning. Learning how to resume from where your training left off is a super valuable skill for two reasons: It ensures that if your training script crashes, you can pick up again from the most recent model checkpoint. It enables you to adjust your learning rate and improve your model accuracy. When training your own custom neural networks you’ll want to monitor your loss and accuracy — once you start to see validation loss/accuracy plateau, try killing the training script, lowering your learning rate by an order of magnitude, and then resume training.
You’ll often find that this method of training can lead to higher accuracy models. However, you should be wary of overfitting! Lowering your learning rate enables your model to descend into lower areas of the loss landscape; however, there is no guarantee that these lower loss areas will still generalize! You likely will only be able to drop the learning rate 1-3 times before either: The learning rate becomes too small, making the corresponding weight updates too small, and preventing the model from learning further. Validation loss stagnates or explodes while training loss continues to drop (implying that the model is overfitting). If those cases occur and your model is still not satisfactory you should consider adjusting other hyperparameters to your model, including regularization strength, dropout, etc. You may want to explore other model architectures as well. For more of my tips, suggestions, and best practices when training your own neural networks on your custom datasets, be sure to refer to Deep Learning for Computer Vision with Python, where I cover my best practices in-depth. To download the source code to this tutorial (and be notified when future tutorials are published on the PyImageSearch blog), just enter your email address in the form below! Download the Source Code and FREE 17-page Resource Guide Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning.
Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL!
Click here to download the source code to this post In this tutorial, you will learn how to use Keras and the Rectified Adam optimizer as a drop-in replacement for the standard Adam optimizer, potentially leading to a higher accuracy model (and in fewer epochs). Today we’re kicking off a two-part series on the Rectified Adam optimizer: Rectified Adam (RAdam) optimizer with Keras (today’s post) Is Rectified Adam actually *better* than Adam? (next week’s tutorial) Rectified Adam is a brand new deep learning model optimizer introduced by a collaboration between members of the University of Illinois, Georgia Tech, and Microsoft Research. The goal of the Rectified Adam optimizer is two-fold: Obtain a more accurate/more generalizable deep neural network Complete training in fewer epochs Sound too good to be true? Well, it might just be. You’ll need to read the rest of this tutorial to find out. To learn how to use the Rectified Adam optimizer with Keras, just keep reading! Looking for the source code to this post? Jump Right To The Downloads Section Rectified Adam (RAdam) optimizer with Keras In the first part of this tutorial, we’ll discuss the Rectified Adam optimizer, including how it’s different than the standard Adam optimizer (and why we should care). From there I’ll show you how to use the Rectified Adam optimizer with the Keras deep learning library.
We’ll then run some experiments and compare Adam to Rectified Adam. What is the Rectified Adam optimizer? Figure 1: Using the Rectified Adam (RAdam) deep learning optimizer with Keras. ( image source: Figure 6 from Liu et al.) A few weeks ago the deep learning community was all abuzz after Liu et al. published a brand new paper entitled On the Variance of the Adaptive Learning Rate and Beyond. This paper introduced a new deep learning optimizer called Rectified Adam (or RAdam for short). Rectified Adam is meant to be a drop-in replacement for the standard Adam optimizer. So, why is Liu et al. ’s contribution so important?
And why is the deep learning community so excited about it? Here’s a quick rundown on why you should care about it: Learning rate warmup heuristics work well to stabilize training. These heuristics also work well to improve generalization. Liu et al. decided to study the theory behind learning rate warmup… …but they found a problem with adaptive learning rates — during the first few batches the model did not generalize well and had very high variance. The authors studied the problem in detail and concluded that the issue can be resolved/mitigated by: 1. Applying warm up with a low initial learning rate. 2. Or, simply turning off the momentum term for the first few sets of input batches. As training continues, the variance will stabilize, and from there, the learning rate can be increased and the momentum term can be added back in.
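To make the warmup heuristic concrete, the sketch below shows what a simple linear learning rate warmup might look like when implemented by hand with a Keras LearningRateScheduler callback. The base rate and number of warmup epochs are arbitrary values chosen for illustration; Rectified Adam performs this kind of correction internally, so you would not normally stack a manual warmup on top of it:

# a minimal, hand-rolled linear warmup schedule (illustrative only;
# adjust the import if you are using tf.keras)
from keras.callbacks import LearningRateScheduler

BASE_LR = 1e-3
WARMUP_EPOCHS = 5

def warmup_schedule(epoch):
    # linearly ramp the learning rate up to BASE_LR over the first
    # WARMUP_EPOCHS epochs, then hold it constant
    if epoch < WARMUP_EPOCHS:
        return BASE_LR * float(epoch + 1) / WARMUP_EPOCHS
    return BASE_LR

warmup_cb = LearningRateScheduler(warmup_schedule)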
The authors call this optimizer Rectified Adam (RAdam), a variant of the Adam optimizer, as it “rectifies” (i.e., corrects) the variance/generalization issues apparent in other adaptive learning rate optimizers. But the question remains — is Rectified Adam actually better than standard Adam? To answer that, you’ll need to finish reading this tutorial and read next week’s post which includes a full comparison. For more information about Rectified Adam, including details on both the theoretical and empirical results, be sure to refer to Liu et al.’s paper. Project structure Let’s inspect our project layout: $ tree --dirsfirst . ├── pyimagesearch │   ├── __init__.py │   └── resnet.py ├── cifar10_adam.png ├── cifar10_rectified_adam.png └── train.py 1 directory, 5 files Our ResNet CNN is contained within the pyimagesearch module. The resnet.py file contains the exact ResNet model class included with Deep Learning for Computer Vision with Python. We will train ResNet on the CIFAR-10 dataset with both the Adam and RAdam optimizers inside of train.py, which we’ll review later in this tutorial. The training script will generate an accuracy/loss plot each time it is run — two .png files (one for each of the Adam and Rectified Adam experiments) are included in the “Downloads”.
Installing Rectified Adam for Keras This tutorial requires the following software to be installed in your environment: TensorFlow Keras Rectified Adam for Keras scikit-learn matplotlib Luckily, all of the software is pip installable. If you’ve ever followed one of my installation tutorials, then you know I’m a fan of virtualenv and virtualenvwrapper for managing Python virtual environments. The first command below, workon , assumes that you have these packages installed, but it is optional. Let’s install the software now: $ workon <env_name> # replace "<env_name>" with your environment $ pip install tensorflow # or tensorflow-gpu $ pip install keras $ pip install scikit-learn $ pip install matplotlib The original implementation of RAdam by Liu et al. was in PyTorch; however, a Keras implementation was created by Zhao HG. You can install the Keras implementation of Rectified Adam via the following command: $ pip install keras-rectified-adam To verify that the Keras + RAdam package has been successfully installed, open up a Python shell and attempt to import keras_radam: $ python >>> import keras_radam >>> Provided there are no errors during the import, you can assume Rectified Adam is successfully installed on your deep learning box! Implementing Rectified Adam with Keras Let’s now learn how we can use Rectified Adam with Keras. If you are unfamiliar with Keras and/or deep learning, please refer to my Keras Tutorial. For a full review of deep learning optimizers, refer to the following chapters of Deep Learning for Computer Vision with Python: Starter Bundle – Chapter 9: “Optimization Methods and Regularization Techniques” Practitioner Bundle – Chapter 7: “Advanced Optimization Methods” Otherwise, if you’re ready to go, let’s dive in. Open up a new file, name it train.py, and insert the following code: # set the matplotlib backend so figures can be saved in the background import matplotlib matplotlib.use("Agg") # import the necessary packages from pyimagesearch.resnet import ResNet from sklearn.preprocessing import LabelBinarizer from sklearn.metrics import classification_report from keras.preprocessing.image import ImageDataGenerator from keras.optimizers import Adam from keras_radam import RAdam from keras.datasets import cifar10 import matplotlib.pyplot as plt import numpy as np import argparse # construct the argument parser and parse the arguments ap = argparse.
ArgumentParser() ap.add_argument("-p", "--plot", type=str, required=True, help="path to output training plot") ap.add_argument("-o", "--optimizer", type=str, default="adam", choices=["adam", "radam"], help="type of optmizer") args = vars(ap.parse_args()) Lines 2-15 import our packages and modules. Most notably, Lines 10 and 11 import Adam and RAdam optimizers. We will use the "Agg" backend of matplotlib so that we can save our training plots to disk (Line 3). Lines 18-24 then parse two command line arguments: --plot : The path to our output training plot. --optimizer : The type of optimizer that we’ll use for training (either adam or radam). From here, let’s go ahead and perform a handful of initializations: # initialize the number of epochs to train for and batch size EPOCHS = 75 BS = 128 # load the training and testing data, then scale it into the # range [0, 1] print("[INFO] loading CIFAR-10 data...") ((trainX, trainY), (testX, testY)) = cifar10.load_data() trainX = trainX.astype("float") / 255.0 testX = testX.astype("float") / 255.0 # convert the labels from integers to vectors lb = LabelBinarizer() trainY = lb.fit_transform(trainY) testY = lb.transform(testY) # construct the image generator for data augmentation aug = ImageDataGenerator(width_shift_range=0.1, height_shift_range=0.1, horizontal_flip=True, fill_mode="nearest") # initialize the label names for the CIFAR-10 dataset labelNames = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"] Lines 27 and 28 initialize the number of epochs to train for as well as our batch size. Feel free to tune these hyperparameters, just keep in mind that they will affect results. Lines 33-35 load and preprocess our CIFAR-10 data including scaling data to the range [0, 1]. Lines 38-40 then binarize our class labels from integers to vectors. Lines 43-45 construct our data augmentation object.
Be sure to refer to my data augmentation tutorial if you are new to data augmentation, how it works, or why we use it. Our CIFAR-10 class labelNames are listed on Lines 48 and 49. Now we’re to the meat of this tutorial — initializing either the Adam or RAdam optimizer: # check if we are using Adam if args["optimizer"] == "adam": # initialize the Adam optimizer print("[INFO] using Adam optimizer") opt = Adam(lr=1e-3) # otherwise, we are using Rectified Adam else: # initialize the Rectified Adam optimizer print("[INFO] using Rectified Adam optimizer") opt = RAdam(total_steps=5000, warmup_proportion=0.1, min_lr=1e-5) Depending on the --optimizer command line argument, we’ll either initialize: Adam with a learning rate of 1e-3 (Lines 52-55) Or RAdam with a minimum learning rate of 1e-5 and warm up (Lines 58-61). Be sure to refer to the original implementation notes on warm up which Zhao HG also implemented With our optimizer ready to go, now we’ll compile and train our model: # initialize our optimizer and model, then compile it model = ResNet.build(32, 32, 3, 10, (9, 9, 9), (64, 64, 128, 256), reg=0.0005) model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"]) # train the network H = model.fit_generator( aug.flow(trainX, trainY, batch_size=BS), validation_data=(testX, testY), steps_per_epoch=trainX.shape[0] // BS, epochs=EPOCHS, verbose=1) We compile ResNet with our specified optimizer (either Adam or RAdam) via Lines 64-67. Lines 70-75 launch the training process. Be sure to refer to my tutorial on Keras’ fit_generator method if you are new to using this function to train a deep neural network with Keras. To wrap up, we print our classification report and plot our loss/accuracy curves over the duration of the training epochs: # evaluate the network print("[INFO] evaluating network...") predictions = model.predict(testX, batch_size=BS) print(classification_report(testY.argmax(axis=1), predictions.argmax(axis=1), target_names=labelNames)) # determine the number of epochs and then construct the plot title N = np.arange(0, EPOCHS) title = "Training Loss and Accuracy on CIFAR-10 ({})".format( args["optimizer"]) # plot the training loss and accuracy plt.style.use("ggplot") plt.figure() plt.plot(N, H.history["loss"], label="train_loss") plt.plot(N, H.history["val_loss"], label="val_loss") plt.plot(N, H.history["acc"], label="train_acc") plt.plot(N, H.history["val_acc"], label="val_acc") plt.title(title) plt.xlabel("Epoch #") plt.ylabel("Loss/Accuracy") plt.legend() plt.savefig(args["plot"]) Standard Adam Optimizer Results To train ResNet on the CIFAR-10 dataset using the Adam optimizer, make sure you use the “Downloads” section of this blog post to download the source guide to this guide. From there, open up a terminal and execute the following command: $ python train.py --plot cifar10_adam.png --optimizer adam [INFO] loading CIFAR-10 data... 
[INFO] using Adam optimizer Epoch 1/75 390/390 [==============================] - 205s 526ms/step - loss: 1.9642 - acc: 0.4437 - val_loss: 1.7449 - val_acc: 0.5248 Epoch 2/75 390/390 [==============================] - 185s 475ms/step - loss: 1.5199 - acc: 0.6050 - val_loss: 1.4735 - val_acc: 0.6218 Epoch 3/75 390/390 [==============================] - 185s 474ms/step - loss: 1.2973 - acc: 0.6822 - val_loss: 1.2712 - val_acc: 0.6965 Epoch 4/75 390/390 [==============================] - 185s 474ms/step - loss: 1.1451 - acc: 0.7307 - val_loss: 1.2450 - val_acc: 0.7109 Epoch 5/75 390/390 [==============================] - 185s 474ms/step - loss: 1.0409 - acc: 0.7643 - val_loss: 1.0918 - val_acc: 0.7542 ... Epoch 71/75 390/390 [==============================] - 185s 474ms/step - loss: 0.4215 - acc: 0.9358 - val_loss: 0.6372 - val_acc: 0.8775 Epoch 72/75 390/390 [==============================] - 185s 474ms/step - loss: 0.4241 - acc: 0.9347 - val_loss: 0.6024 - val_acc: 0.8819 Epoch 73/75 390/390 [==============================] - 185s 474ms/step - loss: 0.4226 - acc: 0.9350 - val_loss: 0.5906 - val_acc: 0.8835 Epoch 74/75 390/390 [==============================] - 185s 474ms/step - loss: 0.4198 - acc: 0.9369 - val_loss: 0.6321 - val_acc: 0.8759 Epoch 75/75 390/390 [==============================] - 185s 474ms/step - loss: 0.4127 - acc: 0.9391 - val_loss: 0.5669 - val_acc: 0.8953 [INFO] evaluating network... [INFO] evaluating network... precision recall f1-score support airplane 0.81 0.94 0.87 1000 automobile 0.96 0.96 0.96 1000 bird 0.86 0.87 0.86 1000 cat 0.84 0.75 0.79 1000 deer 0.91 0.91 0.91 1000 dog 0.86 0.84 0.85 1000 frog 0.89 0.95 0.92 1000 horse 0.93 0.92 0.93 1000 ship 0.97 0.88 0.92 1000 truck 0.96 0.92 0.94 1000 micro avg 0.90 0.90 0.90 10000 macro avg 0.90 0.90 0.90 10000 weighted avg 0.90 0.90 0.90 10000 Figure 2: To achieve a baseline, we first train ResNet using the Adam optimizer on the CIFAR-10 dataset. We will compare the results to the Rectified Adam (RAdam) optimizer using Keras. Looking at our output you can see that we obtained 90% accuracy on our testing set.
Examining Figure 2 shows that there is little overfitting going on as well — our training progress is quite stable. Rectified Adam Optimizer Results Now, let’s train ResNet on CIFAR-10 using the Rectified Adam optimizer: $ python train.py --plot cifar10_rectified_adam.png --optimizer radam [INFO] loading CIFAR-10 data... [INFO] using Rectified Adam optimizer Epoch 1/75 390/390 [==============================] - 212s 543ms/step - loss: 2.4813 - acc: 0.2489 - val_loss: 2.0976 - val_acc: 0.3921 Epoch 2/75 390/390 [==============================] - 188s 483ms/step - loss: 1.8771 - acc: 0.4797 - val_loss: 1.8231 - val_acc: 0.5041 Epoch 3/75 390/390 [==============================] - 188s 483ms/step - loss: 1.5900 - acc: 0.5857 - val_loss: 1.4483 - val_acc: 0.6379 Epoch 4/75 390/390 [==============================] - 188s 483ms/step - loss: 1.3919 - acc: 0.6564 - val_loss: 1.4264 - val_acc: 0.6466 Epoch 5/75 390/390 [==============================] - 188s 483ms/step - loss: 1.2457 - acc: 0.7046 - val_loss: 1.2151 - val_acc: 0.7138 ... Epoch 71/75 390/390 [==============================] - 188s 483ms/step - loss: 0.6256 - acc: 0.9054 - val_loss: 0.7919 - val_acc: 0.8551 Epoch 72/75 390/390 [==============================] - 188s 482ms/step - loss: 0.6184 - acc: 0.9071 - val_loss: 0.7894 - val_acc: 0.8537 Epoch 73/75 390/390 [==============================] - 188s 483ms/step - loss: 0.6242 - acc: 0.9051 - val_loss: 0.7981 - val_acc: 0.8519 Epoch 74/75 390/390 [==============================] - 188s 483ms/step - loss: 0.6191 - acc: 0.9062 - val_loss: 0.7969 - val_acc: 0.8519 Epoch 75/75 390/390 [==============================] - 188s 483ms/step - loss: 0.6143 - acc: 0.9098 - val_loss: 0.7935 - val_acc: 0.8525 [INFO] evaluating network... precision recall f1-score support airplane 0.86 0.88 0.87 1000 automobile 0.91 0.95 0.93 1000 bird 0.83 0.76 0.79 1000 cat 0.76 0.69 0.72 1000 deer 0.85 0.81 0.83 1000 dog 0.79 0.79 0.79 1000 frog 0.81 0.94 0.87 1000 horse 0.89 0.89 0.89 1000 ship 0.94 0.91 0.92 1000 truck 0.88 0.91 0.89 1000 micro avg 0.85 0.85 0.85 10000 macro avg 0.85 0.85 0.85 10000 weighted avg 0.85 0.85 0.85 10000 Figure 3: The Rectified Adam (RAdam) optimizer is used in conjunction with ResNet using Keras on the CIFAR-10 dataset. But how to the results compare to the standard Adam optimizer? Notice how the --optimizer  switch is set to radam for this second run of our training script. But wait a second — why are we only obtaining 85% accuracy here? Isn’t the Rectified Adam optimizer supposed to outperform standard Adam? Why is our accuracy somehow worse? Let’s discuss that in the next section. Is Rectified Adam actually better than Adam? If you look at our results you’ll see that the standard Adam optimizer outperformed the new Rectified Adam optimizer.
What’s going on here? Isn’t Rectified Adam supposed to obtain higher accuracy and in fewer epochs? Why is Rectified Adam performing worse than standard Adam? Well, to start, keep in mind that we’re looking at the results from only a single dataset here — a true evaluation would look at the results across multiple datasets. …and that’s exactly what I’ll be doing next week! To see a full-blown comparison between Adam and Rectified Adam, and determine which optimizer is better, you’ll need to tune in for next week’s blog post! What's next? We recommend PyImageSearch University. Course information: 84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024 ★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled I strongly believe that if you had the right teacher you could master computer vision and deep learning. Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated?
Or has to involve complex mathematics and equations? Or requires a degree in computer science? That’s not the case. All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught. If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery. Inside PyImageSearch University you'll find: ✓ 84 courses on essential computer vision, deep learning, and OpenCV topics ✓ 84 Certificates of Completion ✓ 114+ hours of on-demand video ✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques ✓ Pre-configured Jupyter Notebooks in Google Colab ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!)
✓ Access to centralized code repos for all 536+ tutorials on PyImageSearch ✓ Easy one-click downloads for code, datasets, pre-trained models, etc. ✓ Access on mobile, laptop, desktop, etc. Click here to join PyImageSearch University Summary In this tutorial, you learned how to use the Rectified Adam optimizer as a drop-in replacement for the standard Adam optimizer using the Keras deep learning library. We then ran a set of experiments comparing Adam performance to Rectified Adam performance. Our results show that standard Adam actually outperformed the RAdam optimizer. So what gives? Liu et al. reported higher accuracy with fewer epochs in their paper — are we doing anything wrong? Is something broken with our Rectified Adam optimizer? To answer those questions you’ll need to tune in next week where I’ll be providing a full set of benchmark experiments comparing Adam to Rectified Adam.
You won’t want to miss next week’s post; it’s going to be a good one! To download the source code to this post (and be notified when next week’s tutorial goes live), be sure to enter your email address in the form below! Download the Source Code and FREE 17-page Resource Guide Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL!
Click here to download the source code to this post Is the Rectified Adam (RAdam) optimizer actually better than the standard Adam optimizer? According to my 24 experiments, the answer is no, typically not (but there are cases where you do want to use it instead of Adam). In Liu et al.’s 2019 paper, On the Variance of the Adaptive Learning Rate and Beyond, the authors claim that Rectified Adam can obtain: Better accuracy (or at least identical accuracy when compared to Adam) And in fewer epochs than standard Adam The authors tested their hypothesis on three different datasets, including one NLP dataset and two computer vision datasets (ImageNet and CIFAR-10). In each case Rectified Adam outperformed standard Adam…but failed to outperform standard Stochastic Gradient Descent (SGD)! The Rectified Adam optimizer has some strong theoretical justifications — but as a deep learning practitioner, you need more than just theory — you need to see empirical results applied to a variety of datasets. And perhaps more importantly, you also need to obtain mastery-level experience operating/driving the optimizer (or a small subset of optimizers). Today is part two in our two-part series on the Rectified Adam optimizer: Rectified Adam (RAdam) optimizer with Keras (last week’s post) Is Rectified Adam actually *better* than Adam? (today’s tutorial) If you haven’t yet, go ahead and read part one to ensure you have a good understanding of how the Rectified Adam optimizer works. From there, read today’s post to help you understand how to design, code, and run experiments used to compare deep learning optimizers. To learn how to compare Rectified Adam to standard Adam, just keep reading!
Looking for the source code to this post? Jump Right To The Downloads Section Is Rectified Adam actually *better* than Adam? In the first part of this tutorial, we’ll briefly discuss the Rectified Adam optimizer, including how it works and why it’s interesting to us as deep learning practitioners. From there, I’ll guide you in designing and planning our set of experiments to compare Rectified Adam to Adam — you can use this section to learn how to design your own deep learning experiments as well. We’ll then review the project structure for this post, including implementing our training and evaluation scripts by hand. Finally, we’ll run our experiments, collect results, and ultimately decide: is Rectified Adam actually better than Adam? What is the Rectified Adam optimizer? Figure 1: The Rectified Adam (RAdam) deep learning optimizer. Is it better than the standard Adam optimizer? (image source: Figure 6 from Liu et al.)
The Rectified Adam optimizer was proposed by Liu et al. in their 2019 paper, On the Variance of the Adaptive Learning Rate and Beyond. In their paper they discussed how their update to the Adam optimizer, called Rectified Adam, can: Obtain a higher accuracy/more generalizable deep neural network. Complete training in fewer epochs. Their work had some strong theoretical justifications as well. They found that adaptive learning rate optimizers (such as Adam) both: Struggle to generalize during the first few batch updates Have very high variance Liu et al. studied the problem in detail and found that the issue could be rectified (hence the name, Rectified Adam) by: Applying warm up with a low initial learning rate. Simply turning off the momentum term for the first few sets of input training batches. The authors evaluated their experiments on one NLP dataset and two image classification datasets and found that their Rectified Adam implementation outperformed standard Adam (but neither optimizer outperformed standard SGD). We’ll be continuing Liu et al.
’s experiments today and comparing Rectified Adam to standard Adam in 24 separate experiments. For more details on how the Rectified Adam optimizer works, be sure to review my previous blog post. Planning our experiments Figure 2: We will plan our set of experiments to evaluate the performance of the Rectified Adam (RAdam) optimizer using Keras. To compare Adam to Rectified Adam, we’ll be training three Convolutional Neural Networks (CNNs), including: ResNet GoogLeNet MiniVGGNet The implementations of these CNNs came directly from my book, Deep Learning for Computer Vision with Python. These networks will be trained on four datasets: MNIST Fashion MNIST CIFAR-10 CIFAR-100 For each combination of dataset and CNN architecture, we’ll apply two optimizers: Adam Rectified Adam Taking all possible combinations, we end up with 3 x 4 x 2 = 24 separate training experiments. We’ll run each of these experiments individually, collect the results, and then interpret them to determine which optimizer is indeed better. Whenever you plan your own experiments make sure you take the time to write out the list of model architectures, optimizers, and datasets you intend on applying them to. Additionally, you may want to list the hyperparameters you believe are important and are worth tuning (e.g., learning rate, L2 weight decay strength, etc.). Considering the 24 experiments we plan to conduct, it makes the most sense to automate the data collection phase. From there, we will be able to work on other tasks while the computation is underway (often requiring days of compute time).
Upon completion of the data collection for our 24 experiments, we will then be able to sit down and analyze the plots and classification reports in order to evaluate RAdam on our CNNs, datasets, and optimizers. How to design your own deep learning experiments Figure 3: Designing your own deep learning experiments requires thought and planning. Consider your typical deep learning workflow and design your initial set of experiments such that a thorough preliminary investigation can be conducted using automation. Planning for automated evaluation now will save you time (and money) down the line. Typically, my experiment design workflow goes something like this: Select 2-3 model architectures that I believe would work well on a particular dataset (e.g., ResNet, VGGNet, etc.). Decide if I want to train from scratch or perform transfer learning. Use my learning rate finder to find an acceptable initial learning rate for the SGD optimizer. Train the model on my dataset using SGD and Keras’ standard decay schedule. Look at my results from training, select the architecture that performed best, and start tuning my hyperparameters, including model capacity, regularization strength, revisiting the initial learning rate, applying Cyclical Learning Rates, and potentially exploring other optimizers. You’ll notice that I tend to use SGD in my initial experiments instead of Adam, RMSprop, etc.
Why is that? To answer that question you’ll need to read the “You need to obtain mastery level experience operating these three optimizers” section below. Note: For more of my suggestions, tips, and best practices when designing and running your own experiments, be sure to refer to my book, Deep Learning for Computer Vision with Python. However, in the context of this tutorial, we’re attempting to compare our results to the work of Liu et al. We, therefore, need to fix the model architectures, training from scratch, learning rate, and optimizers — our experiment design now becomes: Train ResNet, GoogLeNet, and MiniVGGNet on MNIST, Fashion MNIST, CIFAR-10, and CIFAR-100, respectively. Train all networks from scratch. Use the initial, default learning rates for Adam/Rectified Adam (1e-3). Utilize the Adam and Rectified Adam optimizers for training. Since these are one-off experiments we’ll not be performing an exhaustive dive on tuning hyperparameters (you can refer to Deep Learning for Computer Vision with Python if you would like details on how to tune your hyperparameters). At this point we’ve motivated and planned our set of experiments — now let’s learn how to implement our training and evaluation scripts.
Project structure Go ahead and grab the “Downloads” and then inspect the project directory with the tree command: $ tree --dirsfirst --filelimit 10 . ├── output [48 entries] ├── plots [12 entries] ├── pyimagesearch │   ├── __init__.py │   ├── minigooglenet.py │   ├── minivggnet.py │   └── resnet.py ├── combinations.py ├── experiments.sh ├── plot.py └── train.py 3 directories, 68 files Our project consists of two output directories: output/ : Holds our classification report .txt files organized by experiment. Additionally, there is one .pickle file per experiment containing the serialized training history data (for plotting purposes). plots/ : For each CNN/dataset combination, a stacked accuracy/loss curve plot is output so that we can conveniently compare the Adam and RAdam optimizers. The pyimagesearch module contains three Convolutional Neural Networks (CNNs) architectures constructed with Keras. These CNN implementations come directly from Deep Learning for Computer Vision with Python. We will review three Python scripts in today’s tutorial: train.py : Our training script accepts a CNN architecture, dataset, and optimizer via command line argument and begins fitting a model accordingly. This script will be invoked automatically for each of our 24 experiments via the experiments.sh bash script. Our training script produces two types of output files: .txt : A classification report printout in scikit-learn’s standard format. .pickle : Serialized training history so that it can later be recalled for plotting purposes.
combinations.py : This script computes all the experiment combinations for which we will train models and collect data. The result of executing this script is a bash/shell script named experiments.sh . plot.py : Plots accuracy/loss curves for Adam/RAdam using matplotlib directly from the output/*.pickle files. Implementing the training script Our training script will be responsible for accepting: A given model architecture A dataset An optimizer And from there, the script will handle training the specified model, on the supplied dataset, using the specified optimizer. We’ll use this script to run each of our 24 experiments. Let’s go ahead and implement the train.py script now: # import the necessary packages from pyimagesearch.minigooglenet import MiniGoogLeNet from pyimagesearch.minivggnet import MiniVGGNet from pyimagesearch.resnet import ResNet from sklearn.preprocessing import LabelBinarizer from sklearn.metrics import classification_report from keras.preprocessing.image import ImageDataGenerator from keras.optimizers import Adam from keras_radam import RAdam from keras.datasets import fashion_mnist from keras.datasets import cifar100 from keras.datasets import cifar10 from keras.datasets import mnist import numpy as np import argparse import pickle import cv2 Imports include our three CNN architectures, four datasets, and two optimizers (Adam and RAdam ). Let’s parse command line arguments: # construct the argument parser and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-i", "--history", required=True, help="path to output training history file") ap.add_argument("-r", "--report", required=True, help="path to output classification report file") ap.add_argument("-d", "--dataset", type=str, default="mnist", choices=["mnist", "fashion_mnist", "cifar10", "cifar100"], help="dataset name") ap.add_argument("-m", "--model", type=str, default="resnet", choices=["resnet", "googlenet", "minivggnet"], help="type of model architecture") ap.add_argument("-o", "--optimizer", type=str, default="adam", choices=["adam", "radam"], help="type of optmizer") args = vars(ap.parse_args()) Our command line arguments include: --history : The path to the output training history .pickle file. --report : The path to the output classification report .txt file. --dataset : The dataset to train our model on can be any of the choices listed on Line 26.
--model : The deep learning model architecture must be one of the choices on Line 29. --optimizer : Our adam or radam deep learning optimization method. Upon providing the command line arguments via the terminal, our training script dynamically sets up and launches the experiment. Output files are named according to the parameters of the experiment. From here we’ll set two constants and initialize the default number of channels for the dataset: # initialize the batch size and number of epochs to train BATCH_SIZE = 128 NUM_EPOCHS = 60 # initialize the number of channels in the dataset numChans = 1 If our --dataset is MNIST or Fashion MNIST, we’ll load the dataset in the following manner: # check if we are using either the MNIST or Fashion MNIST dataset if args["dataset"] in ("mnist", "fashion_mnist"): # check if we are using MNIST if args["dataset"] == "mnist": # initialize the label names for the MNIST dataset labelNames = [str(i) for i in range(0, 10)] # load the MNIST dataset print("[INFO] loading MNIST dataset...") ((trainX, trainY), (testX, testY)) = mnist.load_data() # otherwise, we are using Fashion MNIST else: # initialize the label names for the Fashion MNIST dataset labelNames = ["top", "trouser", "pullover", "dress", "coat", "sandal", "shirt", "sneaker", "bag", "ankle boot"] # load the Fashion MNIST dataset print("[INFO] loading Fashion MNIST dataset...") ((trainX, trainY), (testX, testY)) = fashion_mnist.load_data() # MNIST dataset images are 28x28 but the networks we will be # training expect 32x32 images trainX = np.array([cv2.resize(x, (32, 32)) for x in trainX]) testX = np.array([cv2.resize(x, (32, 32)) for x in testX]) # reshape the data matrices to include a channel dimension which # is required for training trainX = trainX.reshape((trainX.shape[0], 32, 32, 1)) testX = testX.reshape((testX.shape[0], 32, 32, 1)) Keep in mind that MNIST images are 28×28 but we need 32×32 images for our architectures. Thus, Lines 66 and 67 resize all images in the dataset. Lines 71 and 72 then add the channel dimension.
Otherwise, we have a CIFAR variant --dataset to load: # otherwise, we must be using a variant of CIFAR else: # update the number of channels in the images numChans = 3 # check if we are using CIFAR-10 if args["dataset"] == "cifar10": # initialize the label names for the CIFAR-10 dataset labelNames = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"] # load the CIFAR-10 dataset print("[INFO] loading CIFAR-10 dataset...") ((trainX, trainY), (testX, testY)) = cifar10.load_data() # otherwise, we are using CIFAR-100 else: # initialize the label names for the CIFAR-100 dataset labelNames = ["apple", "aquarium_fish", "baby", "bear", "beaver", "bed", "bee", "beetle", "bicycle", "bottle", "bowl", "boy", "bridge", "bus", "butterfly", "camel", "can", "castle", "caterpillar", "cattle", "chair", "chimpanzee", "clock", "cloud", "cockroach", "couch", "crab", "crocodile", "cup", "dinosaur", "dolphin", "elephant", "flatfish", "forest", "fox", "girl", "hamster", "house", "kangaroo", "keyboard", "lamp", "lawn_mower", "leopard", "lion", "lizard", "lobster", "man", "maple_tree", "motorcycle", "mountain", "mouse", "mushroom", "oak_tree", "orange", "orchid", "otter", "palm_tree", "pear", "pickup_truck", "pine_tree", "plain", "plate", "poppy", "porcupine", "possum", "rabbit", "raccoon", "ray", "road", "rocket", "rose", "sea", "seal", "shark", "shrew", "skunk", "skyscraper", "snail", "snake", "spider", "squirrel", "streetcar", "sunflower", "sweet_pepper", "table", "tank", "telephone", "television", "tiger", "tractor", "train", "trout", "tulip", "turtle", "wardrobe", "whale", "willow_tree", "wolf", "woman", "worm"] # load the CIFAR-100 dataset print("[INFO] loading CIFAR-100 dataset...") ((trainX, trainY), (testX, testY)) = cifar100.load_data() CIFAR datasets contain 3-channel color images (Line 77). These datasets are already comprised of 32×32 images (no resizing is necessary). From here, we’ll scale our data and determine the total number of classes: # scale the data to the range [0, 1] trainX = trainX.astype("float32") / 255.0 testX = testX.astype("float32") / 255.0 # determine the total number of unique classes in the dataset numClasses = len(np.unique(trainY)) print("[INFO] {} classes in dataset".format(numClasses)) Followed by initializing this experiment’s deep learning optimizer: # check if we are using Adam if args["optimizer"] == "adam": # initialize the Adam optimizer print("[INFO] using Adam optimizer") opt = Adam(lr=1e-3) # otherwise, we are using Rectified Adam else: # initialize the Rectified Adam optimizer print("[INFO] using Rectified Adam optimizer") opt = RAdam(total_steps=5000, warmup_proportion=0.1, min_lr=1e-5) Either Adam or RAdam is initialized according to the --optimizer command line argument switch.
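One hedged note on the RAdam hyperparameters shown above: in this Keras implementation, total_steps, warmup_proportion, and min_lr govern the warmup and decay behavior, and a common convention (though not a requirement, and not what we do in this tutorial) is to derive total_steps from the total number of batch updates in the run rather than hard-coding a value such as 5000. A sketch of that idea, reusing the variables already defined in train.py:

# illustrative alternative: derive total_steps from the training set
# size, batch size, and epoch count defined earlier in train.py
from keras_radam import RAdam

totalSteps = (trainX.shape[0] // BATCH_SIZE) * NUM_EPOCHS
opt = RAdam(total_steps=totalSteps, warmup_proportion=0.1,
    min_lr=1e-5)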
Our model is then built depending upon the --model command line argument: # check if we are using the ResNet architecture if args["model"] == "resnet": # utilize the ResNet architecture print("[INFO] initializing ResNet...") model = ResNet.build(32, 32, numChans, numClasses, (9, 9, 9), (64, 64, 128, 256), reg=0.0005) # check if we are using Tiny GoogLeNet elif args["model"] == "googlenet": # utilize the MiniGoogLeNet architecture print("[INFO] initializing MiniGoogLeNet...") model = MiniGoogLeNet.build(width=32, height=32, depth=numChans, classes=numClasses) # otherwise, we must be using MiniVGGNet else: # utilize the MiniVGGNet architecture print("[INFO] initializing MiniVGGNet...") model = MiniVGGNet.build(width=32, height=32, depth=numChans, classes=numClasses) Once either ResNet, GoogLeNet, or MiniVGGNet is built, we’ll binarize our labels and construct our data augmentation object: # convert the labels from integers to vectors lb = LabelBinarizer() trainY = lb.fit_transform(trainY) testY = lb.transform(testY) # construct the image generator for data augmentation aug = ImageDataGenerator(rotation_range=18, zoom_range=0.15, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.15, horizontal_flip=True, fill_mode="nearest") Followed by compiling our model and training the network: # compile the model and train the network print("[INFO] training network...") model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"]) H = model.fit_generator( aug.flow(trainX, trainY, batch_size=BATCH_SIZE), validation_data=(testX, testY), steps_per_epoch=trainX.shape[0] // BATCH_SIZE, epochs=NUM_EPOCHS, verbose=1) We then evaluate the trained model and dump training history to disk: # evaluate the network print("[INFO] evaluating network...") predictions = model.predict(testX, batch_size=BATCH_SIZE) report = classification_report(testY.argmax(axis=1), predictions.argmax(axis=1), target_names=labelNames) # serialize the training history to disk print("[INFO] serializing training history...") f = open(args["history"], "wb") f.write(pickle.dumps(H.history)) f.close() # save the classification report to disk print("[INFO] saving classification report...") f = open(args["report"], "w") f.write(report) f.close() Each experiment will contain a classification report .txt file along with a serialized training history .pickle file. The classification reports will be inspected manually whereas the training history files will later be opened by operations inside plot.py , the training history parsed, and finally plotted. As you’ve learned, creating a training script that dynamically sets up an experiment is quite straightforward. Creating our experiment combinations At this point, we have our training script which can accept a (1) model architecture, (2) dataset, and (3) optimizer, followed by fitting a model using the respective combination. That being said, are we going to manually run each and every individual command? No, not only is that a tedious task, it’s also prone to human error. Instead, let’s create a Python script to generate a shell script containing the train.py command for each experiment we want to run. Open up the combinations.py file and insert the following code: # import the necessary packages import argparse import os # construct the argument parser and parse the arguments ap = argparse. 
ArgumentParser() ap.add_argument("-o", "--output", required=True, help="path to output output directory") ap.add_argument("-s", "--script", required=True, help="path to output shell script") args = vars(ap.parse_args()) Our script requires two command line arguments: --output : The path to the output directory where the training files will be stored. --script : The path to the output shell script which will contain all of our training script commands with command line argument combinations.
Let’s go ahead and open a new file for writing: # open the output shell script for writing, then write the header f = open(args["script"], "w") f.write("#!/bin/sh\n\n") # initialize the list of datasets, models, and optimizers datasets = ["mnist", "fashion_mnist", "cifar10", "cifar100"] models = ["resnet", "googlenet", "minivggnet"] optimizers = ["adam", "radam"] Line 14 opens a shell script file writing. Subsequently, Line 15 writes the “shebang” to indicate that this shell script is executable. Lines 18-20 then list our datasets , models , and optimizers . We will form all possible combinations of experiments from these lists in a nested loop: # loop over all combinations of datasets, models, and optimizers for dataset in datasets: for model in models: for opt in optimizers: # build the path to the output training log file histFilename = "{}_{}_{}.pickle".format(model, opt, dataset) historyPath = os.path.sep.join([args["output"], histFilename]) # build the path to the output report log file reportFilename = "{}_{}_{}.txt".format(model, opt, dataset) reportPath = os.path.sep.join([args["output"], reportFilename]) # construct the command that will be executed to launch # the experiment cmd = ("python train.py --history {} --report {} " "--dataset {} --model {} --optimizer {}").format( historyPath, reportPath, dataset, model, opt) # write the command to disk f.write("{}\n".format(cmd)) # close the shell script file f.close() Inside the loop, we: Construct our history file path (Lines 27-29). Assemble our report file path (Lines 32-34). Concatenate each command per the current loop iteration’s combination and write it to the shell file (Lines 38-43). Finally, we close the shell script file. Note: I am making the assumption that you are using a Unix machine to run these experiments. If you’re using Windows you should either (1) update this script to generate a batch file instead, or (2) manually execute the train.py command for each respective experiment. Note that I do not support Windows on the PyImageSearch blog so you will be on your own to implement it based on this script.
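As a small design note, the three nested loops above are easy to read, but the same 24 combinations could also be generated with itertools.product. The following self-contained sketch (not the script used in this tutorial) prints the commands instead of writing them to a shell file:

# generate every (dataset, model, optimizer) combination with
# itertools.product instead of three nested for loops
from itertools import product

datasets = ["mnist", "fashion_mnist", "cifar10", "cifar100"]
models = ["resnet", "googlenet", "minivggnet"]
optimizers = ["adam", "radam"]

for (dataset, model, opt) in product(datasets, models, optimizers):
    print("python train.py --dataset {} --model {} "
        "--optimizer {}".format(dataset, model, opt))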
Generating the experiment shell script
Go ahead and use the "Downloads" section of this tutorial to download the source code to the guide. From there, open up a terminal and execute the combinations.py script:
$ python combinations.py --output output --script experiments.sh
After the script has executed you should have a file named experiments.sh in your working directory — this file contains the 24 separate experiments we'll be running to compare Adam to Rectified Adam. Go ahead and investigate experiments.sh now:
#!/bin/sh
python train.py --history output/resnet_adam_mnist.pickle --report output/resnet_adam_mnist.txt --dataset mnist --model resnet --optimizer adam
python train.py --history output/resnet_radam_mnist.pickle --report output/resnet_radam_mnist.txt --dataset mnist --model resnet --optimizer radam
python train.py --history output/googlenet_adam_mnist.pickle --report output/googlenet_adam_mnist.txt --dataset mnist --model googlenet --optimizer adam
python train.py --history output/googlenet_radam_mnist.pickle --report output/googlenet_radam_mnist.txt --dataset mnist --model googlenet --optimizer radam
python train.py --history output/minivggnet_adam_mnist.pickle --report output/minivggnet_adam_mnist.txt --dataset mnist --model minivggnet --optimizer adam
python train.py --history output/minivggnet_radam_mnist.pickle --report output/minivggnet_radam_mnist.txt --dataset mnist --model minivggnet --optimizer radam
python train.py --history output/resnet_adam_fashion_mnist.pickle --report output/resnet_adam_fashion_mnist.txt --dataset fashion_mnist --model resnet --optimizer adam
python train.py --history output/resnet_radam_fashion_mnist.pickle --report output/resnet_radam_fashion_mnist.txt --dataset fashion_mnist --model resnet --optimizer radam
python train.py --history output/googlenet_adam_fashion_mnist.pickle --report output/googlenet_adam_fashion_mnist.txt --dataset fashion_mnist --model googlenet --optimizer adam
python train.py --history output/googlenet_radam_fashion_mnist.pickle --report output/googlenet_radam_fashion_mnist.txt --dataset fashion_mnist --model googlenet --optimizer radam
python train.py --history output/minivggnet_adam_fashion_mnist.pickle --report output/minivggnet_adam_fashion_mnist.txt --dataset fashion_mnist --model minivggnet --optimizer adam
python train.py --history output/minivggnet_radam_fashion_mnist.pickle --report output/minivggnet_radam_fashion_mnist.txt --dataset fashion_mnist --model minivggnet --optimizer radam
python train.py --history output/resnet_adam_cifar10.pickle --report output/resnet_adam_cifar10.txt --dataset cifar10 --model resnet --optimizer adam
python train.py --history output/resnet_radam_cifar10.pickle --report output/resnet_radam_cifar10.txt --dataset cifar10 --model resnet --optimizer radam
python train.py --history output/googlenet_adam_cifar10.pickle --report output/googlenet_adam_cifar10.txt --dataset cifar10 --model googlenet --optimizer adam
python train.py --history output/googlenet_radam_cifar10.pickle --report output/googlenet_radam_cifar10.txt --dataset cifar10 --model googlenet --optimizer radam
python train.py --history output/minivggnet_adam_cifar10.pickle --report output/minivggnet_adam_cifar10.txt --dataset cifar10 --model minivggnet --optimizer adam
python train.py --history output/minivggnet_radam_cifar10.pickle --report output/minivggnet_radam_cifar10.txt --dataset cifar10 --model minivggnet --optimizer radam
python train.py --history output/resnet_adam_cifar100.pickle --report output/resnet_adam_cifar100.txt --dataset cifar100 --model resnet --optimizer adam
python train.py --history output/resnet_radam_cifar100.pickle --report output/resnet_radam_cifar100.txt --dataset cifar100 --model resnet --optimizer radam
python train.py --history output/googlenet_adam_cifar100.pickle --report output/googlenet_adam_cifar100.txt --dataset cifar100 --model googlenet --optimizer adam
python train.py --history output/googlenet_radam_cifar100.pickle --report output/googlenet_radam_cifar100.txt --dataset cifar100 --model googlenet --optimizer radam
python train.py --history output/minivggnet_adam_cifar100.pickle --report output/minivggnet_adam_cifar100.txt --dataset cifar100 --model minivggnet --optimizer adam
python train.py --history output/minivggnet_radam_cifar100.pickle --report output/minivggnet_radam_cifar100.txt --dataset cifar100 --model minivggnet --optimizer radam
Note: Be sure to use the horizontal scroll bar to inspect the entire contents of the experiments.sh script. I intentionally did not break up lines or automatically wrap them for better display. You can also refer to Figure 4 below — I suggest clicking the image to enlarge + inspect it.
Figure 4: The output of our `combinations.py` file is a shell script listing the training script commands to run in succession. Click image to enlarge.
Notice how there is a train.py call for each of the 24 possible combinations of model architecture, dataset, and optimizer. Furthermore, the "shebang" on Line 1 indicates that this shell script is executable.
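As an aside, if you're wondering how a script like combinations.py could build this file, below is a minimal sketch that loops over every dataset/model/optimizer combination with itertools.product. Treat it purely as an illustration: the actual combinations.py in the "Downloads" may use different variable names and formatting, and the sketch_combinations.py filename is just a placeholder.
# sketch_combinations.py -- a hypothetical, minimal sketch (NOT the exact
# combinations.py from the "Downloads") showing how the 24 commands in
# experiments.sh could be generated
import argparse
import itertools

# the same two command line arguments used above
ap = argparse.ArgumentParser()
ap.add_argument("-o", "--output", required=True,
    help="path to directory where history/report files will be written")
ap.add_argument("-s", "--script", required=True,
    help="path to the output shell script")
args = vars(ap.parse_args())

# the grid of datasets, models, and optimizers we wish to compare
datasets = ["mnist", "fashion_mnist", "cifar10", "cifar100"]
models = ["resnet", "googlenet", "minivggnet"]
optimizers = ["adam", "radam"]

# template for a single train.py invocation
cmd = ("python train.py --history {out}/{m}_{o}_{d}.pickle "
    "--report {out}/{m}_{o}_{d}.txt --dataset {d} --model {m} "
    "--optimizer {o}")

# write the shebang followed by one train.py command per combination
with open(args["script"], "w") as f:
    f.write("#!/bin/sh\n")
    for (d, m, o) in itertools.product(datasets, models, optimizers):
        f.write(cmd.format(out=args["output"], d=d, m=m, o=o) + "\n")
Running the sketch with the same --output output --script experiments.sh arguments shown above would produce an equivalent script; just remember to make the file executable (chmod +x experiments.sh) before launching it with ./experiments.sh.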
Running our experiments
The next step is to actually perform each of these experiments. I executed the shell script on an Amazon EC2 instance with an NVIDIA K80 GPU. It took approximately 48 hours to run all the experiments. To launch the experiments for yourself, just run the following command:
$ ./experiments.sh
After the script has finished running, your output/ directory should be filled with .pickle and .txt files:
$ ls -l output/
googlenet_adam_cifar10.pickle
googlenet_adam_cifar10.txt
googlenet_adam_cifar100.pickle
googlenet_adam_cifar100.txt
...
resnet_radam_fashion_mnist.pickle
resnet_radam_fashion_mnist.txt
resnet_radam_mnist.pickle
resnet_radam_mnist.txt
The .txt files contain the output of scikit-learn's classification_report, a human-readable summary that tells us how well our model performed. The .pickle files contain the training history for each model. We'll use these .pickle files to plot both Adam and Rectified Adam's performance in the next section.
Implementing our Adam vs. Rectified Adam plotting script
Our final Python script, plot.py, will be used to plot the performance of Adam vs. Rectified Adam, giving us a nice, clear visualization of a given model architecture trained on a specific dataset. The plot script opens each Adam/RAdam .pickle file pair and generates a corresponding plot. Open up plot.py and insert the following code:
# import the necessary packages
import matplotlib.pyplot as plt
import numpy as np
import argparse
import pickle
import os

def plot_history(adamHist, rAdamHist, accTitle, lossTitle):
    # determine the total number of epochs used for training, then
    # initialize the figure
    N = np.arange(0, len(adamHist["loss"]))
    plt.style.use("ggplot")
    (fig, axs) = plt.subplots(2, 1, figsize=(7, 9))

    # plot the accuracy for Adam vs. Rectified Adam
    axs[0].plot(N, adamHist["acc"], label="adam_train_acc")
    axs[0].plot(N, adamHist["val_acc"], label="adam_val_acc")
    axs[0].plot(N, rAdamHist["acc"], label="radam_train_acc")
    axs[0].plot(N, rAdamHist["val_acc"], label="radam_val_acc")
    axs[0].set_title(accTitle)
    axs[0].set_xlabel("Epoch #")
    axs[0].set_ylabel("Accuracy")
    axs[0].legend(loc="lower right")

    # plot the loss for Adam vs. Rectified Adam
    axs[1].plot(N, adamHist["loss"], label="adam_train_loss")
    axs[1].plot(N, adamHist["val_loss"], label="adam_val_loss")
    axs[1].plot(N, rAdamHist["loss"], label="radam_train_loss")
    axs[1].plot(N, rAdamHist["val_loss"], label="radam_val_loss")
    axs[1].set_title(lossTitle)
    axs[1].set_xlabel("Epoch #")
    axs[1].set_ylabel("Loss")
    axs[1].legend(loc="upper right")

    # update the layout of the plot
    plt.tight_layout()
Lines 2-6 handle imports, namely the matplotlib.pyplot module. The plot_history function is responsible for generating two stacked plots via the subplots feature:
Training/validation accuracy curves (Lines 16-23).
Training/validation loss curves (Lines 26-33).
Both Adam and Rectified Adam training history curves are generated from the adamHist and rAdamHist data passed as parameters to the function.
Note: If you are using TensorFlow 2.0 (i.e., tf.keras) to run this code, you'll need to change all occurrences of acc and val_acc to accuracy and val_accuracy, respectively, as TensorFlow 2.0 has made a breaking change to the accuracy metric name.
Let's handle parsing command line arguments:
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--input", required=True,
    help="path to input directory of Keras training history files")
ap.add_argument("-p", "--plots", required=True,
    help="path to output directory of training plots")
args = vars(ap.parse_args())

# initialize the list of datasets and models
datasets = ["mnist", "fashion_mnist", "cifar10", "cifar100"]
models = ["resnet", "googlenet", "minivggnet"]
Our command line arguments consist of:
--input: The path to the input directory of training history files to be parsed for plot generation.
--plots: Our output path where the plots will be stored.
Lines 47 and 48 list our datasets and models. We'll loop over the combinations of datasets and models to generate our plots:
# loop over all combinations of datasets and models
for dataset in datasets:
    for model in models:
        # construct the path to the Adam output training history files
        adamFilename = "{}_{}_{}.pickle".format(model, "adam", dataset)
        adamPath = os.path.sep.join([args["input"], adamFilename])

        # construct the path to the Rectified Adam output training
        # history files
        rAdamFilename = "{}_{}_{}.pickle".format(model, "radam", dataset)
        rAdamPath = os.path.sep.join([args["input"], rAdamFilename])

        # load the training history files for Adam and Rectified Adam,
        # respectively
        adamHist = pickle.loads(open(adamPath, "rb").read())
        rAdamHist = pickle.loads(open(rAdamPath, "rb").read())

        # plot the accuracy/loss for the current dataset, comparing
        # Adam vs. Rectified Adam
        accTitle = "Adam vs. RAdam for '{}' on '{}' (Accuracy)".format(
            model, dataset)
        lossTitle = "Adam vs. RAdam for '{}' on '{}' (Loss)".format(
            model, dataset)
        plot_history(adamHist, rAdamHist, accTitle, lossTitle)

        # construct the path to the output plot
        plotFilename = "{}_{}.png".format(model, dataset)
        plotPath = os.path.sep.join([args["plots"], plotFilename])

        # save the plot and clear it
        plt.savefig(plotPath)
        plt.clf()
Inside our nested datasets/models loop, we:
Construct Adam and Rectified Adam's file paths (Lines 54-62).
Load serialized training history (Lines 66 and 67).
Generate the plots using our plot_history function (Lines 71-75).
Export the figures to disk (Lines 78-83).
Plotting Adam vs. Rectified Adam
We are now ready to run the plot.py script. Again, make sure you have used the "Downloads" section of this tutorial to download the source code. From there, execute the following command:
$ python plot.py --input output --plots plots
You can then check the plots/ directory and ensure it has been populated with the training history figures:
$ ls -l plots/
googlenet_cifar10.png
googlenet_cifar100.png
googlenet_fashion_mnist.png
googlenet_mnist.png
minivggnet_cifar10.png
minivggnet_cifar100.png
minivggnet_fashion_mnist.png
minivggnet_mnist.png
resnet_cifar10.png
resnet_cifar100.png
resnet_fashion_mnist.png
resnet_mnist.png
In the next section, we'll review the results of our experiments.
Adam vs. Rectified Adam Experiments with MNIST
Figure 5: Montage of samples from the MNIST digit dataset.
Our first set of experiments will compare Adam vs. Rectified Adam on the MNIST dataset, a standard benchmark image classification dataset for handwritten digit recognition.
MNIST – MiniVGGNet
Figure 6: Which is better — Adam or RAdam optimizer using MiniVGGNet on the MNIST dataset?
Our first experiment compares Adam to Rectified Adam when training MiniVGGNet on the MNIST dataset. Below is the output classification report for the Adam optimizer:
              precision  recall  f1-score  support
0             0.99       1.00    1.00      980
1             0.99       1.00    0.99      1135
2             0.98       0.96    0.97      1032
3             1.00       1.00    1.00      1010
4             0.99       1.00    0.99      982
5             0.97       0.98    0.98      892
6             0.98       0.98    0.98      958
7             0.99       0.99    0.99      1028
8             0.99       0.99    0.99      974
9             1.00       0.99    0.99      1009
micro avg     0.99       0.99    0.99      10000
macro avg     0.99       0.99    0.99      10000
weighted avg  0.99       0.99    0.99      10000
As well as the classification report for the Rectified Adam optimizer:
              precision  recall  f1-score  support
0             0.99       1.00    0.99      980
1             1.00       0.99    1.00      1135
2             0.97       0.97    0.97      1032
3             0.99       0.99    0.99      1010
4             0.99       0.99    0.99      982
5             0.98       0.97    0.97      892
6             0.98       0.98    0.98      958
7             0.99       0.99    0.99      1028
8             0.99       0.99    0.99      974
9             0.99       0.99    0.99      1009
micro avg     0.99       0.99    0.99      10000
macro avg     0.99       0.99    0.99      10000
weighted avg  0.99       0.99    0.99      10000
As you can see, we're obtaining 99% accuracy for both experiments. Looking at Figure 6, you can observe the warmup period associated with Rectified Adam:
Loss starts off very high and accuracy very low.
After warmup is complete, the Rectified Adam optimizer catches up with Adam.
What's interesting to note, though, is that Adam obtains lower loss compared to Rectified Adam — we'll actually see that trend continue in the rest of the experiments we run (and I'll explain why this happens as well).
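If you'd like to check that lower-loss observation numerically rather than reading it off the plot, a few extra lines of Python will do. The snippet below is an optional, hypothetical helper (it is not part of the original "Downloads"); it only assumes the {model}_{optimizer}_{dataset}.pickle naming convention produced by the experiments and the loss/val_loss history keys already used by plot.py.
# compare_final_loss.py -- a small, optional helper (not part of the
# original "Downloads") that prints the final training/validation loss
# stored in an Adam/RAdam history pair for a given model and dataset
import argparse
import os
import pickle

ap = argparse.ArgumentParser()
ap.add_argument("-i", "--input", required=True,
    help="path to directory of training history .pickle files")
ap.add_argument("-m", "--model", default="minivggnet",
    help="model name used in the history file names")
ap.add_argument("-d", "--dataset", default="mnist",
    help="dataset name used in the history file names")
args = vars(ap.parse_args())

# the history files follow the {model}_{optimizer}_{dataset}.pickle
# naming convention produced by train.py
for opt in ("adam", "radam"):
    path = os.path.sep.join([args["input"],
        "{}_{}_{}.pickle".format(args["model"], opt, args["dataset"])])
    hist = pickle.loads(open(path, "rb").read())

    # report the loss values from the final training epoch
    print("{}: final train loss={:.4f}, final val loss={:.4f}".format(
        opt, hist["loss"][-1], hist["val_loss"][-1]))
For the MiniVGGNet/MNIST pair discussed above, you could run it as python compare_final_loss.py --input output and compare the two printed loss values directly.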
MNIST – GoogLeNet
Figure 7: Which deep learning optimizer is actually better — Rectified Adam or Adam? This plot is from my experiment notebook while testing RAdam and Adam using GoogLeNet on the MNIST dataset.
This next experiment compares Adam to Rectified Adam for GoogLeNet trained on the MNIST dataset. Below follows the output of the Adam optimizer:
              precision  recall  f1-score  support
0             1.00       1.00    1.00      980
1             1.00       0.99    1.00      1135
2             0.96       0.99    0.97      1032
3             0.99       1.00    0.99      1010
4             0.99       0.99    0.99      982
5             0.99       0.96    0.98      892
6             0.98       0.99    0.98      958
7             0.99       0.99    0.99      1028
8             1.00       1.00    1.00      974
9             1.00       0.98    0.99      1009
micro avg     0.99       0.99    0.99      10000
macro avg     0.99       0.99    0.99      10000
weighted avg  0.99       0.99    0.99      10000
As well as the output for the Rectified Adam optimizer:
              precision  recall  f1-score  support
0             1.00       1.00    1.00      980
1             1.00       0.99    1.00      1135
2             0.98       0.98    0.98      1032
3             1.00       0.99    1.00      1010
4             1.00       0.99    1.00      982
5             0.97       0.99    0.98      892
6             0.99       0.98    0.99      958
7             0.99       1.00    0.99      1028
8             0.99       1.00    1.00      974
9             1.00       0.99    1.00      1009
micro avg     0.99       0.99    0.99      10000
macro avg     0.99       0.99    0.99      10000
weighted avg  0.99       0.99    0.99      10000
Again, 99% accuracy is obtained for both optimizers. This time both the training/validation plots are near identical for both accuracy and loss.
MNIST – ResNet
Figure 8: Training accuracy/loss plot for ResNet on the MNIST dataset using both the RAdam (Rectified Adam) and Adam deep learning optimizers with Keras.
Our final MNIST experiment compares training ResNet using both Adam and Rectified Adam. Given that MNIST is not a very challenging dataset, we obtain 99% accuracy for the Adam optimizer:
              precision  recall  f1-score  support
0             1.00       1.00    1.00      980
1             1.00       0.99    1.00      1135
2             0.98       0.98    0.98      1032
3             0.99       1.00    1.00      1010
4             0.99       1.00    0.99      982
5             0.99       0.98    0.98      892
6             0.98       0.99    0.99      958
7             0.99       1.00    0.99      1028
8             0.99       1.00    1.00      974
9             1.00       0.98    0.99      1009
micro avg     0.99       0.99    0.99      10000
macro avg     0.99       0.99    0.99      10000
weighted avg  0.99       0.99    0.99      10000
As well as the Rectified Adam optimizer:
              precision  recall  f1-score  support
0             1.00       1.00    1.00      980
1             1.00       1.00    1.00      1135
2             0.97       0.98    0.98      1032
3             1.00       1.00    1.00      1010
4             0.99       1.00    1.00      982
5             0.99       0.97    0.98      892
6             0.99       0.98    0.99      958
7             0.99       1.00    0.99      1028
8             1.00       1.00    1.00      974
9             1.00       0.99    1.00      1009
micro avg     0.99       0.99    0.99      10000
macro avg     0.99       0.99    0.99      10000
weighted avg  0.99       0.99    0.99      10000
But take a look at Figure 8 — note how Adam obtains much lower loss than Rectified Adam. That's not necessarily a bad thing, as it may imply that Rectified Adam is obtaining a more generalizable model; however, performance on the testing set is identical, so we would need to test on images outside MNIST (which is outside the scope of this blog post).
Adam vs. Rectified Adam Experiments with Fashion MNIST
Figure 9: The Fashion MNIST dataset was created by the e-commerce company Zalando as a drop-in replacement for MNIST Digits.
It is a great dataset to practice/experiment with when using Keras for deep learning. (image source)
Our next set of experiments evaluates Adam vs. Rectified Adam on the Fashion MNIST dataset, a drop-in replacement for the standard MNIST dataset. You can read more about Fashion MNIST here.
Fashion MNIST – MiniVGGNet
Figure 10: Testing optimizers with deep learning, including new ones such as RAdam, requires multiple experiments. Shown in this figure is the MiniVGGNet CNN trained on the Fashion MNIST dataset with both Adam and RAdam optimizers.
Our first experiment evaluates the MiniVGGNet architecture trained on the Fashion MNIST dataset. Below you can find the output of training with the Adam optimizer:
              precision  recall  f1-score  support
top           0.95       0.71    0.81      1000
trouser       0.99       0.99    0.99      1000
pullover      0.94       0.76    0.84      1000
dress         0.96       0.80    0.87      1000
coat          0.84       0.90    0.87      1000
sandal        0.98       0.98    0.98      1000
shirt         0.59       0.91    0.71      1000
sneaker       0.96       0.97    0.96      1000
bag           0.98       0.99    0.99      1000
ankle boot    0.97       0.97    0.97      1000
micro avg     0.90       0.90    0.90      10000
macro avg     0.92       0.90    0.90      10000
weighted avg  0.92       0.90    0.90      10000
As well as the Rectified Adam optimizer:
              precision  recall  f1-score  support
top           0.85       0.85    0.85      1000
trouser       1.00       0.97    0.99      1000
pullover      0.89       0.84    0.87      1000
dress         0.93       0.81    0.87      1000
coat          0.85       0.80    0.82      1000
sandal        0.99       0.95    0.97      1000
shirt         0.62       0.77    0.69      1000
sneaker       0.92       0.96    0.94      1000
bag           0.96       0.99    0.97      1000
ankle boot    0.97       0.95    0.96      1000
micro avg     0.89       0.89    0.89      10000
macro avg     0.90       0.89    0.89      10000
weighted avg  0.90       0.89    0.89      10000
Note that the Adam optimizer outperforms Rectified Adam, obtaining 92% accuracy compared to the 90% accuracy of Rectified Adam. Furthermore, take a look at the training plot in Figure 10 — training is very stable, with validation loss falling below training loss. With more aggressive training with Adam, we can likely improve our accuracy further.
Fashion MNIST – GoogLeNet
Figure 11: Is either RAdam or Adam a better deep learning optimizer using GoogLeNet?