https://pyimagesearch.com/2020/01/06/raspberry-pi-and-movidius-ncs-face-recognition/
Let’s go ahead and execute the shell script: $ source setup.sh Provided that you have executed this script, you shouldn’t see any strange OpenVINO-related errors with the rest of the project. If you encounter the following error message in the next section, be sure to execute setup.sh: Traceback (most recent call last): File "extract_embeddings.py", line 108 in cv2.error: OpenCV(4.1.1-openvino) /home/jenkins/workspace/OpenCV/ OpenVINO/build/opencv/modules/dnn/src/opinfengine.cpp:477 error: (-215:Assertion failed) Failed to initialize Inference Engine backend: Can not init Myriad device: NC_ERROR in function 'initPlugin' Extracting Facial Embeddings with Movidius NCS Figure 2: Raspberry Pi facial recognition with the Movidius NCS uses deep metric learning, a process that involves a “triplet training step.” The triplet consists of 3 unique face images — 2 of the 3 are the same person. The NN generates a 128-d vector for each of the 3 face images. For the 2 face images of the same person, we tweak the neural network weights to make the vector closer via distance metric. ( image credit: Adam Geitgey) In order to perform deep learning face recognition, we need real-valued feature vectors to train a model upon. The script in this section serves the purpose of extracting 128-d feature vectors for all faces in your dataset. Again, if you are unfamiliar with facial embeddings/encodings, refer to one of the three aforementioned resources. Let’s open extract_embeddings.py and review: # import the necessary packages from imutils import paths import numpy as np import argparse import imutils import pickle import cv2 import os # construct the argument parser and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-i", "--dataset", required=True, help="path to input directory of faces + images") ap.add_argument("-e", "--embeddings", required=True, help="path to output serialized db of facial embeddings") ap.add_argument("-d", "--detector", required=True, help="path to OpenCV's deep learning face detector") ap.add_argument("-m", "--embedding-model", required=True, help="path to OpenCV's deep learning face embedding model") ap.add_argument("-c", "--confidence", type=float, default=0.5, help="minimum probability to filter weak detections") args = vars(ap.parse_args()) Lines 2-8 import the necessary packages for extracting face embeddings.
Lines 11-22 parse five command line arguments: --dataset: The path to our input dataset of face images. --embeddings: The path to our output embeddings file. Our script will compute face embeddings which we’ll serialize to disk. --detector: Path to OpenCV’s Caffe-based deep learning face detector used to actually localize the faces in the images. --embedding-model: Path to the OpenCV deep learning Torch embedding model. This model will allow us to extract a 128-D facial embedding vector. --confidence: Optional threshold for filtering weak face detections. We’re now ready to load our face detector and face embedder: # load our serialized face detector from disk print("[INFO] loading face detector...") protoPath = os.path.sep.join([args["detector"], "deploy.prototxt"]) modelPath = os.path.sep.join([args["detector"], "res10_300x300_ssd_iter_140000.caffemodel"]) detector = cv2.dnn.readNetFromCaffe(protoPath, modelPath) detector.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD) # load our serialized face embedding model from disk and set the # preferable target to MYRIAD print("[INFO] loading face recognizer...") embedder = cv2.dnn.readNetFromTorch(args["embedding_model"]) embedder.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD) Here we load the face detector and embedder: detector: Loaded via Lines 26-29.
We’re using a Caffe-based DL face detector to localize faces in an image. embedder: Loaded on Line 33. This model is Torch-based and is responsible for extracting facial embeddings via deep learning feature extraction. Notice that we’re using the respective cv2.dnn functions to load the two separate models. The dnn module is optimized by the Intel OpenVINO developers. As you can see on Line 30 and Line 36 we call setPreferableTarget and pass the Myriad constant setting. These calls ensure that the Movidius Neural Compute Stick will conduct the deep learning heavy lifting for us. Moving forward, let’s grab our image paths and perform initializations: # grab the paths to the input images in our dataset print("[INFO] quantifying faces...") imagePaths = list(paths.list_images(args["dataset"])) # initialize our lists of extracted facial embeddings and # corresponding people names knownEmbeddings = [] knownNames = [] # initialize the total number of faces processed total = 0 The imagePaths list, built on Line 40, contains the path to each image in the dataset. The imutils function, paths.list_images automatically traverses the directory tree to find all image paths. Our embeddings and corresponding names will be held in two lists: (1) knownEmbeddings, and (2) knownNames (Lines 44 and 45).
We’ll also be keeping track of how many faces we’ve processed with the total variable (Line 48). Let’s begin looping over the imagePaths — this loop will be responsible for extracting embeddings from faces found in each image: # loop over the image paths for (i, imagePath) in enumerate(imagePaths): # extract the person name from the image path print("[INFO] processing image {}/{}".format(i + 1, len(imagePaths))) name = imagePath.split(os.path.sep)[-2] # load the image, resize it to have a width of 600 pixels (while # maintaining the aspect ratio), and then grab the image # dimensions image = cv2.imread(imagePath) image = imutils.resize(image, width=600) (h, w) = image.shape[:2] We begin looping over imagePaths on Line 51. First, we extract the name of the person from the path (Line 55). To explain how this works, consider the following example in a Python shell: $ python >>> from imutils import paths >>> import os >>> datasetPath = "../datasets/face_recognition_dataset" >>> imagePaths = list(paths.list_images(datasetPath)) >>> imagePath = imagePaths[0] >>> imagePath 'dataset/adrian/00004.jpg' >>> imagePath.split(os.path.sep) ['dataset', 'adrian', '00004.jpg'] >>> imagePath.split(os.path.sep)[-2] 'adrian' >>> Notice how by using imagePath.split and providing the split character (the OS path separator — “/” on Unix and “\” on Windows), the function produces a list of folder/file names (strings) which walk down the directory tree. We grab the second-to-last index, the person’s name, which in this case is adrian. Finally, we wrap up the above code block by loading the image and resizing it to a known width (Lines 60 and 61). Let’s detect and localize faces: # construct a blob from the image imageBlob = cv2.dnn.blobFromImage( cv2.resize(image, (300, 300)), 1.0, (300, 300), (104.0, 177.0, 123.0), swapRB=False, crop=False) # apply OpenCV's deep learning-based face detector to localize # faces in the input image detector.setInput(imageBlob) detections = detector.forward() On Lines 65-67, we construct a blob. A blob packages an image into a data structure compatible with OpenCV’s dnn module. To learn more about this process, read Deep learning: How OpenCV’s blobFromImage works. From there we detect faces in the image by passing the imageBlob through the detector network (Lines 71 and 72).
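Before processing the detections in the next block, it helps to know what detector.forward() returns. Here is a minimal sketch, reusing the detections, w, and h variables from the script above (the field layout shown is the standard OpenCV SSD detection output, but verify it against your own model):

# minimal sketch: the SSD face detector output is a (1, 1, N, 7) array
# where each of the N candidate rows is:
#   [batchId, classId, confidence, startX, startY, endX, endY]
# and the box coordinates are normalized to the range [0, 1]
print(detections.shape)

for i in range(0, detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    # scale the normalized coordinates back to the resized image size
    box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
    print(i, confidence, box.astype("int"))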
And now, let’s process the detections: # ensure at least one face was found if len(detections) > 0: # we're making the assumption that each image has only ONE # face, so find the bounding box with the largest probability j = np.argmax(detections[0, 0, :, 2]) confidence = detections[0, 0, j, 2] # ensure that the detection with the largest probability also # meets our minimum probability test (thus helping filter out # weak detections) if confidence > args["confidence"]: # compute the (x, y)-coordinates of the bounding box for # the face box = detections[0, 0, j, 3:7] * np.array([w, h, w, h]) (startX, startY, endX, endY) = box.astype("int") # extract the face ROI and grab the ROI dimensions face = image[startY:endY, startX:endX] (fH, fW) = face.shape[:2] # ensure the face width and height are sufficiently large if fW < 20 or fH < 20: continue The detections list contains probabilities and bounding box coordinates to localize faces in an image. Assuming we have at least one detection, we’ll proceed into the body of the if-statement (Line 75). We make the assumption that there is only one face in the image, so we extract the detection with the highest confidence and check to make sure that the confidence meets the minimum probability threshold used to filter out weak detections (Lines 78-84). When we’ve met that threshold, we extract the face ROI and grab/check dimensions to make sure the face ROI is sufficiently large (Lines 87-96). From there, we’ll take advantage of our embedder CNN and extract the face embeddings: # construct a blob for the face ROI, then pass the blob # through our face embedding model to obtain the 128-d # quantification of the face faceBlob = cv2.dnn.blobFromImage(face, 1.0 / 255, (96, 96), (0, 0, 0), swapRB=True, crop=False) embedder.setInput(faceBlob) vec = embedder.forward() # add the name of the person + corresponding face # embedding to their respective lists knownNames.append(name) knownEmbeddings.append(vec.flatten()) total += 1 We construct another blob, this time from the face ROI (not the whole image as we did before) on Lines 101 and 102. Subsequently, we pass the faceBlob through the embedder CNN (Lines 103 and 104). This generates a 128-D vector (vec) which quantifies the face. We’ll leverage this data to recognize new faces via machine learning. And then we simply add the name and embedding vec to knownNames and knownEmbeddings, respectively (Lines 108 and 109). We also can’t forget about the variable we set to track the total number of faces — we increment it on Line 110.
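As an aside, the 128-d vec is what makes recognition possible later: vectors from the same person should sit close together under a distance metric. A minimal, hypothetical sketch of that comparison (the random vectors and the 0.6 cutoff are illustrative stand-ins, not values from this project):

# minimal sketch: comparing two hypothetical 128-d face embeddings
import numpy as np

# stand-ins for two embeddings; real vectors come from embedder.forward()
embA = np.random.rand(128).astype("float32")
embB = np.random.rand(128).astype("float32")

# a smaller Euclidean distance suggests the two faces are more likely
# to belong to the same person
dist = np.linalg.norm(embA - embB)

# the 0.6 cutoff is illustrative only -- in this project we instead train
# a classifier on top of the embeddings rather than hand-pick a threshold
print(dist, dist < 0.6)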
We continue this process of looping over images, detecting faces, and extracting face embeddings for each and every image in our dataset. All that’s left when the loop finishes is to dump the data to disk: # dump the facial embeddings + names to disk print("[INFO] serializing {} encodings...".format(total)) data = {"embeddings": knownEmbeddings, "names": knownNames} f = open(args["embeddings"], "wb") f.write(pickle.dumps(data)) f.close() We add the name and embedding data to a dictionary and then serialize it into a pickle file on Lines 113-117. At this point we’re ready to extract embeddings by executing our script. Prior to running the embeddings script, be sure your openvino environment is active and the additional environment variable is set if you did not do so in the previous section. Here is the quickest way to do it as a reminder: $ source ~/start_openvino.sh Starting Python 3.7 with OpenCV-OpenVINO 4.1.1 bindings... $ source setup.sh From there, open up a terminal and execute the following command to compute the face embeddings with OpenCV and Movidius: $ python extract_embeddings.py \ --dataset dataset \ --embeddings output/embeddings.pickle \ --detector face_detection_model \ --embedding-model face_embedding_model/openface_nn4.small2.v1.t7 [INFO] loading face detector... [INFO] loading face recognizer... [INFO] quantifying faces... [INFO] processing image 1/120 [INFO] processing image 2/120 [INFO] processing image 3/120 [INFO] processing image 4/120 [INFO] processing image 5/120 ... [INFO] processing image 116/120 [INFO] processing image 117/120 [INFO] processing image 118/120 [INFO] processing image 119/120 [INFO] processing image 120/120 [INFO] serializing 116 encodings... This process completed in 57s on an RPi 4B with an NCS2 plugged into the USB 3.0 port. You may notice a delay at the beginning as the model is being loaded. From there, each image will process very quickly. Note: Typically I don’t recommend using the Raspberry Pi for extracting embeddings as the process can require significant time (a full-size, more-powerful computer is recommended for large datasets). Due to our relatively small dataset (120 images) and the extra “oomph” of the Movidius NCS, this process completed in a reasonable amount of time. As you can see, we’ve extracted embeddings for 116 of the 120 face photos in our dataset (faces in the remaining images either were not detected or did not pass the minimum size check).
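If you would like to sanity-check the serialized output before moving on, here is a minimal sketch that loads the pickle back (assuming the output/embeddings.pickle path used in the command above):

# minimal sketch: loading the serialized embeddings back for inspection
import pickle
import numpy as np

data = pickle.loads(open("output/embeddings.pickle", "rb").read())
print(len(data["embeddings"]))             # number of serialized encodings
print(sorted(set(data["names"])))          # the people in the dataset
print(np.array(data["embeddings"]).shape)  # (num_encodings, 128)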
The embeddings.pickle file is now available in the output/ folder as well: ls -lh output/*.pickle -rw-r--r-- 1 pi pi 66K Nov 20 14:35 output/embeddings.pickle The serialized embeddings filesize is 66KB — embeddings files grow linearly according to the size of your dataset. Be sure to review the “How to obtain higher face recognition accuracy” section later in this tutorial about the importance of an adequately large dataset for achieving high accuracy. Training an SVM model on Top of Facial Embeddings Figure 3: Python machine learning practitioners will often apply Support Vector Machines (SVMs) to their problems (such as deep learning face recognition with the Raspberry Pi and Movidius NCS). SVMs are based on the concept of a hyperplane and the perpendicular distance to it as shown in 2-dimensions (the hyperplane concept applies to higher dimensions as well). For more details, refer to my Machine Learning in Python blog post. At this point we have extracted 128-d embeddings for each face — but how do we actually recognize a person based on these embeddings? The answer is that we need to train a “standard” machine learning model (such as an SVM, k-NN classifier, Random Forest, etc.) on top of the embeddings. For small datasets a k-Nearest Neighbor (k-NN) approach can be used for face recognition on 128-d embeddings created via the dlib (Davis King) and face_recognition (Adam Geitgey) libraries. However, in this tutorial, we will build a more powerful classifier (Support Vector Machines) on top of the embeddings — you’ll be able to use this same method in your dlib-based face recognition pipelines as well if you are so inclined.
Open up the train_model.py file and insert the following code: # import the necessary packages from sklearn.model_selection import GridSearchCV from sklearn.preprocessing import LabelEncoder from sklearn.svm import SVC import argparse import pickle # construct the argument parser and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-e", "--embeddings", required=True, help="path to serialized db of facial embeddings") ap.add_argument("-r", "--recognizer", required=True, help="path to output model trained to recognize faces") ap.add_argument("-l", "--le", required=True, help="path to output label encoder") args = vars(ap.parse_args()) We import our packages and modules on Lines 2-6. We’ll be using scikit-learn’s implementation of Support Vector Machines (SVM), a common machine learning model. Lines 9-16 parse three required command line arguments: --embeddings: The path to the serialized embeddings (we saved them to disk by running the previous extract_embeddings.py script). --recognizer: This will be our output model that recognizes faces. We’ll be saving it to disk so we can use it in the next two recognition scripts. --le: Our label encoder output file path. We’ll serialize our label encoder to disk so that we can use it and the recognizer model in our image/video face recognition scripts. Let’s load our facial embeddings and encode our labels: # load the face embeddings print("[INFO] loading face embeddings...") data = pickle.loads(open(args["embeddings"], "rb").read()) # encode the labels print("[INFO] encoding labels...") le = LabelEncoder() labels = le.fit_transform(data["names"]) Here we load our embeddings from our previous section on Line 20. We won’t be generating any embeddings in this model training script — we’ll use the embeddings previously generated and serialized.
Then we initialize our scikit-learn LabelEncoder and encode our name labels (Lines 24 and 25). Now it’s time to train our SVM model for recognizing faces: # train the model used to accept the 128-d embeddings of the face and # then produce the actual face recognition print("[INFO] training model...") params = {"C": [0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0], "gamma": [1e-1, 1e-2, 1e-3, 1e-4, 1e-5]} model = GridSearchCV(SVC(kernel="rbf", gamma="auto", probability=True), params, cv=3, n_jobs=-1) model.fit(data["embeddings"], labels) print("[INFO] best hyperparameters: {}".format(model.best_params_)) We are using a machine learning Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel, which is typically harder to tune than a linear kernel. Therefore, we will undergo a process known as “gridsearching”, a method to find the optimal machine learning hyperparameters for a model. Lines 30-33 set our gridsearch parameters and perform the process. Notice the n_jobs parameter: with n_jobs=-1, scikit-learn runs the gridsearch jobs in parallel across all available cores. On a resource-constrained system such as the Raspberry Pi you could set this to 1 to restrict the search to a single worker. Line 34 handles training our face recognition model on the face embeddings vectors. Note: You can and should experiment with alternative machine learning classifiers. The PyImageSearch Gurus course covers popular machine learning algorithms in depth.
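As one example of such an experiment, here is a minimal sketch of a k-NN classifier trained on the same data and labels variables (a hypothetical alternative for comparison, not the model used in the rest of this post):

# minimal sketch: a k-NN recognizer as an alternative to the SVM
from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(data["embeddings"], labels)

# predict_proba mirrors how the SVM is queried later, so the two models
# could be swapped with minimal changes to the recognition scripts
print(knn.predict_proba(data["embeddings"][:1]))

For a small dataset like this one, a distance-based model such as k-NN can be a reasonable baseline, which is why it is often paired with the dlib/face_recognition embeddings mentioned earlier.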
From here we’ll serialize our face recognizer model and label encoder to disk: # write the actual face recognition model to disk f = open(args["recognizer"], "wb") f.write(pickle.dumps(model.best_estimator_)) f.close() # write the label encoder to disk f = open(args["le"], "wb") f.write(pickle.dumps(le)) f.close() To execute our training script, enter the following command in your terminal: $ python train_model.py --embeddings output/embeddings.pickle \ --recognizer output/recognizer.pickle --le output/le.pickle [INFO] loading face embeddings... [INFO] encoding labels... [INFO] training model... [INFO] best hyperparameters: {'C': 100.0, 'gamma': 0.1} Let’s check the output/ folder now: ls -lh output/*.pickle -rw-r--r-- 1 pi pi 66K Nov 20 14:35 output/embeddings.pickle -rw-r--r-- 1 pi pi 470 Nov 20 14:55 le.pickle -rw-r--r-- 1 pi pi 97K Nov 20 14:55 recognizer.pickle With our serialized face recognition model and label encoder, we’re ready to recognize faces in images or video streams. Real-Time Face Recognition in Video Streams with Movidius NCS In this section we will code a quick demo script to recognize faces using your PiCamera or USB webcamera. Go ahead and open recognize_video.py and insert the following code: # import the necessary packages from imutils.video import VideoStream from imutils.video import FPS import numpy as np import argparse import imutils import pickle import time import cv2 import os # construct the argument parser and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-d", "--detector", required=True, help="path to OpenCV's deep learning face detector") ap.add_argument("-m", "--embedding-model", required=True, help="path to OpenCV's deep learning face embedding model") ap.add_argument("-r", "--recognizer", required=True, help="path to model trained to recognize faces") ap.add_argument("-l", "--le", required=True, help="path to label encoder") ap.add_argument("-c", "--confidence", type=float, default=0.5, help="minimum probability to filter weak detections") args = vars(ap.parse_args()) Our imports should be familiar at this point. Our five command line arguments are parsed on Lines 12-24: --detector: The path to OpenCV’s deep learning face detector. We’ll use this model to detect where in the image the face ROIs are. --embedding-model: The path to OpenCV’s deep learning face embedding model. We’ll use this model to extract the 128-D face embedding from the face ROI — we’ll feed the data into the recognizer. --recognizer: The path to our recognizer model. We trained our SVM recognizer in the previous section.
This model will actually determine who a face is. --le: The path to our label encoder. This contains our face labels such as adrian or unknown. --confidence: The optional threshold to filter weak face detections. Be sure to study these command line arguments — it is critical that you know the difference between the two deep learning models and the SVM model. If you find yourself confused later in this script, you should refer back to here. Now that we’ve handled our imports and command line arguments, let’s load the three models from disk into memory: # load our serialized face detector from disk print("[INFO] loading face detector...") protoPath = os.path.sep.join([args["detector"], "deploy.prototxt"]) modelPath = os.path.sep.join([args["detector"], "res10_300x300_ssd_iter_140000.caffemodel"]) detector = cv2.dnn.readNetFromCaffe(protoPath, modelPath) detector.setPreferableTarget(cv2.dnn. DNN_TARGET_MYRIAD) # load our serialized face embedding model from disk and set the # preferable target to MYRIAD print("[INFO] loading face recognizer...") embedder = cv2.dnn.readNetFromTorch(args["embedding_model"]) embedder.setPreferableTarget(cv2.dnn. DNN_BACKEND_OPENCV) # load the actual face recognition model along with the label encoder recognizer = pickle.loads(open(args["recognizer"], "rb").read()) le = pickle.loads(open(args["le"], "rb").read()) We load three models in this block. At the risk of being redundant, here is a brief summary of the differences among the models: detector: A pre-trained Caffe DL model to detect where in the image the faces are (Lines 28-32).
embedder: A pre-trained Torch DL model to calculate our 128-D face embeddings (Lines 37 and 38). recognizer: Our SVM face recognition model (Line 41). The first two are pre-trained deep learning models, meaning that they are provided to you as-is by OpenCV. The Movidius NCS will perform inference using only the detector (Line 32). The embedder runs better on the Pi CPU (Line 38). The third recognizer model is not a form of deep learning. Rather, it is our SVM machine learning face recognition model. The RPi CPU will have to handle making face recognition predictions using it. We also load our label encoder which holds the names of the people our model can recognize (Line 42). Let’s initialize our video stream: # initialize the video stream, then allow the camera sensor to warm up print("[INFO] starting video stream...") #vs = VideoStream(src=0).start() vs = VideoStream(usePiCamera=True).start() time.sleep(2.0) # start the FPS throughput estimator fps = FPS().start() Line 47 initializes and starts our VideoStream object.
We wait for the camera sensor to warm up on Line 48. Line 51 initializes our FPS counter for benchmarking purposes. Frame processing begins with our while loop: # loop over frames from the video file stream while True: # grab the frame from the threaded video stream frame = vs.read() # resize the frame to have a width of 600 pixels (while # maintaining the aspect ratio), and then grab the image # dimensions frame = imutils.resize(frame, width=600) (h, w) = frame.shape[:2] # construct a blob from the image imageBlob = cv2.dnn.blobFromImage( cv2.resize(frame, (300, 300)), 1.0, (300, 300), (104.0, 177.0, 123.0), swapRB=False, crop=False) # apply OpenCV's deep learning-based face detector to localize # faces in the input image detector.setInput(imageBlob) detections = detector.forward() We grab a frame from the webcam on Line 56. We resize the frame (Line 61) and then construct a blob prior to detecting where the faces are (Lines 65-72). Given our new detections , let’s recognize faces in the frame. But, first we need to filter weak detections and extract the face ROI: # loop over the detections for i in range(0, detections.shape[2]): # extract the confidence (i.e., probability) associated with # the prediction confidence = detections[0, 0, i, 2] # filter out weak detections if confidence > args["confidence"]: # compute the (x, y)-coordinates of the bounding box for # the face box = detections[0, 0, i, 3:7] * np.array([w, h, w, h]) (startX, startY, endX, endY) = box.astype("int") # extract the face ROI face = frame[startY:endY, startX:endX] (fH, fW) = face.shape[:2] # ensure the face width and height are sufficiently large if fW < 20 or fH < 20: continue Here we loop over the detections on Line 75 and extract the confidence of each on Line 78. Then we compare the confidence to the minimum probability detection threshold contained in our command line args dictionary, ensuring that the computed probability is larger than the minimum probability (Line 81). From there, we extract the face ROI (Lines 84-89) as well as ensure it’s spatial dimensions are sufficiently large (Lines 92 and 93). Recognizing the name of the face ROI requires just a few steps: # construct a blob for the face ROI, then pass the blob # through our face embedding model to obtain the 128-d # quantification of the face faceBlob = cv2.dnn.blobFromImage(cv2.resize(face, (96, 96)), 1.0 / 255, (96, 96), (0, 0, 0), swapRB=True, crop=False) embedder.setInput(faceBlob) vec = embedder.forward() # perform classification to recognize the face preds = recognizer.predict_proba(vec)[0] j = np.argmax(preds) proba = preds[j] name = le.classes_[j] First, we construct a faceBlob (from the face ROI) and pass it through the embedder to generate a 128-D vector which quantifies the face (Lines 98-102) Then, we pass the vec through our SVM recognizer model (Line 105), the result of which is our predictions for who is in the face ROI. We take the highest probability index and query our label encoder to find the name (Lines 106-108).
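The note below suggests adding an extra probability test at this point; here is a minimal sketch of what that could look like (the 0.5 cutoff and the "unknown" label are illustrative choices, not part of the original script):

# minimal sketch: rejecting weak recognitions with a probability cutoff
T = 0.5  # illustrative threshold -- tune this for your own dataset

if proba < T:
    # fall back to a generic label instead of trusting a weak prediction
    name = "unknown"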
Note: You can further filter out weak face recognitions by applying an additional threshold test on the probability. For example, inserting if proba < T (where T is a variable you define) can provide an additional layer of filtering to ensure there are fewer false-positive face recognitions. Now, let’s display face recognition results for this particular frame: # draw the bounding box of the face along with the # associated probability text = "{}: {:.2f}%".format(name, proba * 100) y = startY - 10 if startY - 10 > 10 else startY + 10 cv2.rectangle(frame, (startX, startY), (endX, endY), (0, 0, 255), 2) cv2.putText(frame, text, (startX, y), cv2.FONT_HERSHEY_SIMPLEX, 0.45, (0, 0, 255), 2) # update the FPS counter fps.update() # show the output frame cv2.imshow("Frame", frame) key = cv2.waitKey(1) & 0xFF # if the `q` key was pressed, break from the loop if key == ord("q"): break # stop the timer and display FPS information fps.stop() print("[INFO] elasped time: {:.2f}".format(fps.elapsed())) print("[INFO] approx. FPS: {:.2f}".format(fps.fps())) # do a bit of cleanup cv2.destroyAllWindows() vs.stop() To close out the script, we: Draw a bounding box around the face and the person’s name and corresponding predicted probability (Lines 112-117). Update our fps counter (Line 120). Display the annotated frame (Line 123) and wait for the q key to be pressed at which point we break out of the loop (Lines 124-128). Stop our fps counter and print statistics in the terminal (Lines 131-133). Cleanup by closing windows and releasing pointers (Lines 136 and 137). Face Recognition with Movidius NCS Results Now that we have (1) extracted face embeddings, (2) trained a machine learning model on the embeddings, and (3) written our face recognition in video streams driver script, let’s see the final result. Ensure that you have followed the following steps: Step #1: Gather your face recognition dataset.
Step #2: Extract facial embeddings (via the extract_embeddings.py script). Step #3: Train a machine learning model on the set of embeddings (such as Support Vector Machines per today’s example) using train_model.py . From there, set up your Raspberry Pi and Movidius NCS for face recognition: Connect your PiCamera or USB camera and configure either Line 46 or Line 47 of the realtime face recognition script (but not both) to start your video stream. Plug in your Intel Movidius NCS2 (the NCS1 is also compatible). Start your openvino virtual environment and set the key environment variable as shown below: $ source ~/start_openvino.sh Starting Python 3.7 with OpenCV-OpenVINO 4.1.1 bindings... $ source setup.sh Using OpenVINO 4.1.1 is critical. The newer 4.1.2 has a number of issues causing it to not work well. From there, open up a terminal and execute the following command: $ python recognize_video.py --detector face_detection_model \ --embedding-model face_embedding_model/openface_nn4.small2.v1.t7 \ --recognizer output/recognizer.pickle \ --le output/le.pickle [INFO] loading face detector... [INFO] loading face recognizer... [INFO] starting video stream... [INFO] elasped time: 60.30 [INFO] approx. FPS: 6.29 Note: Ensure that the version of scikit-learn you use for deployment matches the version you use for training. If the versions do not match, then you may encounter a problem when you try to load the model from disk. In particular, you may encounter AttributeError: 'SVC' object has no attribute '_n_support'.
This is especially important if you are training on your laptop/desktop/cloud environment and deploying to a Raspberry Pi. It is very easy for the versions to be out of sync, so always be sure to check them in both places via pip freeze | grep scikit. To install a specific version in your environment, simply use this command: pip install scikit-learn==0.22.1, replacing the version as appropriate. As you can see, faces have correctly been identified. What’s more, we are achieving 6.29 FPS using the Movidius NCS in comparison to 2.59 FPS using strictly the CPU. That is roughly a 2.4x improvement (243% of the CPU-only throughput) on the RPi 4B with the Movidius NCS2. I asked PyImageSearch team member, Abhishek Thanki, to record a demo of our Movidius NCS face recognition in action. Below you can find the demo. As you can see, the combination of the Raspberry Pi and Movidius NCS is able to recognize Abhishek’s face in near real-time — using just the Raspberry Pi CPU alone would not be enough to obtain such speed. My face recognition system isn’t recognizing faces correctly Figure 4: Misclassified faces occur for a variety of reasons when performing Raspberry Pi and Movidius NCS face recognition. As a reminder, be sure to refer to the following two resources: OpenCV Face Recognition includes a section entitled “Drawbacks, limitations, and how to obtain higher face recognition accuracy”.
“How to obtain higher face recognition accuracy”, a section of Chapter 14, Face Recognition on the Raspberry Pi (Raspberry Pi for Computer Vision). Both resources help you in situations where OpenCV does not recognize a face correctly. In short, you may need: More data. This is the number one reason face recognition systems fail. I recommend 20-50 face images per person in your dataset as a general rule. To perform face alignment as each face ROI undergoes the embeddings process. To tune your machine learning classifier hyperparameters. Again, if your face recognition system is mismatching faces or marking faces as “Unknown” be sure to spend time improving your face recognition system. What's next? We recommend PyImageSearch University.
Course information: 84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024 ★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled I strongly believe that if you had the right teacher you could master computer vision and deep learning. Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science? That’s not the case. All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught. If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects.
Join me in computer vision mastery. Inside PyImageSearch University you'll find: ✓ 84 courses on essential computer vision, deep learning, and OpenCV topics ✓ 84 Certificates of Completion ✓ 114+ hours of on-demand video ✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques ✓ Pre-configured Jupyter Notebooks in Google Colab ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!) ✓ Access to centralized code repos for all 536+ tutorials on PyImageSearch ✓ Easy one-click downloads for code, datasets, pre-trained models, etc. ✓ Access on mobile, laptop, desktop, etc. Click here to join PyImageSearch University Summary In this tutorial, we used OpenVINO and our Movidius NCS to perform face recognition. Our face recognition pipeline was created using a four-stage process: Step #1: Create your dataset of face images. You can, of course, swap in your own face dataset provided you follow the same dataset directory structure of today’s project. Step #2: Extract face embeddings for each face in the dataset. Step #3: Train a machine learning model (Support Vector Machines) on top of the face embeddings. Step #4: Utilize OpenCV and our Movidius NCS to recognize faces in video streams.
Of the following two deep learning tasks, we put our Movidius NCS to work for only the first: Face detection: Localizing faces in an image (Movidius) Extracting face embeddings: Generating 128-D vectors which quantify a face numerically (CPU) We then used the Raspberry Pi CPU to also handle the non-DL machine learning classifier used to make predictions on the 128-D embeddings. It may seem like the CPU is doing more since it handles two of the tasks; just keep in mind that deep learning face detection is a very computationally “expensive” operation. This process of separating responsibilities allowed the CPU to call the shots, while employing the NCS for the heavy lifting. We achieved roughly a 2.4x speedup (243% of the CPU-only throughput) using the Movidius NCS for face recognition in video streams. To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), just drop your email in the form below! Download the Source Code and FREE 17-page Resource Guide Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Download the code!
https://pyimagesearch.com/2020/01/13/optimizing-dlib-shape-predictor-accuracy-with-find_min_global/
Click here to download the source code to this post. In this tutorial you will learn how to use dlib’s find_min_global function to optimize the options and hyperparameters to dlib’s shape predictor, yielding a more accurate model. A few weeks ago I published a two-part series on using dlib to train custom shape predictors: Part one: Training a custom dlib shape predictor Part two: Tuning dlib shape predictor hyperparameters to balance speed, accuracy, and model size When I announced the first post on social media, Davis King, the creator of dlib, chimed in and suggested that I demonstrate how to use dlib’s find_min_global function to optimize the shape predictor hyperparameters: Figure 1: Dlib’s creator and maintainer, Davis King, suggested that I write content on optimizing dlib shape predictor accuracy with find_min_global. I loved the idea and immediately began writing code and gathering results. Today I’m pleased to share the bonus guide on training dlib shape predictors and optimizing their hyperparameters. I hope you enjoy it! To learn how to use dlib’s find_min_global function to optimize shape predictor hyperparameters, just keep reading! Looking for the source code to this post? Jump Right To The Downloads Section Optimizing dlib shape predictor accuracy with find_min_global In the first part of this tutorial, we’ll discuss dlib’s find_min_global function and how it can be used to optimize the options/hyperparameters to a shape predictor. We’ll also compare and contrast find_min_global to a standard grid search. Next, we’ll discuss the dataset we’ll be using for this tutorial, including reviewing our directory structure for the project.
We’ll then open up our code editor and get our hands dirty by implementing three Python scripts including: A configuration file. A script used to optimize hyperparameters via find_min_global . A script used to take the best hyperparameters found via find_min_global and then train an optimal shape predictor using these values. We’ll wrap up the post with a short discussion on when you should be using find_min_global versus performing a standard grid hyperparameter search. Let’s get started! What does dlib’s find_min_global function do? And how can we use it to tune shape predictor options? Video Source: A Global Optimization Algorithm Worth Using by Davis King A few weeks ago you learned how to tune dlib’s shape predictor options using a systematic grid search. That method worked well enough, but the problem is a grid search isn’t a true optimizer! Instead, we hardcoded hyperparameter values we want to explore, the grid search computes all possible combinations of these values, and then explores them one-by-one.
Grid searches are computationally wasteful as the algorithm spends precious time and CPU cycles exploring hyperparameter combinations that will never yield the best possible results. Wouldn’t it be more advantageous if we could instead iteratively tune our options, ensuring that with each iteration we are incrementally improving our model? In fact, that’s exactly what dlib’s find_min_global function does! Davis King, the creator of the dlib library, documented his struggle with hyperparameter tuning algorithms, including: Guess and check: An expert uses his gut instinct and previous experience to manually set hyperparameters, run the algorithm, inspect the results, and then use the results to make an educated guess as to what the next set of hyperparameters to explore will be. Grid search: Hardcode all possible hyperparameter values you want to test, compute all possible combinations of these hyperparameters, and then let the computer test them all, one-by-one. Random search: Hardcode upper and lower limits/ranges on the hyperparameters you want to explore and then allow the computer to randomly sample the hyperparameter values within those ranges. Bayesian optimization: A global optimization strategy for black-box algorithms. This method often has more hyperparameters to tune than the original algorithm itself. Comparatively, you are better off using a “guess and check” strategy or throwing hardware at the problem via grid searching or random searching. Local optimization with a good initial guess: This method is good, but is limited to finding local optima with no guarantee it will find the global optimum.
Eventually, Davis came across Malherbe and Vayatis’s 2017 paper, Global optimization of Lipschitz functions, which he then implemented into the dlib library via the find_min_global function. Unlike Bayesian methods, which are nearly impossible to tune, and local optimization methods, which place no guarantees on a globally optimal solution, the Malherbe and Vayatis method is parameter-free and provably correct for finding a set of values that maximizes/minimizes a particular function. Davis has written extensively on the optimization method in the following blog post — I suggest you give it a read if you are interested in the mathematics behind the optimization method. The iBUG-300W dataset Figure 2: The iBug 300-W face landmark dataset is used to train a custom dlib shape predictor. Using dlib’s find_min_global optimization method, we will optimize an eyes-only shape predictor. To find the optimal dlib shape predictor hyperparameters, we’ll be using the iBUG 300-W dataset, the same dataset we used for our previous two-part series on shape predictors. The iBUG 300-W dataset is perfect for training facial landmark predictors to localize the individual structures of the face, including: Eyebrows Eyes Nose Mouth Jawline Shape predictor data files can become quite large. To combat this, we’ll be training our shape predictor to localize only the eyes rather than all face landmarks. You could just as easily train a shape predictor to recognize only the mouth, etc. Configuring your dlib development environment To follow along with today’s tutorial, you will need a virtual environment with the following packages installed: dlib OpenCV imutils scikit-learn Luckily, each of these packages is pip-installable.
That said, there are a handful of prerequisites (including Python virtual environments). Be sure to follow these two guides for additional information in configuring your development environment: Install dlib (the easy, complete guide) pip install opencv The pip install commands include: $ workon <env-name> $ pip install dlib $ pip install opencv-contrib-python $ pip install imutils $ pip install scikit-learn The workon command becomes available once you install virtualenv and virtualenvwrapper per either my dlib or OpenCV installation guides. Downloading the iBUG-300W dataset To follow along with this tutorial, you will need to download the iBUG 300-W dataset (~1.7GB): http://dlib.net/files/data/ibug_300W_large_face_landmark_dataset.tar.gz While the dataset is downloading, you should also use the “Downloads” section of this tutorial to download the source code. You can either (1) use the hyperlink above, or (2) use wget to download the dataset. Let’s cover both methods so that your project is organized just like my own. Option 1: Use the hyperlink above to download the dataset and then place the iBug 300-W dataset into the folder associated with the download of this tutorial like this: $ unzip tune-dlib-shape-predictor.zip ... $ cd tune-dlib-shape-predictor $ mv ~/Downloads/ibug_300W_large_face_landmark_dataset.tar.gz . $ tar -xvf ibug_300W_large_face_landmark_dataset.tar.gz ... Option 2: Rather than clicking the hyperlink above, use wget in your terminal to download the dataset directly: $ unzip tune-dlib-shape-predictor.zip ... $ cd tune-dlib-shape-predictor $ wget http://dlib.net/files/data/ibug_300W_large_face_landmark_dataset.tar.gz $ tar -xvf ibug_300W_large_face_landmark_dataset.tar.gz ... You’re now ready to follow along with the rest of the tutorial. Project structure Be sure to follow the previous section to both (1) download today’s .zip from the “Downloads” section, and (2) download the iBug 300-W dataset into today’s project. From there, go ahead and execute the tree command to see our project structure: % tree --dirsfirst --filelimit 10 . ├── ibug_300W_large_face_landmark_dataset │   ├── afw [1011 entries] │   ├── helen │   │   ├── testset [990 entries] │   │   └── trainset [6000 entries] │   ├── ibug [405 entries] │   ├── lfpw │   │   ├── testset [672 entries] │   │   └── trainset [2433 entries] │   ├── image_metadata_stylesheet.xsl │   ├── labels_ibug_300W.xml │   ├── labels_ibug_300W_test.xml │   └── labels_ibug_300W_train.xml ├── pyimagesearch │   ├── __init__.py │   └── config.py ├── best_predictor.dat ├── ibug_300W_large_face_landmark_dataset.tar.gz ├── parse_xml.py ├── predict_eyes.py ├── shape_predictor_tuner.py └── train_best_predictor.py 10 directories, 11 files As you can see, our dataset has been extracted into the ibug_300W_large_face_landmark_dataset/ directory following the instructions in the previous section.
Our configuration is housed in the pyimagesearch module. Our Python scripts consist of: parse_xml.py : First, you need to prepare and extract eyes-only landmarks from the iBug 300-W dataset, resulting in smaller XML files. We’ll review how to use the script in the next section, but we won’t review the script itself as it was covered in a previous tutorial. shape_predictor_tuner.py : This script takes advantage of dlib’s find_min_global method to find the best shape predictor. We will review this script in detail today. This script will take significant time to execute (multiple days). train_best_predictor.py : After the shape predictor is tuned, we’ll update our shape predictor options and start the training process. predict_eyes.py : Loads the serialized model, finds landmarks, and annotates them on a real-time video stream. We won’t cover this script today as we have covered it previously. Let’s get started!
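Before we wire find_min_global into shape predictor training, it may help to see the function on its own. Here is a minimal sketch on a toy problem (the Booth test function and the bounds are purely illustrative):

# minimal sketch: dlib.find_min_global on a toy two-variable function
import dlib

def booth(x, y):
    # Booth function -- global minimum of 0 at (x, y) = (1, 3)
    return (x + 2 * y - 7) ** 2 + (2 * x + y - 5) ** 2

(bestXY, bestVal) = dlib.find_min_global(
    booth,
    [-10.0, -10.0],          # lower bounds for x and y
    [10.0, 10.0],            # upper bounds for x and y
    num_function_calls=80)   # number of function evaluations
print(bestXY, bestVal)       # should land close to [1.0, 3.0] and 0.0

The call pattern is exactly what we will use later: our test_shape_predictor_params function plays the role of booth, and the lower/upper bounds come from the hyperparameter ranges we define.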
Preparing the iBUG-300W dataset Figure 3: In this tutorial, we will optimize a custom dlib shape predictor’s accuracy with find_min_global. As previously mentioned in the “The iBUG-300W dataset” section above, we will be training our dlib shape predictor on solely the eyes (i.e., not the eyebrows, nose, mouth or jawline). In order to do so, we’ll first parse out any facial structures we are not interested in from the iBUG 300-W training/testing XML files. At this point, ensure that you have: Used the “Downloads” section of this tutorial to download the source code. Used the “Downloading the iBUG-300W dataset” section above to download the iBUG-300W dataset. Reviewed the “Project structure” section so that you are familiar with the files and folders. Inside your directory structure there is a script named parse_xml.py — this script handles parsing out just the eye locations from the XML files. We reviewed this file in detail in my previous Training a Custom dlib Shape Predictor tutorial. We will not review the file again, so be sure to review it in the first tutorial of this series. Before you continue on with the rest of this tutorial you’ll need to execute the following command to prepare our “eyes only” training and testing XML files: $ python parse_xml.py \ --input ibug_300W_large_face_landmark_dataset/labels_ibug_300W_train.xml \ --output ibug_300W_large_face_landmark_dataset/labels_ibug_300W_train_eyes.xml [INFO] parsing data split XML file... $ python parse_xml.py \ --input ibug_300W_large_face_landmark_dataset/labels_ibug_300W_test.xml \ --output ibug_300W_large_face_landmark_dataset/labels_ibug_300W_test_eyes.xml [INFO] parsing data split XML file... Now let’s verify that the training/testing files have been created.
You should check your iBUG-300W root dataset directory for the labels_ibug_300W_train_eyes.xml and labels_ibug_300W_test_eyes.xml files as shown: $ cd ibug_300W_large_face_landmark_dataset $ ls -lh *.xml -rw-r--r--@ 1 adrian staff 21M Aug 16 2014 labels_ibug_300W.xml -rw-r--r--@ 1 adrian staff 2.8M Aug 16 2014 labels_ibug_300W_test.xml -rw-r--r-- 1 adrian staff 602K Dec 12 12:54 labels_ibug_300W_test_eyes.xml -rw-r--r--@ 1 adrian staff 18M Aug 16 2014 labels_ibug_300W_train.xml -rw-r--r-- 1 adrian staff 3.9M Dec 12 12:54 labels_ibug_300W_train_eyes.xml $ cd .. Notice that our *_eyes.xml files are highlighted. These files are significantly smaller in filesize than their original, non-parsed counterparts. Our configuration file Before we can use find_min_global to tune our hyperparameters, we first need to create a configuration file that will store all our important variables, ensuring we can use them and access them across multiple Python scripts. Open up the config.py file in your pyimagesearch module (following the project structure above) and insert the following code: # import the necessary packages import os # define the path to the training and testing XML files TRAIN_PATH = os.path.join("ibug_300W_large_face_landmark_dataset", "labels_ibug_300W_train_eyes.xml") TEST_PATH = os.path.join("ibug_300W_large_face_landmark_dataset", "labels_ibug_300W_test_eyes.xml") The os module (Line 2) allows our configuration script to join filepaths. Lines 5-8 join our training and testing XML landmark files. Let’s define our training parameters: # define the path to the temporary model file TEMP_MODEL_PATH = "temp.dat" # define the number of threads/cores we'll be using when trianing our # shape predictor models PROCS = -1 # define the maximum number of trials we'll be performing when tuning # our shape predictor hyperparameters MAX_FUNC_CALLS = 100 Here you will find: The path to the temporary model file (Line 11). The number of threads/cores to use when training (Line 15). A value of -1 indicates that all processor cores on your machine will be utilized. The maximum number of function calls that find_min_global will use when attempting to optimize our hyperparameters (Line 19). Smaller values will enable our tuning script to complete faster, but could lead to hyperparameters that are “less optimal”.
Larger values will take the tuning script significantly longer to run, but could lead to hyperparameters that are “more optimal”. Implementing the dlib shape predictor and find_min_global training script Now that we’ve reviewed our configuration file, we can move on to tuning our shape predictor hyperparameters using find_min_global. Open up the shape_predictor_tuner.py file in your project structure and insert the following code: # import the necessary packages from pyimagesearch import config from collections import OrderedDict import multiprocessing import dlib import sys import os # determine the number of processes/threads to use procs = multiprocessing.cpu_count() procs = config. PROCS if config. PROCS > 0 else procs Lines 2-7 import our necessary packages, namely our config and dlib . We will use the multiprocessing module to grab the number of CPUs/cores our system has (Lines 10 and 11). An OrderedDict will contain all of our dlib shape predictor options. Now let’s define a function responsible for the heart of shape predictor tuning with dlib: def test_shape_predictor_params(treeDepth, nu, cascadeDepth, featurePoolSize, numTestSplits, oversamplingAmount, oversamplingTransJitter, padding, lambdaParam): # grab the default options for dlib's shape predictor and then # set the values based on our current hyperparameter values, # casting to ints when appropriate options = dlib.shape_predictor_training_options() options.tree_depth = int(treeDepth) options.nu = nu options.cascade_depth = int(cascadeDepth) options.feature_pool_size = int(featurePoolSize) options.num_test_splits = int(numTestSplits) options.oversampling_amount = int(oversamplingAmount) options.oversampling_translation_jitter = oversamplingTransJitter options.feature_pool_region_padding = padding options.lambda_param = lambdaParam # tell dlib to be verbose when training and utilize our supplied # number of threads when training options.be_verbose = True options.num_threads = procs The test_shape_predictor_params function: Accepts an input set of hyperparameters. Trains a dlib shape predictor using those hyperparameters. Computes the predictor loss/error on our testing set.
Returns the error to the find_min_global function. The find_min_global function will then take the returned error and use it to adjust the optimal hyperparameters found thus far in an iterative fashion. As you can see, the test_shape_predictor_params function accepts nine parameters, each of which are dlib shape predictor hyperparameters that we’ll be optimizing. Lines 19-28 set the hyperparameter values from the parameters (casting to integers when appropriate). Lines 32 and 33 instruct dlib to be verbose with output and to utilize the supplied number of threads/processes for training. Let’s finish coding the test_shape_predictor_params function: # display the current set of options to our terminal print("[INFO] starting training...") print(options) sys.stdout.flush() # train the model using the current set of hyperparameters dlib.train_shape_predictor(config. TRAIN_PATH, config. TEMP_MODEL_PATH, options) # take the newly trained shape predictor model and evaluate it on # both our training and testing set trainingError = dlib.test_shape_predictor(config. TRAIN_PATH, config. TEMP_MODEL_PATH) testingError = dlib.test_shape_predictor(config.
TEST_PATH, config. TEMP_MODEL_PATH) # display the training and testing errors for the current trial print("[INFO] train error: {}".format(trainingError)) print("[INFO] test error: {}".format(testingError)) sys.stdout.flush() # return the error on the testing set return testingError Lines 41 and 42 train the dlib shape predictor using the current set of hyperparameters. From there, Lines 46-49 evaluate the newly trained shape predictor on training and testing set. Lines 52-54 print training and testing errors for the current trial before Line 57 returns the testingError to the calling function. Let’s define our set of shape predictor hyperparameters: # define the hyperparameters to dlib's shape predictor that we are # going to explore/tune where the key to the dictionary is the # hyperparameter name and the value is a 3-tuple consisting of the # lower range, upper range, and is/is not integer boolean, # respectively params = OrderedDict([ ("tree_depth", (2, 5, True)), ("nu", (0.001, 0.2, False)), ("cascade_depth", (4, 25, True)), ("feature_pool_size", (100, 1000, True)), ("num_test_splits", (20, 300, True)), ("oversampling_amount", (1, 40, True)), ("oversampling_translation_jitter", (0.0, 0.3, False)), ("feature_pool_region_padding", (-0.2, 0.2, False)), ("lambda_param", (0.01, 0.99, False)) ]) Each value in the OrderedDict is a 3-tuple consisting of: The lower bound on the hyperparameter value. The upper bound on the hyperparameter value. A boolean indicating whether the hyperparameter is an integer or not. For a full review of the hyperparameters, be sure to refer to my previous post. From here, we’ll extract our upper and lower bounds as well as whether a hyperparameter is an integer: # use our ordered dictionary to easily extract the lower and upper # boundaries of the hyperparamter range, include whether or not the # parameter is an integer or not lower = [v[0] for (k, v) in params.items()] upper = [v[1] for (k, v) in params.items()] isint = [v[2] for (k, v) in params.items()] Lines 79-81 extract the lower , upper , and isint boolean from our params dictionary. Now that we have the setup taken care of, let’s optimize our shape predictor hyperparameters using dlib’s find_min_global method: # utilize dlib to optimize our shape predictor hyperparameters (bestParams, bestLoss) = dlib.find_min_global( test_shape_predictor_params, bound1=lower, bound2=upper, is_integer_variable=isint, num_function_calls=config.
MAX_FUNC_CALLS) # display the optimal hyperparameters so we can reuse them in our # training script print("[INFO] optimal parameters: {}".format(bestParams)) print("[INFO] optimal error: {}".format(bestLoss)) # delete the temporary model file os.remove(config. TEMP_MODEL_PATH) Lines 84-89 start the optimization process. Lines 93 and 94 display the optimal parameters before Line 97 deletes the temporary model file. Tuning shape predictor options with find_min_global To use find_min_global to tune the hyperparameters to our dlib shape predictor, make sure you have: Used the “Downloads” section of this tutorial to download the source code. Downloaded the iBUG-300W dataset using the “Downloading the iBUG-300W dataset” section above. Executed the parse_xml.py for both the training and testing XML files in the “Preparing the iBUG-300W dataset” section. Provided you have accomplished each of these three steps, you can now execute the shape_predictor_tune.py script: $ time python shape_predictor_tune.py [INFO] starting training... shape_predictor_training_options(be_verbose=1, cascade_depth=15, tree_depth=4, num_trees_per_cascade_level=500, nu=0.1005, oversampling_amount=21, oversampling_translation_jitter=0.15, feature_pool_size=550, lambda_param=0.5, num_test_splits=160, feature_pool_region_padding=0, random_seed=, num_threads=20, landmark_relative_padding_mode=1) Training with cascade depth: 15 Training with tree depth: 4 Training with 500 trees per cascade level. Training with nu: 0.1005 Training with random seed: Training with oversampling amount: 21 Training with oversampling translation jitter: 0.15 Training with landmark_relative_padding_mode: 1 Training with feature pool size: 550 Training with feature pool region padding: 0 Training with 20 threads. Training with lambda_param: 0.5 Training with 160 split tests. Fitting trees... Training complete Training complete, saved predictor to file temp.dat [INFO] train error: 5.518466441668642 [INFO] test error: 6.977162396336371 [INFO] optimal inputs: [4.0, 0.1005, 15.0, 550.0, 160.0, 21.0, 0.15, 0.0, 0.5] [INFO] optimal output: 6.977162396336371 ... [INFO] starting training... shape_predictor_training_options(be_verbose=1, cascade_depth=20, tree_depth=4, num_trees_per_cascade_level=500, nu=0.1033, oversampling_amount=29, oversampling_translation_jitter=0, feature_pool_size=677, lambda_param=0.0250546, num_test_splits=295, feature_pool_region_padding=0.0974774, random_seed=, num_threads=20, landmark_relative_padding_mode=1) Training with cascade depth: 20 Training with tree depth: 4 Training with 500 trees per cascade level.
Training with nu: 0.1033 Training with random seed: Training with oversampling amount: 29 Training with oversampling translation jitter: 0 Training with landmark_relative_padding_mode: 1 Training with feature pool size: 677 Training with feature pool region padding: 0.0974774 Training with 20 threads. Training with lambda_param: 0.0250546 Training with 295 split tests. Fitting trees... Training complete Training complete, saved predictor to file temp.dat [INFO] train error: 2.1037606164427904 [INFO] test error: 4.225682000183475 [INFO] optimal parameters: [4.0, 0.10329967171060293, 20.0, 677.0, 295.0, 29.0, 0.0, 0.09747738830224817, 0.025054553453757795] [INFO] optimal error: 4.225682000183475 real 8047m24.389s user 98916m15.646s sys 464m33.139s On my iMac Pro with a 3 GHz Intel Xeon W processor with 20 cores, running a total of 100 function calls (MAX_FUNC_CALLS) took ~8047m24s, or ~5.6 days. If you don’t have a powerful computer, I would recommend running this procedure on a powerful cloud instance. Looking at the output you can see that the find_min_global function found the following optimal shape predictor hyperparameters: tree_depth: 4 nu: 0.1033 cascade_depth: 20 feature_pool_size: 677 num_test_splits: 295 oversampling_amount: 29 oversampling_translation_jitter: 0 feature_pool_region_padding: 0.0975 lambda_param: 0.0251 In the next section we’ll take these values and update our train_best_predictor.py script to include them. Updating our shape predictor options using the results from find_min_global At this point we know the best possible shape predictor hyperparameter values, but we still need to train our final shape predictor using these values. To do so, open up the train_best_predictor.py file and insert the following code: # import the necessary packages from pyimagesearch import config import multiprocessing import argparse import dlib # construct the argument parser and parse the arguments ap = argparse.ArgumentParser() ap.add_argument("-m", "--model", required=True, help="path serialized dlib shape predictor model") args = vars(ap.parse_args()) # determine the number of processes/threads to use procs = multiprocessing.cpu_count() procs = config.PROCS if config.PROCS > 0 else procs # grab the default options for dlib's shape predictor print("[INFO] setting shape predictor options...") options = dlib.shape_predictor_training_options() # update our hyperparameters options.tree_depth = 4 options.nu = 0.1033 options.cascade_depth = 20 options.feature_pool_size = 677 options.num_test_splits = 295 options.oversampling_amount = 29 options.oversampling_translation_jitter = 0 options.feature_pool_region_padding = 0.0975 options.lambda_param = 0.0251 # tell the dlib shape predictor to be verbose and print out status # messages as our model trains options.be_verbose = True # number of threads/CPU cores to be used when training -- we default # this value to the number of available cores on the system, but you # can supply an integer value here if you would like options.num_threads = procs # log our training options to the terminal print("[INFO] shape predictor options:") print(options) # train the shape predictor print("[INFO] training shape predictor...") dlib.train_shape_predictor(config.
TRAIN_PATH, args["model"], options) Lines 2-5 import our config , multiprocessing , argparse , and dlib . From there, we set the shape predictor options (Lines 14-39) using the optimal values we found from the previous section. And finally, Line 47 trains and exports the model. For a more detailed review of this script, be sure to refer to my previous tutorial. Training the final shape predictor The final step is to execute our train_best_predictor.py file which will train a dlib shape predictor using our best hyperparameter values found via find_min_global: $ time python train_best_predictor.py --model best_predictor.dat [INFO] setting shape predictor options... [INFO] shape predictor options: shape_predictor_training_options(be_verbose=1, cascade_depth=20, tree_depth=4, num_trees_per_cascade_level=500, nu=0.1033, oversampling_amount=29, oversampling_translation_jitter=0, feature_pool_size=677, lambda_param=0.0251, num_test_splits=295, feature_pool_region_padding=0.0975, random_seed=, num_threads=20, landmark_relative_padding_mode=1) [INFO] training shape predictor... Training with cascade depth: 20 Training with tree depth: 4 Training with 500 trees per cascade level. Training with nu: 0.1033 Training with random seed: Training with oversampling amount: 29 Training with oversampling translation jitter: 0 Training with landmark_relative_padding_mode: 1 Training with feature pool size: 677 Training with feature pool region padding: 0.0975 Training with 20 threads. Training with lambda_param: 0.0251 Training with 295 split tests. Fitting trees... Training complete Training complete, saved predictor to file best_predictor.dat real 111m46.444s user 1492m29.777s sys 5m39.150s After the command finishes executing you should have a file named best_predictor.dat in your local directory structure: $ ls -lh *.dat -rw-r--r--@ 1 adrian staff 24M Dec 22 12:02 best_predictor.dat You can then take this predictor and use it to localize eyes in real-time video using the predict_eyes.py script: $ python predict_eyes.py --shape-predictor best_predictor.dat [INFO] loading facial landmark predictor... [INFO] camera sensor warming up...   When should I use dlib’s find_min_global function? Figure 4: Using the find_min_global method to optimize a custom dlib shape predictor can take significant processing time. Be sure to review this section for general rules of thumb including guidance on when to use a Grid Search method to find a shape predictor model.
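As a quick refresher on what find_min_global is actually doing, here is a tiny standalone sketch that is not part of the tutorial's code; the objective function and bounds below are made up purely for illustration:

import dlib

def objective(x, y):
    # a simple bowl-shaped function whose minimum sits at (0.5, -2.0)
    return (x - 0.5) ** 2 + (y + 2.0) ** 2

# search both variables inside [-10, 10] using 50 objective evaluations
(bestInputs, bestLoss) = dlib.find_min_global(
    objective,
    bound1=[-10.0, -10.0],
    bound2=[10.0, 10.0],
    num_function_calls=50)

print(bestInputs)   # should be close to [0.5, -2.0]
print(bestLoss)     # should be close to 0.0

The optimizer probes the bounded region, models the objective, and converges toward the minimum, which is exactly what happens above with test_shape_predictor_params standing in as the (much more expensive) objective.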
Unlike a standard grid search for tuning hyperparameters, which blindly explores sets of hyperparameters, the find_min_global function is a true optimizer, enabling it to iteratively explore the hyperparameter space, choosing options that maximize our accuracy and minimize our loss/error. However, one of the downsides of find_min_global is that it cannot be made parallel in an easy fashion. A standard grid search, on the other hand, can be made parallel by: Dividing all combinations of hyperparameters into N size chunks And then distributing each of the chunks across M systems Doing so would lead to faster hyperparameter space exploration than using find_min_global. The downside is that you may not have the “true” best choices of hyperparameters since a grid search can only explore values that you have hardcoded. Therefore, I recommend the following rule of thumb: If you have multiple machines, use a standard grid search and distribute the work across the machines. After the grid search completes, take the best values found and then use them as inputs to dlib’s find_min_global to find your best hyperparameters. If you have a single machine use dlib’s find_min_global, making sure to trim down the ranges of hyperparameters you want to explore. For instance, if you know you want a small, fast model, you should cap the upper range limit of tree_depth, preventing your ERTs from becoming too deep (and therefore slower). While dlib’s find_min_global function is quite powerful, it can also be slow, so make sure you take care to think ahead and plan out which hyperparameters are truly important for your application. You should also read my previous tutorial on training a custom dlib shape predictor for a detailed review of what each of the hyperparameters controls and how they can be used to balance speed, accuracy, and model size.
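To make the chunking idea concrete, here is a rough sketch (not from this tutorial; the grid values and number of machines are invented for illustration) of how you could enumerate a hyperparameter grid with itertools.product and split it into chunks to hand off to separate machines:

from itertools import product

# a small, hardcoded grid -- a real search would likely use more values
grid = {
    "tree_depth": [2, 3, 4, 5],
    "nu": [0.01, 0.1, 0.25],
    "cascade_depth": [6, 10, 15],
}

# enumerate every combination of hyperparameters
keys = list(grid.keys())
combos = [dict(zip(keys, values)) for values in product(*grid.values())]

# split the combinations into N roughly equal chunks, one per machine
N = 4
chunks = [combos[i::N] for i in range(N)]

for (i, chunk) in enumerate(chunks):
    print("machine {} would evaluate {} combinations".format(i, len(chunk)))

Each machine would then train and evaluate only its own chunk, and you would gather the best result across all machines before (optionally) refining it with find_min_global.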
Use these recommendations and you’ll be able to successfully tune and optimize your dlib shape predictors.
Summary In this tutorial you learned how to use dlib’s find_min_global function to optimize options/hyperparameters when training a custom shape predictor. The function is incredibly easy to use and makes it dead simple to tune the hyperparameters to your dlib shape predictor. I would also recommend you use my previous tutorial on tuning dlib shape predictor options via a grid search — combining a grid search (using multiple machines) with find_min_global can lead to a superior shape predictor.
I hope you enjoyed this blog post!
https://pyimagesearch.com/2020/01/20/intro-to-anomaly-detection-with-opencv-computer-vision-and-scikit-learn/
In this tutorial, you will learn how to perform anomaly/novelty detection in image datasets using OpenCV, Computer Vision, and the scikit-learn machine learning library. Imagine this — you’re fresh out of college with a degree in Computer Science. You focused your studies specifically on computer vision and machine learning. Your first job out of school is with the United States National Parks department. Your task? Build a computer vision system that can automatically recognize flower species in the park. Such a system can be used to detect invasive plant species that may be harmful to the overall ecosystem of the park. You recognize immediately that computer vision can be used to recognize flower species. But first you need to: Gather example images of each flower species in the park (i.e., build a dataset). Quantify the image dataset and train a machine learning model to recognize the species.
Spot when outlier/anomaly plant species are detected, so that a trained botanist can inspect the plant and determine if it’s harmful to the park’s environment. Steps #1 and #2 are fairly straightforward — but Step #3 is substantially harder to perform. How are you supposed to train a machine learning model to automatically detect if a given input image is outside the “normal distribution” of what plants look like in the park? The answer lies in a special class of machine learning algorithms, including outlier detection and novelty/anomaly detection. In the remainder of this tutorial, you’ll learn the difference between these algorithms and how you can use them to spot outliers and anomalies in your own image datasets. To learn how to perform anomaly/novelty detection in image datasets, just keep reading! Intro to anomaly detection with OpenCV, Computer Vision, and scikit-learn In the first part of this tutorial, we’ll discuss the difference between standard events that occur naturally and outlier/anomaly events. We’ll also discuss why these types of events can be especially hard for machine learning algorithms to detect. From there we’ll review our example dataset for this tutorial.
I’ll then show you how to: Load our input images from disk. Quantify them. Train a machine learning model used for anomaly detection on our quantified images. From there we’ll be able to detect outliers/anomalies in new input images. Let’s get started! What are outliers and anomalies? And why are they hard to detect? Figure 1: Scikit-learn’s definition of an outlier is an important concept for anomaly detection with OpenCV and computer vision (image source). Anomalies are defined as events that deviate from the standard, rarely happen, and don’t follow the rest of the “pattern”. Examples of anomalies include: Large dips and spikes in the stock market due to world events Defective items in a factory/on a conveyor belt Contaminated samples in a lab If you were to think of a bell curve, anomalies exist on the far, far ends of the tails.
Figure 2: Anomalies exist at either side of a bell curve. In this tutorial we will conduct anomaly detection with OpenCV, computer vision, and scikit-learn (image source). These events will occur, but will happen with an incredibly small probability. From a machine learning perspective, this makes detecting anomalies hard — by definition, we have many examples of “standard” events and few examples of “anomaly” events. We, therefore, have a massive skew in our dataset. How are machine learning algorithms, which tend to work optimally with balanced datasets, supposed to work when the anomalies we want to detect may only happen 1%, 0.1%, or 0.0001% of the time? Luckily, machine learning researchers have investigated this type of problem and have devised algorithms to handle the task. Anomaly detection algorithms Figure 3: To detect anomalies in time-series data, be on the lookout for spikes as shown. We will use scikit-learn, computer vision, and OpenCV to detect anomalies in this tutorial (image source). Anomaly detection algorithms can be broken down into two subclasses: Outlier detection: Our input dataset contains examples of both standard events and anomaly events.
These algorithms seek to fit regions of the training data where the standard events are most concentrated, disregarding, and therefore isolating, the anomaly events. Such algorithms are often trained in an unsupervised fashion (i.e., without labels). We sometimes use these methods to help clean and pre-process datasets before applying additional machine learning techniques. Novelty detection: Unlike outlier detection, which includes examples of both standard and anomaly events, novelty detection algorithms have only the standard event data points (i.e., no anomaly events) during training time. During training, we provide these algorithms with labeled examples of standard events (supervised learning). At testing/prediction time novelty detection algorithms must detect when an input data point is an outlier. Outlier detection is a form of unsupervised learning. Here we provide our entire dataset of example data points and ask the algorithm to group them into inliers (standard data points) and outliers (anomalies). Novelty detection is a form of supervised learning, but we only have labels for the standard data points — it’s up to the novelty detection algorithm to predict if a given data point is an inlier or outlier at test time. In the remainder of this blog post, we’ll be focusing on novelty detection as a form of anomaly detection.
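To make the distinction concrete, here is a tiny novelty detection sketch on 2-D toy points rather than images; the data, the choice of a One-class SVM, and the parameters are illustrative only:

import numpy as np
from sklearn.svm import OneClassSVM

# "standard" training data: 2-D points clustered around (2, 2)
rng = np.random.RandomState(42)
X_train = 2.0 + 0.3 * rng.randn(200, 2)

# fit the novelty detector on standard examples only (no anomalies seen)
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
model.fit(X_train)

# query two points: the first looks "standard", the second clearly does not
X_test = np.array([[2.1, 1.9], [8.0, -5.0]])
print(model.predict(X_test))   # expected: [ 1 -1] (1 = inlier, -1 = anomaly)

The key point is that the model only ever sees "standard" examples during fit, yet it can still flag unseen outliers at prediction time.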
Isolation Forests for anomaly detection Figure 4: A technique called “Isolation Forests” based on Liu et al.’s 2012 paper is used to conduct anomaly detection with OpenCV, computer vision, and scikit-learn (image source). We’ll be using Isolation Forests to perform anomaly detection, based on Liu et al.’s 2012 paper, Isolation-Based Anomaly Detection. Isolation forests are a type of ensemble algorithm and consist of multiple decision trees used to partition the input dataset into distinct groups of inliers. As Figure 4 shows above, Isolation Forests accept an input dataset (white points) and then build a manifold surrounding them. At test time, the Isolation Forest can then determine if the input points fall inside the manifold (standard events; green points) or outside the high-density area (anomaly events; red points). Reviewing how an Isolation Forest constructs its ensemble of partitioning trees is outside the scope of this post, so be sure to refer to Liu et al.’s paper for more details. Configuring your anomaly detection development environment To follow along with today’s tutorial, you will need a Python 3 virtual environment with the following packages installed: scikit-learn OpenCV NumPy imutils Luckily, each of these packages is pip-installable, but there are a handful of pre-requisites (including Python virtual environments).
Be sure to follow the following guide first to set up your virtual environment with OpenCV: pip install opencv Once your Python 3 virtual environment is ready, the pip install commands include: $ workon <env-name> $ pip install numpy $ pip install opencv-contrib-python $ pip install imutils $ pip install scikit-learn Note: The workon command becomes available once you install virtualenv and virtualenvwrapper per the pip install opencv installation guide. Project structure Be sure to grab the source code and example images for today's post using the “Downloads” section of the tutorial. After you unarchive the .zip file you’ll be presented with the following project structure: $ tree --dirsfirst . ├── examples │   ├── coast_osun52.jpg │   ├── forest_cdmc290.jpg │   └── highway_a836030.jpg ├── forest │   ├── forest_bost100.jpg │   ├── forest_bost101.jpg │   ├── forest_bost102.jpg │   ├── forest_bost103.jpg │   ├── forest_bost98.jpg │   ├── forest_cdmc458.jpg │   ├── forest_for119.jpg │   ├── forest_for121.jpg │   ├── forest_for127.jpg │   ├── forest_for130.jpg │   ├── forest_for136.jpg │   ├── forest_for137.jpg │   ├── forest_for142.jpg │   ├── forest_for143.jpg │   ├── forest_for146.jpg │   └── forest_for157.jpg ├── pyimagesearch │   ├── __init__.py │   └── features.py ├── anomaly_detector.model ├── test_anomaly_detector.py └── train_anomaly_detector.py 3 directories, 24 files Our project consists of forest/ images and examples/ testing images. Our anomaly detector will try to determine if any of the three examples is an anomaly compared to the set of forest images. Inside the pyimagesearch module is a file named features.py. This script contains two functions responsible for loading our image dataset from disk and calculating the color histogram features for each image. We will operate our system in two stages — (1) training, and (2) testing. First, the train_anomaly_detector.py script calculates features and trains an Isolation Forest machine learning model for anomaly detection, serializing the result as anomaly_detector.model. Then we’ll develop test_anomaly_detector.py which accepts an example image and determines if it is an anomaly.
Our example image dataset Figure 5: We will use a subset of the 8Scenes dataset to detect anomalies among pictures of forests using scikit-learn, OpenCV, and computer vision. Our example dataset for this tutorial includes 16 images of forests (each of which is shown in Figure 5 above). These example images are a subset of the 8 Scenes Dataset from Oliva and Torralba’s paper, Modeling the shape of the scene: a holistic representation of the spatial envelope. We’ll take this dataset and train an anomaly detection algorithm on top of it. When presented with a new input image, our anomaly detection algorithm will return one of two values: 1: “Yep, that’s a forest.” -1: “No, doesn’t look like a forest. It must be an outlier.” You can thus think of this model as a “forest” vs “not forest” detector. This model was trained on forest images and now must decide if a new input image fits inside the “forest manifold” or if it is truly an anomaly/outlier. To evaluate our anomaly detection algorithm we have 3 testing images: Figure 6: Three testing images are included in today’s Python + computer vision anomaly detection project.
As you can see, only one of these images is a forest — the other two are examples of highways and beach coasts, respectively. If our anomaly detection pipeline is working properly, our model should return 1 (inlier) for the forest image and -1 for the two non-forest images. Implementing our feature extraction and dataset loader helper functions Figure 7: Color histograms characterize the color distribution of an image. Color will be the basis of our anomaly detection introduction using OpenCV, computer vision, and scikit-learn. Before we can train a machine learning model to detect anomalies and outliers, we must first define a process to quantify and characterize the contents of our input images. To accomplish this task, we’ll be using color histograms. Color histograms are simple yet effective methods to characterize the color distribution of an image. Since our task here is to characterize forest vs. non-forest images, we may assume that forest images will contain more shades of green versus their non-forest counterparts. Let’s take a look at how we can implement color histogram extraction using OpenCV. Open up the features.py file in the pyimagesearch module and insert the following code: # import the necessary packages from imutils import paths import numpy as np import cv2 def quantify_image(image, bins=(4, 6, 3)): # compute a 3D color histogram over the image and normalize it hist = cv2.calcHist([image], [0, 1, 2], None, bins, [0, 180, 0, 256, 0, 256]) hist = cv2.normalize(hist, hist).flatten() # return the histogram return hist Lines 2-4 import our packages.
We’ll use paths from my imutils package to list all images in an input directory. OpenCV will be used to calculate and normalize histograms. NumPy is used for array operations. Now that imports are taken care of, let’s define the quantify_image function. This function accepts two parameters: image : The OpenCV-loaded image. bins : When plotting the histogram, the x-axis serves as our “bins.” In this case our default specifies 4 hue bins, 6 saturation bins, and 3 value bins. Here’s a brief example — if we use only 2 (equally spaced) bins, then we are counting the number of times a pixel is in the range [0, 128] or [128, 255]. The number of pixels binned to the x-axis value is then plotted on the y-axis. Note: To learn more about both histograms and color spaces including HSV, RGB, and L*a*b, and Grayscale, be sure to refer to Practical Python and OpenCV and PyImageSearch Gurus.
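To make the binning concrete, here is a quick sketch that computes a 4x6x3 histogram on a synthetic HSV-like image; the input is random noise and is used only to show the resulting feature dimensionality:

import numpy as np
import cv2

# a synthetic "HSV" image: hue in [0, 180), saturation/value in [0, 256)
rng = np.random.RandomState(42)
hsv = np.dstack([
    rng.randint(0, 180, (100, 100)).astype("uint8"),
    rng.randint(0, 256, (100, 100)).astype("uint8"),
    rng.randint(0, 256, (100, 100)).astype("uint8")])

# 4 hue bins x 6 saturation bins x 3 value bins = a 72-D feature vector
hist = cv2.calcHist([hsv], [0, 1, 2], None, (4, 6, 3),
    [0, 180, 0, 256, 0, 256])
hist = cv2.normalize(hist, hist).flatten()
print(hist.shape)   # (72,)

With the (3, 3, 3) bins used later in this tutorial, the same computation yields a 27-D feature vector per image.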
Lines 8-10 compute the color histogram and normalize it. Normalization allows us to count percentage and not raw frequency counts, helping in the case that some images are larger or smaller than others. Line 13 returns the normalized histogram to the caller. Our next function handles: Accepting the path to a directory containing our dataset of images. Looping over the image paths while quantifying them using our quantify_image method. Let’s take a look at this method now: def load_dataset(datasetPath, bins): # grab the paths to all images in our dataset directory, then # initialize our lists of images imagePaths = list(paths.list_images(datasetPath)) data = [] # loop over the image paths for imagePath in imagePaths: # load the image and convert it to the HSV color space image = cv2.imread(imagePath) image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV) # quantify the image and update the data list features = quantify_image(image, bins) data.append(features) # return our data list as a NumPy array return np.array(data) Our load_dataset function accepts two parameters: datasetPath : The path to our dataset of images. bins : Num of bins for the color histogram. Refer to the explanation above. The bins are passed to the quantify_image function. Line 18 grabs all image paths in the datasetPath .
Line 19 initializes a list to hold our features data . From there, Line 22 begins a loop over the imagePaths . Inside the loop we load an image and convert it to the HSV color space (Lines 24 and 25). Then we quantify the image , and add the resulting features to the data list (Lines 28 and 29). Finally, Line 32 returns our data list as a NumPy array to the caller. Implementing our anomaly detection training script with scikit-learn With our helper functions implemented we can now move on to training an anomaly detection model. As mentioned earlier in this tutorial, we’ll be using an Isolation Forest to help determine anomaly/novelty data points. Our implementation of Isolation Forests comes from the scikit-learn library. Open up the train_anomaly_detector.py file and let’s get to work: # import the necessary packages from pyimagesearch.features import load_dataset from sklearn.ensemble import IsolationForest import argparse import pickle # construct the argument parser and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-d", "--dataset", required=True, help="path to dataset of images") ap.add_argument("-m", "--model", required=True, help="path to output anomaly detection model") args = vars(ap.parse_args()) Lines 2-6 handle our imports.
This script uses our custom load_dataset function and scikit-learn’s implementation of Isolation Forests. We’ll serialize our resulting model as a pickle file. Lines 8-13 parse our command line arguments including: --dataset : The path to our dataset of images. --model : The path to the output anomaly detection model. At this point, we’re ready to load our dataset and train our Isolation Forest model: # load and quantify our image dataset print("[INFO] preparing dataset...") data = load_dataset(args["dataset"], bins=(3, 3, 3)) # train the anomaly detection model print("[INFO] fitting anomaly detection model...") model = IsolationForest(n_estimators=100, contamination=0.01, random_state=42) model.fit(data) Line 17 loads and quantifies the image dataset. Lines 21 and 22 initialize our IsolationForest model with the following parameters: n_estimators : The number of base estimators (i.e., trees) in the ensemble. contamination : The proportion of outliers in the dataset. random_state : The random number generator seed value for reproducibility. You can use any integer; 42 is commonly used in the machine learning world as it relates to a joke in the book, Hitchhiker’s Guide to the Galaxy. Be sure to refer to other optional parameters to the Isolation Forest in the scikit-learn documentation.
Line 23 trains the anomaly detector on top of the histogram data. Now that our model is trained, the remaining lines serialize the anomaly detector to a pickle file on disk: # serialize the anomaly detection model to disk f = open(args["model"], "wb") f.write(pickle.dumps(model)) f.close() Training our anomaly detector Now that we have implemented our anomaly detection training script, let’s put it to work. Start by making sure you have used the “Downloads” section of this tutorial to download the source code and example images. From there, open up a terminal and execute the following command: $ python train_anomaly_detector.py --dataset forest --model anomaly_detector.model [INFO] preparing dataset... [INFO] fitting anomaly detection model... To verify that the anomaly detector has been serialized to disk, check the contents of your working project directory: $ ls *.model anomaly_detector.model Creating the anomaly detector testing script At this point we have trained our anomaly detection model — but how do we use it to actually detect anomalies in new data points? To answer that question, let’s look at the test_anomaly_detector.py script. At a high-level, this script: Loads the anomaly detection model trained in the previous step. Loads, preprocesses, and quantifies a query image. Makes a prediction with our anomaly detector to determine if the query image is an inlier or an outlier (i.e. anomaly). Displays the result. Go ahead and open test_anomaly_detector.py and insert the following code: # import the necessary packages from pyimagesearch.features import quantify_image import argparse import pickle import cv2 # construct the argument parser and parse the arguments ap = argparse.
ArgumentParser() ap.add_argument("-m", "--model", required=True, help="path to trained anomaly detection model") ap.add_argument("-i", "--image", required=True, help="path to input image") args = vars(ap.parse_args()) Lines 2-5 handle our imports. Notice that we import our custom quantify_image function to calculate features on our input image. We also import pickle to load our anomaly detection model. OpenCV will be used for loading, preprocessing, and displaying images. Our script requires two command line arguments: --model : The serialized anomaly detector residing on disk. --image : The path to the input image (i.e. our query). Let’s load our anomaly detector and quantify our input image: # load the anomaly detection model print("[INFO] loading anomaly detection model...") model = pickle.loads(open(args["model"], "rb").read()) # load the input image, convert it to the HSV color space, and # quantify the image in the *same manner* as we did during training image = cv2.imread(args["image"]) hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV) features = quantify_image(hsv, bins=(3, 3, 3)) Line 17 loads our pre-trained anomaly detector. Lines 21-23 load, preprocess, and quantify our input image . Our preprocessing steps must be the same as in our training script (i.e. converting from BGR to HSV color space). At this point, we’re ready to make an anomaly prediction and display results: # use the anomaly detector model and extracted features to determine # if the example image is an anomaly or not preds = model.predict([features])[0] label = "anomaly" if preds == -1 else "normal" color = (0, 0, 255) if preds == -1 else (0, 255, 0) # draw the predicted label text on the original image cv2.putText(image, label, (10, 25), cv2.FONT_HERSHEY_SIMPLEX, 0.7, color, 2) # display the image cv2.imshow("Output", image) cv2.waitKey(0) Line 27 makes predictions on the input image features .
Our anomaly detection model will return 1 for a “normal” data point and -1 for an “outlier”. Line 28 assigns either an "anomaly" or "normal" label to our prediction. Lines 32-37 then annotate the label onto the query image and display it on screen until any key is pressed. Detecting anomalies in image datasets using computer vision and scikit-learn To see our anomaly detection model in action make sure you have used the “Downloads” section of this tutorial to download the source code, example image dataset, and pre-trained model. From there, you can use the following command to test the anomaly detector: $ python test_anomaly_detector.py --model anomaly_detector.model \ --image examples/forest_cdmc290.jpg [INFO] loading anomaly detection model... Figure 8: This image is clearly not an anomaly as it is a green forest. Our intro to anomaly detection method with computer vision and Python has passed the first test. Here you can see that our anomaly detector has correctly labeled the forest as an inlier. Let’s now see how the model handles an image of a highway, which is certainly not a forest: $ python test_anomaly_detector.py --model anomaly_detector.model \ --image examples/highway_a836030.jpg [INFO] loading anomaly detection model... Figure 9: A highway is an anomaly compared to our set of forest images and has been marked as such in the top-left corner. This tutorial presents an intro to anomaly detection with OpenCV, computer vision, and scikit-learn. Our anomaly detector correctly labels this image as an outlier/anomaly.
As a final test, let’s supply an image of a beach/coast to the anomaly detector: $ python test_anomaly_detector.py --model anomaly_detector.model \ --image examples/coast_osun52.jpg [INFO] loading anomaly detection model... Figure 10: A coastal landscape is marked as an anomaly against a set of forest images using Python, OpenCV, scikit-learn, and computer vision anomaly detection techniques. Once again, our anomaly detector correctly identifies the image as an outlier/anomaly.
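If you would like to test all of the example images in one go, a small convenience sketch along these lines could work; it assumes the project layout, the quantify_image helper, and the anomaly_detector.model file from this tutorial, and it is not part of the official downloads:

import pickle
import cv2
from imutils import paths
from pyimagesearch.features import quantify_image

# load the serialized Isolation Forest trained earlier in this tutorial
model = pickle.loads(open("anomaly_detector.model", "rb").read())

# loop over every image inside examples/ and print the prediction
for imagePath in sorted(paths.list_images("examples")):
    image = cv2.imread(imagePath)
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    features = quantify_image(hsv, bins=(3, 3, 3))
    pred = model.predict([features])[0]
    print("{}: {}".format(imagePath, "anomaly" if pred == -1 else "normal"))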
Summary In this tutorial you learned how to perform anomaly and outlier detection in image datasets using computer vision and the scikit-learn machine learning library. To perform anomaly detection, we: Gathered an example image dataset of forest images.
Quantified the image dataset using color histograms and the OpenCV library. Trained an Isolation Forest on our quantified images. Used the Isolation Forest to detect image outliers and anomalies. Along with Isolation Forests you should also investigate One-class SVMs, Elliptic Envelopes, and Local Outlier Factor algorithms as they can be used for outlier/anomaly detection as well. But what about deep learning? Can deep learning be used to perform anomaly detection too? I’ll answer that question in a future tutorial.
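As for the alternative detectors mentioned above, they all expose the same fit/predict interface in scikit-learn, so swapping them into this pipeline is mostly a one-line change. The sketch below uses random stand-in data and illustrative (untuned) parameters:

import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.covariance import EllipticEnvelope
from sklearn.neighbors import LocalOutlierFactor

# stand-in for the (numImages x 27) histogram feature matrix from load_dataset
rng = np.random.RandomState(42)
data = rng.rand(200, 27)

detectors = {
    "one-class SVM": OneClassSVM(kernel="rbf", gamma="scale", nu=0.01),
    "elliptic envelope": EllipticEnvelope(contamination=0.01),
    # novelty=True is required to call predict() on new, unseen data
    "local outlier factor": LocalOutlierFactor(n_neighbors=20, novelty=True),
}

# each detector exposes the same fit/predict interface (1 = inlier, -1 = outlier)
for (name, detector) in detectors.items():
    detector.fit(data)
    print(name, detector.predict(rng.rand(1, 27)))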
https://pyimagesearch.com/2020/01/27/yolo-and-tiny-yolo-object-detection-on-the-raspberry-pi-and-movidius-ncs/
In this tutorial, you will learn how to utilize YOLO and Tiny-YOLO for near real-time object detection on the Raspberry Pi with a Movidius NCS. The YOLO object detector is often cited as being one of the fastest deep learning-based object detectors, achieving a higher FPS rate than computationally expensive two-stage detectors (ex. Faster R-CNN) and some single-stage detectors (ex. RetinaNet and some, but not all, variations of SSDs). However, even with all that speed, YOLO is still not fast enough to run on embedded devices such as the Raspberry Pi — even with the aid of the Movidius NCS. To help make YOLO even faster, Redmon et al. (the creators of YOLO) defined a variation of the YOLO architecture called Tiny-YOLO. The Tiny-YOLO architecture is approximately 442% faster than its larger big brothers, achieving upwards of 244 FPS on a single GPU. The small model size (< 50MB) and fast inference speed make the Tiny-YOLO object detector naturally suited for embedded computer vision/deep learning devices such as the Raspberry Pi, Google Coral, and NVIDIA Jetson Nano. Today you’ll learn how to take Tiny-YOLO and then deploy it to the Raspberry Pi using a Movidius NCS to obtain near real-time object detection.
To learn how to utilize YOLO and TinyYOLO for object detection on the Raspberry Pi with the Movidius NCS, just keep reading! YOLO and Tiny-YOLO object detection on the Raspberry Pi and Movidius NCS In the first part of this tutorial, we’ll learn about the YOLO and Tiny-YOLO object detectors. From there, I’ll show you how to configure your Raspberry Pi and OpenVINO development environment so that they can utilize Tiny-YOLO. We’ll then review our directory structure for the project, including a shell script required to properly access your OpenVINO environment. Once we understand our project structure, we’ll move on to implementing a Python script that: Accesses our OpenVINO environment. Reads frames from a video stream. Performs near real-time object detection using a Raspberry Pi, Movidius NCS, and Tiny-YOLO. We’ll wrap up the tutorial by examining the results of our script. What are YOLO and Tiny-YOLO?
Figure 1: Tiny-YOLO has a lower mAP score on the COCO dataset than most object detectors. That said, Tiny-YOLO may be a useful object detector to pair with your Raspberry Pi and Movidius NCS. ( image source) Tiny-YOLO is a variation of the “You Only Look Once” (YOLO) object detector proposed by Redmon et al. in their 2016 paper, You Only Look Once: Unified, Real-Time Object Detection. YOLO was created to help improve the speed of slower two-stage object detectors, such as Faster R-CNN. While R-CNNs are accurate they are quite slow, even when running on a GPU. On the contrary, single-stage detectors such as YOLO are quite fast, obtaining super real-time performance on a GPU. The downside, of course, is that YOLO tends to be less accurate (and in my experience, much harder to train than SSDs or RetinaNet). Since Tiny-YOLO is a smaller version than its big brothers, this also means that Tiny-YOLO is unfortunately even less accurate. For reference, Redmon et al.
report ~51-57% mAP for YOLO on the COCO benchmark dataset while Tiny-YOLO is only 23.7% mAP — less than half of the accuracy of its bigger brothers. That said, 23% mAP is still reasonable enough for some applications. My general advice when using YOLO is to “simply give it a try”: In some cases, it may work perfectly fine for your project. And in others, you may seek more accurate detectors (Faster R-CNN, SSDs, RetinaNet, etc.). To learn more about YOLO, Tiny-YOLO, and other YOLO variants, be sure to refer to Redmon et al. ’s 2018 publication. Configuring your Raspberry Pi + OpenVINO environment Figure 2: Configuring the OpenVINO toolkit for your Raspberry Pi and Movidius NCS to conduct TinyYOLO object detection. This tutorial requires a Raspberry Pi 4B and Movidius NCS2 (the NCS1 is not supported) in order to replicate my results. Configuring your Raspberry Pi with the Intel Movidius NCS for this project is admittedly challenging. I suggest you (1) pick up a copy of Raspberry Pi for Computer Vision, and (2) flash the included pre-configured .img to your microSD.
The .img that comes included with the book is worth its weight in gold as it will save you countless hours of toiling and frustration. For the stubborn few who wish to configure their Raspberry Pi + OpenVINO on their own, here is a brief guide: Head to my BusterOS install guide and follow all instructions to create an environment named cv. Follow my OpenVINO installation guide and create a 2nd environment named openvino. Be sure to download OpenVINO 4.1.1 (4.1.2 has unresolved issues). You will need a package called JSON-Minify to parse our JSON configuration. You may install it into your virtual environment: $ pip install json_minify At this point, your RPi will have both a normal OpenCV environment as well as an OpenVINO-OpenCV environment. You will use the openvino environment for this tutorial. Now, simply plug in your NCS2 into a blue USB 3.0 port (the RPi 4B has USB 3.0 for maximum speed) and start your environment using either of the following methods: Option A: Use the shell script on my Pre-configured Raspbian .img (the same shell script is described in the “Recommended: Create a shell script for starting your OpenVINO environment” section of my OpenVINO installation guide). From here on, you can activate your OpenVINO environment with one simple command (as opposed to two commands like in the previous step: $ source ~/start_openvino.sh Starting Python 3.7 with OpenCV-OpenVINO 4.1.1 bindings... Option B: One-two punch method. If you don’t mind executing two commands instead of one, you can open a terminal and perform the following: $ workon openvino $ source ~/openvino/bin/setupvars.sh The first command activates our OpenVINO virtual environment.
The second command sets up the Movidius NCS with OpenVINO (and is very important, otherwise your script will error out). Both Option A and Option B assume that you either are using my Pre-configured Raspbian .img or that you followed my OpenVINO installation guide and installed OpenVINO 4.1.1 on your own. Caveats: Some versions of OpenVINO struggle to read .mp4 videos. This is a known bug that PyImageSearch has reported to the Intel team. Our preconfigured .img includes a fix. Abhishek Thanki edited the source code and compiled OpenVINO from source. This blog post is long enough as is, so I cannot include the compile-from-source instructions. If you encounter this issue please encourage Intel to fix the problem, and either (A) compile from source using our customer portal instructions, or (B) pick up a copy of Raspberry Pi for Computer Vision and use the pre-configured .img. The NCS1 does not support the TinyYOLO model provided with this tutorial. This is atypical — usually, the NCS2 and NCS1 are very compatible (with the NCS2 being faster).
We will add to this list if we discover other caveats. Project Structure Go ahead and grab today’s downloadable .zip from the “Downloads” section of today’s tutorial. Let’s inspect our project structure directly in the terminal with the tree command: $ tree --dirsfirst . ├── config │   └── config.json ├── intel │   ├── __init__.py │   ├── tinyyolo.py │   └── yoloparams.py ├── pyimagesearch │   ├── utils │   │   ├── __init__.py │   │   └── conf.py │   └── __init__.py ├── videos │   └── test_video.mp4 ├── yolo │   ├── coco.names │   ├── frozen_darknet_tinyyolov3_model.bin │   ├── frozen_darknet_tinyyolov3_model.mapping │   └── frozen_darknet_tinyyolov3_model.xml └── detect_realtime_tinyyolo_ncs.py 6 directories, 13 files Our TinyYOLO model trained on the COCO dataset is provided via the yolo/ directory. The intel/ directory contains two classes provided by Intel Corporation: TinyYOLOv3: A class for parsing, scaling, and computing Intersection over Union for the TinyYOLO results. TinyYOLOV3Params: A class for building a layer parameters object. We will not review either of the Intel-provided scripts today. You are encouraged to review the files on your own. Our pyimagesearch module contains our Conf class, a utility responsible for parsing config.json. A testing video of people walking through a public place (grabbed from Oxford University‘s site) is provided for you to perform TinyYOLO object detection on.
I encourage you to add your own videos/ as well. The heart of today’s tutorial lies in detect_realtime_tinyyolo_ncs.py. This script loads the TinyYOLOv3 model and performs inference on every frame of a realtime video stream. You may use your PiCamera, USB camera, or a video file residing on disk. The script will calculate the overall frames per second (FPS) benchmark for near real-time TinyYOLOv3 inference on your Raspberry Pi 4B and NCS2. Our Configuration File Figure 3: Intel’s OpenVINO Toolkit is combined with OpenCV allowing for optimized deep learning inference on Intel devices such as the Movidius Neural Compute Stick. We will use OpenVINO for TinyYOLO object detection on the Raspberry Pi and Movidius NCS. Our configuration variables are housed in our config.json file. Go ahead and open it now and let’s inspect the contents: { // path to YOLO architecture definition XML file "xml_path": "yolo/frozen_darknet_tinyyolov3_model.xml", // path to the YOLO weights "bin_path": "yolo/frozen_darknet_tinyyolov3_model.bin", // path to the file containing COCO labels "labels_path": "yolo/coco.names", Line 3 defines our TinyYOLOv3 architecture definition file path while Line 6 specifies the path to the pre-trained TinyYOLOv3 COCO weights. We then provide the path to the COCO dataset label names on Line 9.
Let’s now look at variables used to filter detections: // probability threshold for detections filtering "prob_threshold": 0.2, // intersection over union threshold for filtering overlapping // detections "iou_threshold": 0.15 } Lines 12-16 define the probability and Intersection over Union (IoU) thresholds so that weak detections may be filtered by our driver script. If you are experiencing too many false positive object detections, you should increase these numbers. As a general rule, I like to start my probability threshold at 0.5. Implementing the YOLO and Tiny-YOLO object detection script for the Movidius NCS We are now ready to implement our Tiny-YOLO object detection script! Open up the detect_realtime_tinyyolo_ncs.py file in your directory structure and insert the following code: # import the necessary packages from openvino.inference_engine import IENetwork from openvino.inference_engine import IEPlugin from intel.yoloparams import TinyYOLOV3Params from intel.tinyyolo import TinyYOLOv3 from imutils.video import VideoStream from pyimagesearch.utils import Conf from imutils.video import FPS import numpy as np import argparse import imutils import time import cv2 import os We begin on Lines 2-14 by importing necessary packages; let’s review the most important ones: openvino: The IENetwork and IEPlugin imports allow our Movidius NCS to takeover the TinyYOLOv3 inference. intel: The TinyYOLOv3 and TinyYOLOV3Params classes are provided by Intel Corporation (i.e., not developed by us) and assist with parsing the TinyYOLOv3 results. imutils: The VideoStream class is threaded for speedy camera frame capture. The FPS class provides a framework for calculating frames per second benchmarks. Conf: A class to parse commented JSON files. cv2: OpenVINO’s modified OpenCV is optimized for Intel devices.
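The Conf class imported below is not reviewed in this post (the real implementation ships with the downloads). A minimal sketch of what such a commented-JSON loader could look like, assuming the JSON-Minify package installed earlier, is:

import json
from json_minify import json_minify

class Conf:
    def __init__(self, confPath):
        # strip the // comments, then parse the remaining JSON into a dict
        conf = json.loads(json_minify(open(confPath).read()))
        self.__dict__.update(conf)

    def __getitem__(self, k):
        # allow dictionary-style access, e.g. conf["prob_threshold"]
        return self.__dict__.get(k, None)

The actual class may differ; the idea is simply to remove the // comments before handing the string to json.loads and then expose dictionary-style access to the configuration values.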
With our imports ready to go, now we’ll load our configuration file: # construct the argument parser and parse the arguments ap = argparse.ArgumentParser() ap.add_argument("-c", "--conf", required=True, help="Path to the input configuration file") ap.add_argument("-i", "--input", help="path to the input video file") args = vars(ap.parse_args()) # load the configuration file conf = Conf(args["conf"]) The command line arguments for our Python script include: --conf: The path to the input configuration file that we reviewed in the previous section. --input: An optional path to an input video file. If no input file is specified, the script will use a camera instead. With our configuration path specified, Line 24 loads our configuration file from disk. Now that our configuration resides in memory, we’ll proceed to load our COCO class labels: # load the COCO class labels our YOLO model was trained on and # initialize a list of colors to represent each possible class # label LABELS = open(conf["labels_path"]).read().strip().split("\n") np.random.seed(42) COLORS = np.random.uniform(0, 255, size=(len(LABELS), 3)) Lines 29-31 load our COCO dataset class labels and associate a random color with each label. We will use the colors when it comes to annotating our resulting bounding boxes and class labels. Next, we’ll load our TinyYOLOv3 model onto our Movidius NCS: # initialize the plugin for the specified device plugin = IEPlugin(device="MYRIAD") # read the IR generated by the Model Optimizer (.xml and .bin files) print("[INFO] loading models...") net = IENetwork(model=conf["xml_path"], weights=conf["bin_path"]) # prepare inputs print("[INFO] preparing inputs...") inputBlob = next(iter(net.inputs)) # set the default batch size as 1 and get the number of input blobs, # number of channels, the height, and width of the input blob net.batch_size = 1 (n, c, h, w) = net.inputs[inputBlob].shape Our first interaction with the OpenVINO API is to initialize our NCS’s Myriad processor and load the pre-trained TinyYOLOv3 from disk (Lines 34-38). We then: Prepare our inputBlob (Line 42). Set the batch size to 1 as we will be processing a single frame at a time (Line 46).
Determine the input volume shape dimensions (Line 47). Let’s go ahead and initialize our camera or file video stream: # if a video path was not supplied, grab a reference to the webcam if args["input"] is None: print("[INFO] starting video stream...") # vs = VideoStream(src=0).start() vs = VideoStream(usePiCamera=True).start() time.sleep(2.0) # otherwise, grab a reference to the video file else: print("[INFO] opening video file...") vs = cv2.VideoCapture(os.path.abspath(args["input"])) # loading model to the plugin and start the frames per second # throughput estimator print("[INFO] loading model to the plugin...") execNet = plugin.load(network=net, num_requests=1) fps = FPS().start() We query our --input argument to determine if we will process frames from a camera or video file and set up the appropriate video stream (Lines 50-59). Due to a bug in Intel’s OpenCV-OpenVINO implementation, if you are using a video file you must specify the absolute path in the cv2.VideoCapture function. If you do not, OpenCV-OpenVINO will not be able to process the file. Note: If the --input command line argument is not provided, a camera will be used instead. By default, your PiCamera (Line 53) is selected. If you prefer to use a USB camera, simply comment out Line 53 and uncomment Line 52. Our next interaction with the OpenVINO API is to load TinyYOLOv3 onto our Movidius NCS (Line 64) while Line 65 starts measuring FPS throughput. At this point, we’re done with the setup and we can now begin processing frames and performing TinyYOLOv3 detection: # loop over the frames from the video stream while True: # grab the next frame and handle if we are reading from either # VideoCapture or VideoStream orig = vs.read() orig = orig[1] if args["input"] is not None else orig # if we are viewing a video and we did not grab a frame then we # have reached the end of the video if args["input"] is not None and orig is None: break # resize original frame to have a maximum width of 500 pixel and # input_frame to network size orig = imutils.resize(orig, width=500) frame = cv2.resize(orig, (w, h)) # change data layout from HxWxC to CxHxW frame = frame.transpose((2, 0, 1)) frame = frame.reshape((n, c, h, w)) # start inference and initialize list to collect object detection # results output = execNet.infer({inputBlob: frame}) objects = [] Line 68 begins our realtime TinyYOLOv3 object detection loop. First, we grab and preprocess our frame (Lines 71-86).
Then, we perform object detection inference (Line 90). Line 91 initializes an objects list which we’ll populate next: # loop over the output items for (layerName, outBlob) in output.items(): # create a new object which contains the required tinyYOLOv3 # parameters layerParams = TinyYOLOV3Params(net.layers[layerName].params, outBlob.shape[2]) # parse the output region objects += TinyYOLOv3.parse_yolo_region(outBlob, frame.shape[2:], orig.shape[:-1], layerParams, conf["prob_threshold"]) To populate our objects list, we loop over the output items, create our layerParams, and parse the output region (Lines 94-103). Take note that we are using Intel-provided code to assist with parsing our YOLO output. YOLO and TinyYOLO tend to produce quite a few false-positives. To combat this, next, we’ll devise two weak detection filters: # loop over each of the objects for i in range(len(objects)): # check if the confidence of the detected object is zero, if # it is, then skip this iteration, indicating that the object # should be ignored if objects[i]["confidence"] == 0: continue # loop over remaining objects for j in range(i + 1, len(objects)): # check if the IoU of both the objects exceeds a # threshold, if it does, then set the confidence of that # object to zero if TinyYOLOv3.intersection_over_union(objects[i], objects[j]) > conf["iou_threshold"]: objects[j]["confidence"] = 0 # filter objects by using the probability threshold -- if an # object is below the threshold, ignore it objects = [obj for obj in objects if obj['confidence'] >= \ conf["prob_threshold"]] Line 106 begins a loop over our parsed objects for our first filter: We allow only objects with confidence values not equal to zero (Lines 110 and 111). Then we actually modify the confidence value (setting it to zero) for any object that does not pass our Intersection over Union (IoU) threshold (Lines 114-120). Effectively, overlapping detections with a high IoU are treated as duplicates and ignored. Lines 124 and 125 compactly account for our second filter. Inspecting the code carefully, these two lines: Rebuild (overwrite) our objects list. Effectively, we are filtering out objects that do not meet the probability threshold.
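The Intel-provided intersection_over_union helper is not reproduced in this post, but it computes the standard IoU metric. As a reference, here is a minimal sketch of what such a helper typically looks like, assuming each parsed object is a dictionary with "xmin", "ymin", "xmax", and "ymax" keys (the same keys this script uses later when annotating); Intel's actual implementation may differ in its details:

def intersection_over_union(boxA, boxB):
    # coordinates of the intersection rectangle
    xA = max(boxA["xmin"], boxB["xmin"])
    yA = max(boxA["ymin"], boxB["ymin"])
    xB = min(boxA["xmax"], boxB["xmax"])
    yB = min(boxA["ymax"], boxB["ymax"])

    # area of overlap (zero if the boxes do not intersect at all)
    interArea = max(0, xB - xA) * max(0, yB - yA)

    # area of each box on its own
    areaA = (boxA["xmax"] - boxA["xmin"]) * (boxA["ymax"] - boxA["ymin"])
    areaB = (boxB["xmax"] - boxB["xmin"]) * (boxB["ymax"] - boxB["ymin"])

    # IoU is the overlap divided by the union of the two areas
    union = areaA + areaB - interArea
    return interArea / union if union > 0 else 0.0

Two detections of the same object overlap heavily and therefore produce an IoU close to 1.0, which is why thresholding on IoU works well as a duplicate filter here.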
Now that our objects only contain those which we care about, we’ll annotate our output frame with bounding boxes and class labels: # store the height and width of the original frame (endY, endX) = orig.shape[:-1] # loop through all the remaining objects for obj in objects: # validate the bounding box of the detected object, ensuring # we don't have any invalid bounding boxes if obj["xmax"] > endX or obj["ymax"] > endY or obj["xmin"] \ < 0 or obj["ymin"] < 0: continue # build a label consisting of the predicted class and # associated probability label = "{}: {:.2f}%".format(LABELS[obj["class_id"]], obj["confidence"] * 100) # calculate the y-coordinate used to write the label on the # frame depending on the bounding box coordinate y = obj["ymin"] - 15 if obj["ymin"] - 15 > 15 else \ obj["ymin"] + 15 # draw a bounding box rectangle and label on the frame cv2.rectangle(orig, (obj["xmin"], obj["ymin"]), (obj["xmax"], obj["ymax"]), COLORS[obj["class_id"]], 2) cv2.putText(orig, label, (obj["xmin"], y), cv2.FONT_HERSHEY_SIMPLEX, 1, COLORS[obj["class_id"]], 3) Line 128 extracts the height and width of our original frame. We’ll need these values for annotation. We then loop over our filtered objects. Inside the loop beginning on Line 131, we: Check to see if the detected (x, y)-coordinates fall outside the bounds of the original image dimensions; if so, we discard the detection (Lines 134-136). Build our bounding box label consisting of the object "class_id" and "confidence". Annotate the bounding box rectangle and label using the COLORS (from Line 31) on the output frame (Lines 145-152). If the top of the box is close to the top of the frame, Lines 145 and 146 move the label down by 15 pixels. Finally, we’ll display our frame, calculate statistics, and clean up: # display the current frame to the screen and record if a user # presses a key cv2.imshow("TinyYOLOv3", orig) key = cv2.waitKey(1) & 0xFF # if the `q` key was pressed, break from the loop if key == ord("q"): break # update the FPS counter fps.update() # stop the timer and display FPS information fps.stop() print("[INFO] elapsed time: {:.2f}".format(fps.elapsed())) print("[INFO] approx. FPS: {:.2f}".format(fps.fps())) # stop the video stream and close any open windows1 vs.stop() if args["input"] is None else vs.release() cv2.destroyAllWindows() Wrapping up, we display the output frame and wait for the q key to be pressed at which point we’ll break out of the loop (Lines 156-161). Line 164 updates our FPS calculator.
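A quick note on the FPS object used above: it comes from the imutils.video package, and its internals are not shown in this tutorial. Conceptually it just counts frames between a start and a stop timestamp. The class below is a rough stand-in, written purely for illustration, and is not the actual imutils implementation:

import datetime

class SimpleFPS:
    # simplified frames-per-second counter: count frames between
    # start() and stop(), then divide by the elapsed time
    def __init__(self):
        self._start = None
        self._end = None
        self._numFrames = 0

    def start(self):
        self._start = datetime.datetime.now()
        return self

    def update(self):
        # call this once per processed frame
        self._numFrames += 1

    def stop(self):
        self._end = datetime.datetime.now()

    def elapsed(self):
        return (self._end - self._start).total_seconds()

    def fps(self):
        # average throughput over the entire measured interval
        return self._numFrames / self.elapsed()

Because the counter averages over the whole run, the FPS values reported later include any display and I/O overhead, not just inference time.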
When either (1) the video file has no more frames, or (2) the user presses the q key on either a video or camera stream, the loop exits. At that point, Lines 167-169 print FPS statistics to your terminal. Lines 172 and 173 stop the stream and destroy GUI windows. YOLO and Tiny-YOLO object detection results on the Raspberry Pi and Movidius NCS To utilize Tiny-YOLO on the Raspberry Pi with the Movidius NCS, make sure you have: Followed the instructions in “Configuring your Raspberry Pi + OpenVINO environment” to configure your development environment. Used the “Downloads” section of this tutorial to download the source code and pre-trained model weights. After unarchiving the source code/model weights, you can open up a terminal and execute the following command: $ python detect_realtime_tinyyolo_ncs.py --conf config/config.json \ --input videos/test_video.mp4 [INFO] loading models... [INFO] preparing inputs... [INFO] opening video file... [INFO] loading model to the plugin... [INFO] elapsed time: 199.86 [INFO] approx. FPS: 2.66 Here we have supplied the path to an input video file. Our combination of Raspberry Pi, Movidius NCS, and Tiny-YOLO can apply object detection at the rate of ~2.66 FPS. Video Credit: Oxford University. Let’s now try using a camera rather than a video file, simply by omitting the --input command line argument: $ python detect_realtime_tinyyolo_ncs.py --conf config/config.json [INFO] loading models... [INFO] preparing inputs... [INFO] starting video stream... [INFO] loading model to the plugin... [INFO] elapsed time: 804.18 [INFO] approx.
FPS: 4.28   Notice that processing a camera stream leads to a higher FPS (~4.28 FPS versus 2.66 FPS respectively). So, why is running object detection on a camera stream faster than applying object detection to a video file? The reason is quite simple — it takes the CPU more cycles to decode frames from a video file than it does to read a raw frame from a camera stream. Video files typically apply some level of compression to reduce the resulting video file size. While the output file size is reduced, the frame still needs to be decompressed when read — the CPU is responsible for that operation. On the contrary, the CPU has significantly less work to do when a frame is read from a webcam, USB camera, or RPi camera module, hence why our script runs faster on a camera stream versus a video file. It’s also worth noting that the fastest speed can be obtained using a Raspberry Pi camera module. When using the RPi camera module the onboard display and stream processing GPU (no, not a deep learning GPU) on the RPi handles reading and processing frames so the CPU doesn’t have to be involved. I’ll leave it as an experiment to you, the reader, to compare USB camera vs. RPi camera module throughput rates. Note: All FPS statistics collected on RPi 4B 4GB, NCS2 (connected to USB 3.0) and serving an OpenCV GUI window on the Raspbian desktop which is being displayed over VNC.
If you were to run the algorithm headless (i.e. no GUI), you may be able to achieve 0.5 or more FPS gains because displaying frames to the screen also takes precious CPU cycles. Please keep this in mind as you compare your results. Drawbacks and limitations of Tiny-YOLO While Tiny-YOLO is fast and more than capable of running on the Raspberry Pi, the biggest issue you’ll find with it is accuracy — the smaller model size results in a substantially less accurate model. For reference, Tiny-YOLO achieves only 23.7% mAP on the COCO dataset while the larger YOLO models achieve 51-57% mAP, well over double the accuracy of Tiny-YOLO. When testing Tiny-YOLO I found that it worked well in some images/videos, and in others, it was totally unusable. Don’t be discouraged if Tiny-YOLO isn’t giving you the results that you want, it’s likely that the model just isn’t suited for your particular application. Instead, consider trying a more accurate object detector, including: Larger, more accurate YOLO models Single Shot Detectors (SSDs) Faster R-CNNs RetinaNet For embedded devices such as the Raspberry Pi, I typically always recommend Single Shot Detectors (SSDs) with a MobileNet base. These models are challenging to train (i.e. optimizing hyperparameters), but once you have a solid model, the speed and accuracy tradeoffs are well worth it. If you’re interested in learning more about these object detectors, my book, Deep Learning for Computer Vision with Python, shows you how to train each of these object detectors from scratch and then deploy them for object detection in images and video streams. Inside of Raspberry Pi for Computer Vision you’ll learn how to train MobileNet SSD and InceptionNet SSD object detectors and deploy the models to embedded devices as well.
What's next? We recommend PyImageSearch University. Course information: 84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024 ★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled I strongly believe that if you had the right teacher you could master computer vision and deep learning. Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science? That’s not the case. All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught.
If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery. Inside PyImageSearch University you'll find: ✓ 84 courses on essential computer vision, deep learning, and OpenCV topics ✓ 84 Certificates of Completion ✓ 114+ hours of on-demand video ✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques ✓ Pre-configured Jupyter Notebooks in Google Colab ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!) ✓ Access to centralized code repos for all 536+ tutorials on PyImageSearch ✓ Easy one-click downloads for code, datasets, pre-trained models, etc. ✓ Access on mobile, laptop, desktop, etc. Click here to join PyImageSearch University Summary In this tutorial, you learned how to utilize Tiny-YOLO for near real-time object detection on the Raspberry Pi using the Movidius NCS. Due to Tiny-YOLO’s small size (< 50MB) and fast inference speed (~244 FPS on a GPU), the model is well suited for usage on embedded devices such as the Raspberry Pi, Google Coral, and NVIDIA Jetson Nano. Using both a Raspberry Pi and Movidius NCS, we were capable of obtaining ~4.28 FPS. I would suggest using the code and pre-trained model provided in this tutorial as a template/starting point for your own projects — extend them to fit your own needs.
To download the source code and pre-trained Tiny-YOLO model (and be notified when future tutorials are published here on PyImageSearch), just enter your email address in the form below! Download the Source Code and FREE 17-page Resource Guide Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Download the code!
https://pyimagesearch.com/2020/02/10/opencv-dnn-with-nvidia-gpus-1549-faster-yolo-ssd-and-mask-r-cnn/
Click here to download the source code to this post In this tutorial, you’ll learn how to use OpenCV’s “dnn” module with an NVIDIA GPU for up to 1,549% faster object detection (YOLO and SSD) and instance segmentation (Mask R-CNN). Last week, we discovered how to configure and install OpenCV and its “deep neural network” (dnn) module for inference using an NVIDIA GPU. Using OpenCV’s GPU-optimized dnn module we were able to push a given network’s computation from the CPU to the GPU in only three lines of code: # load the model from disk and set the backend target to a # CUDA-enabled GPU net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"]) net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA) net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA) Today we’re going to discuss complete code examples in more detail — and by the end of the tutorial, you’ll be able to apply: Single Shot Detectors (SSDs) at 65.90 FPS YOLO object detection at 11.87 FPS Mask R-CNN instance segmentation at 11.05 FPS To learn how to use OpenCV’s dnn module and an NVIDIA GPU for faster object detection and instance segmentation, just keep reading! Looking for the source code to this post? Jump Right To The Downloads Section OpenCV ‘dnn’ with NVIDIA GPUs: 1,549% faster YOLO, SSD, and Mask R-CNN Inside this tutorial you’ll learn how to implement Single Shot Detectors, YOLO, and Mask R-CNN using OpenCV’s “deep neural network” (dnn) module and an NVIDIA/CUDA-enabled GPU. Compile OpenCV’s ‘dnn’ module with NVIDIA GPU support Figure 1: Compiling OpenCV’s DNN module with the CUDA backend allows us to perform object detection with YOLO, SSD, and Mask R-CNN deep learning models much faster. If you haven’t yet, make sure you carefully read last week’s tutorial on configuring and installing OpenCV with NVIDIA GPU support for the “dnn” module — following that tutorial is an absolute prerequisite for this tutorial. If you do not install OpenCV with NVIDIA GPU support enabled, OpenCV will still use your CPU for inference; however, if you try to pass the computation to the GPU, OpenCV will error out.
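Before relying on those three lines, it can be worth confirming that your cv2 build can actually see a CUDA device. The short check below is my own addition rather than part of the original scripts; on a CPU-only build (or a machine with no NVIDIA GPU) it simply reports zero devices:

import cv2

# count the CUDA-capable devices visible to this OpenCV build
numGPUs = cv2.cuda.getCudaEnabledDeviceCount()

if numGPUs > 0:
    print("[INFO] found {} CUDA-enabled device(s)".format(numGPUs))
else:
    print("[INFO] no CUDA devices visible -- inference will stay on the CPU")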
Project Structure Before we review the structure of today’s project, grab the code and model files from the “Downloads” section of this blog post. From there, unzip the files and use the tree command in your terminal to inspect the project hierarchy: $ tree --dirsfirst . ├── example_videos │ ├── dog_park.mp4 │ ├── guitar.mp4 │ └── janie.mp4 ├── opencv-ssd-cuda │ ├── MobileNetSSD_deploy.caffemodel │ ├── MobileNetSSD_deploy.prototxt │ └── ssd_object_detection.py ├── opencv-yolo-cuda │ ├── yolo-coco │ │ ├── coco.names │ │ ├── yolov3.cfg │ │ └── yolov3.weights │ └── yolo_object_detection.py ├── opencv-mask-rcnn-cuda │ ├── mask-rcnn-coco │ │ ├── colors.txt │ │ ├── frozen_inference_graph.pb │ │ ├── mask_rcnn_inception_v2_coco_2018_01_28.pbtxt │ │ └── object_detection_classes_coco.txt │ └── mask_rcnn_segmentation.py └── output_videos 7 directories, 15 files In today’s tutorial, we will review three Python scripts: ssd_object_detection.py: Performs Caffe-based MobileNet SSD object detection on 20 COCO classes with CUDA. yolo_object_detection.py: Performs YOLO V3 object detection on 80 COCO classes with CUDA. mask_rcnn_segmentation.py: Performs TensorFlow-based Inception V2 segmentation on 90 COCO classes with CUDA. Each of the model files and class name files are included in their respective folders with the exception of our MobileNet SSD (the class names are hardcoded in a Python list directly in the script). Let’s review the folder names in the order in which we’ll work with them today: opencv-ssd-cuda/ opencv-yolo-cuda/ opencv-mask-rcnn-cuda/ As is evident by all three directory names, we will use OpenCV’s DNN module compiled with CUDA support. If your OpenCV is not compiled with CUDA support for your NVIDIA GPU, then you need to configure your system using the instructions in last week’s tutorial. Implementing Single Shot Detectors (SSDs) using OpenCV’s NVIDIA GPU-Enabled ‘dnn’ module Figure 2: Single Shot Detectors (SSDs) are known for being fast and efficient. In this tutorial, we’ll use Python + OpenCV + CUDA to perform even faster deep learning inference using an NVIDIA GPU.
The first object detector we’ll be looking at are Single Shot Detectors (SSDs), which we originally covered back in 2017: Object detection with deep learning and OpenCV Real-time object detection with deep learning and OpenCV Back then we could only run those SSDs on a CPU; however, today I’ll be showing you how to use your NVIDIA GPU to improve inference speed by up to 211%. Open up the ssd_object_detection.py file in your project directory structure, and insert the following code: # import the necessary packages from imutils.video import FPS import numpy as np import argparse import imutils import cv2 # construct the argument parse and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-p", "--prototxt", required=True, help="path to Caffe 'deploy' prototxt file") ap.add_argument("-m", "--model", required=True, help="path to Caffe pre-trained model") ap.add_argument("-i", "--input", type=str, default="", help="path to (optional) input video file") ap.add_argument("-o", "--output", type=str, default="", help="path to (optional) output video file") ap.add_argument("-d", "--display", type=int, default=1, help="whether or not output frame should be displayed") ap.add_argument("-c", "--confidence", type=float, default=0.2, help="minimum probability to filter weak detections") ap.add_argument("-u", "--use-gpu", type=bool, default=False, help="boolean indicating if CUDA GPU should be used") args = vars(ap.parse_args()) Here we’ve imported our packages. Notice that we do not require any special imports for CUDA. The CUDA capability is built in (via our compilation last week) to our cv2 import on Line 6. Next let’s parse our command line arguments: --prototxt: Our pretrained Caffe MobileNet SSD “deploy” prototxt file path. --model: The path to our pretrained Caffe MobileNet SSD model. --input: The optional path to our input video file. If it is not supplied, your first camera will be used by default. --output: The optional path to our output video file.
--display: The optional boolean flag indicating whether we will display output frames to an OpenCV GUI window. Displaying frames costs CPU cycles, so for a true benchmark, you may wish to turn display off (by default it is on). --confidence: The minimum probability threshold to filter weak detections. By default the value is set to 20%; however, you may override it if you wish. --use-gpu: A boolean indicating whether the CUDA GPU should be used. By default this value is False (i.e., off). If you desire for your NVIDIA CUDA-capable GPU to be used for object detection with OpenCV, you need to pass a 1 value to this argument. Next we’ll specify our classes and associated random colors: # initialize the list of class labels MobileNet SSD was trained to # detect, then generate a set of bounding box colors for each class CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"] COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3)) And then we’ll load our Caffe-based model: # load our serialized model from disk net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"]) # check if we are going to use GPU if args["use_gpu"]: # set CUDA as the preferable backend and target print("[INFO] setting preferable backend and target to CUDA...") net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA) net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA) As Line 35 indicates, we use OpenCV’s dnn module to load our Caffe object detection model.
A check is made to see if NVIDIA CUDA-enabled GPU should be used. From there, we set the backend and target accordingly (Lines 38-42). Let’s go ahead and start processing frames and performing object detection with our GPU (provided the --use-gpu command line argument is turned on, of course): # initialize the video stream and pointer to output video file, then # start the FPS timer print("[INFO] accessing video stream...") vs = cv2.VideoCapture(args["input"] if args["input"] else 0) writer = None fps = FPS().start() # loop over the frames from the video stream while True: # read the next frame from the file (grabbed, frame) = vs.read() # if the frame was not grabbed, then we have reached the end # of the stream if not grabbed: break # resize the frame, grab the frame dimensions, and convert it to # a blob frame = imutils.resize(frame, width=400) (h, w) = frame.shape[:2] blob = cv2.dnn.blobFromImage(frame, 0.007843, (300, 300), 127.5) # pass the blob through the network and obtain the detections and # predictions net.setInput(blob) detections = net.forward() # loop over the detections for i in np.arange(0, detections.shape[2]): # extract the confidence (i.e., probability) associated with # the prediction confidence = detections[0, 0, i, 2] # filter out weak detections by ensuring the `confidence` is # greater than the minimum confidence if confidence > args["confidence"]: # extract the index of the class label from the # `detections`, then compute the (x, y)-coordinates of # the bounding box for the object idx = int(detections[0, 0, i, 1]) box = detections[0, 0, i, 3:7] * np.array([w, h, w, h]) (startX, startY, endX, endY) = box.astype("int") # draw the prediction on the frame label = "{}: {:.2f}%".format(CLASSES[idx], confidence * 100) cv2.rectangle(frame, (startX, startY), (endX, endY), COLORS[idx], 2) y = startY - 15 if startY - 15 > 15 else startY + 15 cv2.putText(frame, label, (startX, y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2) Here we access our video stream. Note that the code is meant to be compatible with both video files and live video streams, which is why I elected not to use my threaded VideoStream class. Looping over frames, we: Read and preprocess incoming frames. Construct a blob from the frame. Detect objects using the Single Shot Detector and our GPU (if the --use-gpu flag was set). Filter objects allowing only high --confidence objects to pass. Annotate bounding boxes, class labels, and probabilities. If you need a refresher on OpenCV drawing basics, be sure to refer to my OpenCV Tutorial: A Guide to Learn OpenCV.
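One detail worth pausing on is the pair of constants passed to cv2.dnn.blobFromImage above: a scale factor of 0.007843 and a mean of 127.5. blobFromImage subtracts the mean and then multiplies by the scale factor, and since 0.007843 is roughly 1/127.5, pixel intensities are mapped from [0, 255] to approximately [-1, 1], the input range this MobileNet SSD expects. A tiny check of that mapping (not part of the script itself):

import numpy as np

# blobFromImage effectively computes (pixel - mean) * scalefactor
pixels = np.array([0.0, 127.5, 255.0])
normalized = (pixels - 127.5) * 0.007843
print(normalized)  # approximately [-1.  0.  1.]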
Finally, we’ll wrap up: # check to see if the output frame should be displayed to our # screen if args["display"] > 0: # show the output frame cv2.imshow("Frame", frame) key = cv2.waitKey(1) & 0xFF # if the `q` key was pressed, break from the loop if key == ord("q"): break # if an output video file path has been supplied and the video # writer has not been initialized, do so now if args["output"] ! = "" and writer is None: # initialize our video writer fourcc = cv2.VideoWriter_fourcc(*"MJPG") writer = cv2.VideoWriter(args["output"], fourcc, 30, (frame.shape[1], frame.shape[0]), True) # if the video writer is not None, write the frame to the output # video file if writer is not None: writer.write(frame) # update the FPS counter fps.update() # stop the timer and display FPS information fps.stop() print("[INFO] elasped time: {:.2f}".format(fps.elapsed())) print("[INFO] approx. FPS: {:.2f}".format(fps.fps())) In the remaining lines, we: Display the annotated video frames if required. Capture key presses if we are displaying. Write annotated output frames to a video file on disk. Update, calculate, and print out FPS statistics. Great job developing your SSD + OpenCV + CUDA script. In the next sections, we’ll analyze results using both our GPU and CPU. Single Shot Detectors: 211% faster object detection with OpenCV’s ‘dnn’ module and an NVIDIA GPU To see our Single Shot Detector in action, make sure you use the “Downloads” section of this tutorial to download (1) the source code and (2) pretrained models compatible with OpenCV’s dnn module. From there, execute the following command to obtain a baseline for our SSD by running it on our CPU: $ python ssd_object_detection.py \ --prototxt MobileNetSSD_deploy.prototxt \ --model MobileNetSSD_deploy.caffemodel \ --input ../example_videos/guitar.mp4 \ --output ../output_videos/ssd_guitar.avi \ --display 0 [INFO] accessing video stream... [INFO] elasped time: 11.69 [INFO] approx.
FPS: 21.13 Here we are obtaining ~21 FPS on our CPU, which is quite good for an object detector! To see the detector really fly, let’s supply the --use-gpu 1 command line argument, instructing OpenCV to push the dnn computation to our NVIDIA Tesla V100 GPU: $ python ssd_object_detection.py \ --prototxt MobileNetSSD_deploy.prototxt \ --model MobileNetSSD_deploy.caffemodel \ --input ../example_videos/guitar.mp4 \ --output ../output_videos/ssd_guitar.avi \ --display 0 \ --use-gpu 1 [INFO] setting preferable backend and target to CUDA... [INFO] accessing video stream... [INFO] elasped time: 3.75 [INFO] approx. FPS: 65.90 Using our NVIDIA GPU, we’re now reaching ~66 FPS which improves our frames-per-second throughput rate by over 211%! And as the video demonstration shows, our SSD is quite accurate. Note: As discussed by this comment by Yashas, the MobileNet SSD could perform poorly because cuDNN does not have optimized kernels for depthwise convolutions on all NVIDA GPUs. If you see your GPU results similar to your CPU results, this is likely the problem. Implementing YOLO object detection for OpenCV’s NVIDIA GPU/CUDA-enabled ‘dnn’ module Figure 3: YOLO is touted as being one of the fastest object detection architectures. In this section, we’ll use Python + OpenCV + CUDA to perform even faster YOLO deep learning inference using an NVIDIA GPU. While YOLO is certainly one of the fastest deep learning-based object detectors, the YOLO model included with OpenCV is anything but — on a CPU, YOLO struggled to break 3 FPS. Therefore, if you intend on using YOLO with OpenCV’s dnn module, you better be using a GPU.
Let’s take a look at how to use the YOLO object detector (yolo_object_detection.py) with OpenCV’s CUDA-enabled dnn module: # import the necessary packages from imutils.video import FPS import numpy as np import argparse import cv2 import os # construct the argument parse and parse the arguments ap = argparse.ArgumentParser() ap.add_argument("-y", "--yolo", required=True, help="base path to YOLO directory") ap.add_argument("-i", "--input", type=str, default="", help="path to (optional) input video file") ap.add_argument("-o", "--output", type=str, default="", help="path to (optional) output video file") ap.add_argument("-d", "--display", type=int, default=1, help="whether or not output frame should be displayed") ap.add_argument("-c", "--confidence", type=float, default=0.5, help="minimum probability to filter weak detections") ap.add_argument("-t", "--threshold", type=float, default=0.3, help="threshold when applying non-maxima suppression") ap.add_argument("-u", "--use-gpu", type=bool, default=0, help="boolean indicating if CUDA GPU should be used") args = vars(ap.parse_args()) Our imports are nearly the same as our previous script with one swap. In this script we don’t need imutils, but we do need Python’s os module for file I/O. Again, the CUDA capability is baked into our custom-compiled OpenCV installation. Let’s review our command line arguments: --yolo: The base path to your pretrained YOLO model directory. --input: The optional path to our input video file. If it is not supplied, your first camera will be used by default. --output: The optional path to our output video file. --display: The optional boolean flag indicating whether we will display output frames to an OpenCV GUI window. Displaying frames costs CPU cycles, so for a true benchmark, you may wish to turn display off (by default it is on). --confidence: The minimum probability threshold to filter weak detections.
By default the value is set to 50%; however you may override it if you wish. --threshold: The Non-Maxima Suppression (NMS) threshold is set to 30% by default. --use-gpu: A boolean indicating whether the CUDA GPU should be used. By default this value is False (i.e., off). If you desire for your NVIDIA CUDA-capable GPU to be used for object detection with OpenCV, you need to pass a 1 value to this argument. Next we’ll load our class labels and assign random colors: # load the COCO class labels our YOLO model was trained on labelsPath = os.path.sep.join([args["yolo"], "coco.names"]) LABELS = open(labelsPath).read().strip().split("\n") # initialize a list of colors to represent each possible class label np.random.seed(42) COLORS = np.random.randint(0, 255, size=(len(LABELS), 3), dtype="uint8") We load class labels from the coco.names file and assign random COLORS. Now we’re ready to load our YOLO model from disk including setting the GPU backend/target if required: # derive the paths to the YOLO weights and model configuration weightsPath = os.path.sep.join([args["yolo"], "yolov3.weights"]) configPath = os.path.sep.join([args["yolo"], "yolov3.cfg"]) # load our YOLO object detector trained on COCO dataset (80 classes) print("[INFO] loading YOLO from disk...") net = cv2.dnn.readNetFromDarknet(configPath, weightsPath) # check if we are going to use GPU if args["use_gpu"]: # set CUDA as the preferable backend and target print("[INFO] setting preferable backend and target to CUDA...") net.setPreferableBackend(cv2.dnn. DNN_BACKEND_CUDA) net.setPreferableTarget(cv2.dnn. DNN_TARGET_CUDA) Lines 36 and 37 grab our pretrained YOLO detector model and weights paths. From there, Lines 41-48 load the model and set the GPU as the backend if the --use-gpu command line flag is set.
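One portability note before moving on: the next code block derives YOLO's output layer names via net.getUnconnectedOutLayers(). In the OpenCV version used for this post, that call returns an array of one-element arrays, which is why you will see i[0] - 1 indexing. More recent OpenCV 4.x releases return a flat array of indices instead. If the listing below raises an indexing error on your build, a version-agnostic variant along these lines (my own workaround, assuming net has already been loaded as above) usually works:

import numpy as np

# handle both the old [[200], [227], ...] and the new [200, 227, ...]
# return shapes of getUnconnectedOutLayers()
layerNames = net.getLayerNames()
outIdxs = np.asarray(net.getUnconnectedOutLayers()).flatten()
ln = [layerNames[i - 1] for i in outIdxs]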
Moving on, we’ll begin performing object detection with YOLO: # determine only the *output* layer names that we need from YOLO ln = net.getLayerNames() ln = [ln[i[0] - 1] for i in net.getUnconnectedOutLayers()] # initialize the width and height of the frames in the video file W = None H = None # initialize the video stream and pointer to output video file, then # start the FPS timer print("[INFO] accessing video stream...") vs = cv2.VideoCapture(args["input"] if args["input"] else 0) writer = None fps = FPS().start() # loop over frames from the video file stream while True: # read the next frame from the file (grabbed, frame) = vs.read() # if the frame was not grabbed, then we have reached the end # of the stream if not grabbed: break # if the frame dimensions are empty, grab them if W is None or H is None: (H, W) = frame.shape[:2] # construct a blob from the input frame and then perform a forward # pass of the YOLO object detector, giving us our bounding boxes # and associated probabilities blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False) net.setInput(blob) layerOutputs = net.forward(ln) Lines 51 and 52 grab only the output layer names from the YOLO model. We need these in order to perform inference with YOLO using OpenCV. We then grab frame dimensions and initialize our video stream + FPS counter. From there, we’ll loop over frames and begin YOLO object detection. Inside the loop, we: Grab a frame. Construct a blob from the frame. Compute predictions (i.e., perform YOLO inference on the blob). Continuing on, we’ll process the results: # initialize our lists of detected bounding boxes, confidences, # and class IDs, respectively boxes = [] confidences = [] classIDs = [] # loop over each of the layer outputs for output in layerOutputs: # loop over each of the detections for detection in output: # extract the class ID and confidence (i.e., probability) # of the current object detection scores = detection[5:] classID = np.argmax(scores) confidence = scores[classID] # filter out weak predictions by ensuring the detected # probability is greater than the minimum probability if confidence > args["confidence"]: # scale the bounding box coordinates back relative to # the size of the image, keeping in mind that YOLO # actually returns the center (x, y)-coordinates of # the bounding box followed by the boxes' width and # height box = detection[0:4] * np.array([W, H, W, H]) (centerX, centerY, width, height) = box.astype("int") # use the center (x, y)-coordinates to derive the top # and and left corner of the bounding box x = int(centerX - (width / 2)) y = int(centerY - (height / 2)) # update our list of bounding box coordinates, # confidences, and class IDs boxes.append([x, y, int(width), int(height)]) confidences.append(float(confidence)) classIDs.append(classID) # apply non-maxima suppression to suppress weak, overlapping # bounding boxes idxs = cv2.dnn. 
NMSBoxes(boxes, confidences, args["confidence"], args["threshold"]) # ensure at least one detection exists if len(idxs) > 0: # loop over the indexes we are keeping for i in idxs.flatten(): # extract the bounding box coordinates (x, y) = (boxes[i][0], boxes[i][1]) (w, h) = (boxes[i][2], boxes[i][3]) # draw a bounding box rectangle and label on the frame color = [int(c) for c in COLORS[classIDs[i]]] cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2) text = "{}: {:.4f}".format(LABELS[classIDs[i]], confidences[i]) cv2.putText(frame, text, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2) Still in our loop, now we will: Initialize results lists. Loop over detections and accumulate outputs while filtering low confidence detections.
Apply Non-Maxima Suppression (NMS). Annotate the output frame with the object’s bounding box, class label, and confidence value. We’ll wrap up our frame processing loop and perform cleanup next: # check to see if the output frame should be displayed to our # screen if args["display"] > 0: # show the output frame cv2.imshow("Frame", frame) key = cv2.waitKey(1) & 0xFF # if the `q` key was pressed, break from the loop if key == ord("q"): break # if an output video file path has been supplied and the video # writer has not been initialized, do so now if args["output"] ! = "" and writer is None: # initialize our video writer fourcc = cv2.VideoWriter_fourcc(*"MJPG") writer = cv2.VideoWriter(args["output"], fourcc, 30, (frame.shape[1], frame.shape[0]), True) # if the video writer is not None, write the frame to the output # video file if writer is not None: writer.write(frame) # update the FPS counter fps.update() # stop the timer and display FPS information fps.stop() print("[INFO] elasped time: {:.2f}".format(fps.elapsed())) print("[INFO] approx. FPS: {:.2f}".format(fps.fps())) The remaining lines handle display, keypresses, printing FPS statistics, and cleanup. While our YOLO + OpenCV + CUDA script was more challenging to implement than the SSD script, you did a great job hanging in there. In the next section, we will analyze results. YOLO: 380% faster object detection with OpenCV’s NVIDIA GPU-enabled ‘dnn’ module We are now ready to test our YOLO object detector. Make sure you have used the “Downloads” section of this tutorial to download the source code and pretrained models compatible with OpenCV’s dnn module. From there, execute the following command to obtain a baseline for YOLO on our CPU: $ python yolo_object_detection.py --yolo yolo-coco \ --input ../example_videos/janie.mp4 \ --output ../output_videos/yolo_janie.avi \ --display 0 [INFO] loading YOLO from disk... [INFO] accessing video stream... [INFO] elasped time: 51.11 [INFO] approx.
FPS: 2.47 On our CPU, YOLO is obtaining a quite pitiful 2.47 FPS. But by pushing the computation to our NVIDIA V100 GPU, we now reach 11.87 FPS, a 380% improvement: $ python yolo_object_detection.py --yolo yolo-coco \ --input ../example_videos/janie.mp4 \ --output ../output_videos/yolo_janie.avi \ --display 0 \ --use-gpu 1 [INFO] loading YOLO from disk... [INFO] setting preferable backend and target to CUDA... [INFO] accessing video stream... [INFO] elasped time: 10.61 [INFO] approx. FPS: 11.87 As I discuss in my original YOLO + OpenCV blog post, I’m not really sure why YOLO obtains such a low frames-per-second throughput rate. YOLO is consistently cited as one of the fastest object detectors. That said, it appears there is something amiss either with the converted model or how OpenCV is handling inference — unfortunately I don’t know what the exact problem is, but I welcome feedback in the comments section. Implementing Mask R-CNN Instance Segmentation for OpenCV’s CUDA-Enabled ‘dnn’ module Figure 4: Mask R-CNNs are both difficult to train and can be taxing on a CPU. In this section, we’ll use Python + OpenCV + CUDA to perform even faster Mask R-CNN deep learning inference using an NVIDIA GPU. ( image source) At this point we’ve looked at SSDs and YOLO, two different types of deep learning-based object detectors — but what about instance segmentation networks such as Mask R-CNN? Can we utilize our NVIDIA GPUs with OpenCV’s CUDA-enabled dnn module to improve our frames-per-second processing rate for Mask R-CNNs? You bet we can!
Open up mask_rcnn_segmentation.py in your directory structure to find out how: # import the necessary packages from imutils.video import FPS import numpy as np import argparse import cv2 import os # construct the argument parse and parse the arguments ap = argparse. ArgumentParser() ap.add_argument("-m", "--mask-rcnn", required=True, help="base path to mask-rcnn directory") ap.add_argument("-i", "--input", type=str, default="", help="path to (optional) input video file") ap.add_argument("-o", "--output", type=str, default="", help="path to (optional) output video file") ap.add_argument("-d", "--display", type=int, default=1, help="whether or not output frame should be displayed") ap.add_argument("-c", "--confidence", type=float, default=0.5, help="minimum probability to filter weak detections") ap.add_argument("-t", "--threshold", type=float, default=0.3, help="minimum threshold for pixel-wise mask segmentation") ap.add_argument("-u", "--use-gpu", type=bool, default=0, help="boolean indicating if CUDA GPU should be used") args = vars(ap.parse_args()) First we handle our imports. They are identical to our previous YOLO script. From there we’ll parse command line arguments: --mask-rcnn: The base path to your pretrained Mask R-CNN model directory. --input: The optional path to our input video file. If it is not supplied, your first camera will be used by default. --output: The optional path to our output video file. --display: The optional boolean flag indicating whether we will display output frames to an OpenCV GUI window. Displaying frames costs CPU cycles, so for a true benchmark, you may wish to turn display off (by default it is on). --confidence: The minimum probability threshold to filter weak detections.
By default the value is set to 50%; however you may override it if you wish. --threshold: Minimum threshold for pixel-wise segmentation. By default this value is set to 30%. --use-gpu: A boolean indicating whether the CUDA GPU should be used. By default this value is False (i.e.; off). If you desire for your NVIDIA CUDA-capable GPU to be used for instance segmentation with OpenCV, you need to pass a 1 value to this argument. With our imports and command line arguments in hand, now we’ll load our class labels and assign random colors: # load the COCO class labels our Mask R-CNN was trained on labelsPath = os.path.sep.join([args["mask_rcnn"], "object_detection_classes_coco.txt"]) LABELS = open(labelsPath).read().strip().split("\n") # initialize a list of colors to represent each possible class label np.random.seed(42) COLORS = np.random.randint(0, 255, size=(len(LABELS), 3), dtype="uint8") From there we’ll load our model. # derive the paths to the Mask R-CNN weights and model configuration weightsPath = os.path.sep.join([args["mask_rcnn"], "frozen_inference_graph.pb"]) configPath = os.path.sep.join([args["mask_rcnn"], "mask_rcnn_inception_v2_coco_2018_01_28.pbtxt"]) # load our Mask R-CNN trained on the COCO dataset (90 classes) # from disk print("[INFO] loading Mask R-CNN from disk...") net = cv2.dnn.readNetFromTensorflow(weightsPath, configPath) # check if we are going to use GPU if args["use_gpu"]: # set CUDA as the preferable backend and target print("[INFO] setting preferable backend and target to CUDA...") net.setPreferableBackend(cv2.dnn. DNN_BACKEND_CUDA) net.setPreferableTarget(cv2.dnn. DNN_TARGET_CUDA) Here we grab the paths to our pretrained Mask R-CNN weights and model.
We then load the model from disk and set the target backend to the GPU if the --use-gpu command line flag is set. When using only your CPU, segmentation will be slow as molasses. If you set the --use-gpu flag, you’ll process your input video or camera stream at warp-speed. Let’s begin processing frames: # initialize the video stream and pointer to output video file, then # start the FPS timer print("[INFO] accessing video stream...") vs = cv2.VideoCapture(args["input"] if args["input"] else 0) writer = None fps = FPS().start() # loop over frames from the video file stream while True: # read the next frame from the file (grabbed, frame) = vs.read() # if the frame was not grabbed, then we have reached the end # of the stream if not grabbed: break # construct a blob from the input frame and then perform a # forward pass of the Mask R-CNN, giving us (1) the bounding box # coordinates of the objects in the image along with (2) the # pixel-wise segmentation for each specific object blob = cv2.dnn.blobFromImage(frame, swapRB=True, crop=False) net.setInput(blob) (boxes, masks) = net.forward(["detection_out_final", "detection_masks"]) After grabbing a frame, we convert it to a blob and perform a forward pass through our network to predict object boxes and masks. And now we’re ready to process our results: # loop over the number of detected objects for i in range(0, boxes.shape[2]): # extract the class ID of the detection along with the # confidence (i.e., probability) associated with the # prediction classID = int(boxes[0, 0, i, 1]) confidence = boxes[0, 0, i, 2] # filter out weak predictions by ensuring the detected # probability is greater than the minimum probability if confidence > args["confidence"]: # scale the bounding box coordinates back relative to the # size of the frame and then compute the width and the # height of the bounding box (H, W) = frame.shape[:2] box = boxes[0, 0, i, 3:7] * np.array([W, H, W, H]) (startX, startY, endX, endY) = box.astype("int") boxW = endX - startX boxH = endY - startY # extract the pixel-wise segmentation for the object, # resize the mask such that it's the same dimensions of # the bounding box, and then finally threshold to create # a *binary* mask mask = masks[i, classID] mask = cv2.resize(mask, (boxW, boxH), interpolation=cv2.INTER_CUBIC) mask = (mask > args["threshold"]) # extract the ROI of the image but *only* extracted the # masked region of the ROI roi = frame[startY:endY, startX:endX][mask] # grab the color used to visualize this particular class, # then create a transparent overlay by blending the color # with the ROI color = COLORS[classID] blended = ((0.4 * color) + (0.6 * roi)).astype("uint8") # store the blended ROI in the original frame frame[startY:endY, startX:endX][mask] = blended # draw the bounding box of the instance on the frame color = [int(c) for c in color] cv2.rectangle(frame, (startX, startY), (endX, endY), color, 2) # draw the predicted label and associated probability of # the instance segmentation on the frame text = "{}: {:.4f}".format(LABELS[classID], confidence) cv2.putText(frame, text, (startX, startY - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2) Looping over the results, we: Filter them based on confidence. Resize and draw/annotate object transparent colored masks. Annotate bounding boxes, labels, and probabilities on the output frame. 
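The transparent overlay in the block above is a simple fixed-alpha blend: 40% class color and 60% of the original pixels. The toy example below (with made-up pixel values, purely for illustration) shows how that weighting tints the masked region; increasing the color weight makes the mask more opaque:

import numpy as np

color = np.array([0, 255, 0], dtype="float")      # pure green class color
pixel = np.array([120, 60, 200], dtype="float")   # one ROI pixel

# 40% color + 60% original pixel, the same weighting used above
blended = ((0.4 * color) + (0.6 * pixel)).astype("uint8")
print(blended)  # [ 72 138 120], a green-tinted version of the pixel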
From there we’ll go ahead and wrap up our loop, calculate FPS stats, and clean up: # check to see if the output frame should be displayed to our # screen if args["display"] > 0: # show the output frame cv2.imshow("Frame", frame) key = cv2.waitKey(1) & 0xFF # if the `q` key was pressed, break from the loop if key == ord("q"): break # if an output video file path has been supplied and the video # writer has not been initialized, do so now if args["output"] ! = "" and writer is None: # initialize our video writer fourcc = cv2.VideoWriter_fourcc(*"MJPG") writer = cv2.VideoWriter(args["output"], fourcc, 30, (frame.shape[1], frame.shape[0]), True) # if the video writer is not None, write the frame to the output # video file if writer is not None: writer.write(frame) # update the FPS counter fps.update() # stop the timer and display FPS information fps.stop() print("[INFO] elasped time: {:.2f}".format(fps.elapsed())) print("[INFO] approx. FPS: {:.2f}".format(fps.fps())) Great job developing your Mask R-CNN + OpenCV + CUDA script!
In the next section, we’ll compare CPU versus GPU results. For more details on the implementation, refer to this blog post on Mask R-CNN with OpenCV. Mask R-CNN: 1,549% faster Instance Segmentation with OpenCV’s ‘dnn’ NVIDIA GPU module Our final test will be to compare Mask R-CNN performance using both a CPU and an NVIDIA GPU. Make sure you have used the “Downloads” section of this tutorial to download the source code and pretrained OpenCV model files. You can then open up a command line and benchmark the Mask R-CNN model on the CPU: $ python mask_rcnn_segmentation.py \ --mask-rcnn mask-rcnn-coco \ --input ../example_videos/dog_park.mp4 \ --output ../output_videos/mask_rcnn_dog_park.avi \ --display 0 [INFO] loading Mask R-CNN from disk... [INFO] accessing video stream... [INFO] elasped time: 830.65 [INFO] approx. FPS: 0.67 The Mask R-CNN architecture is incredibly computationally expensive, so seeing a result of 0.67 FPS on a CPU is to be expected. But what about a GPU? Will a GPU be able to push our Mask R-CNN to near real-time performance? To answer that question, just supply the --use-gpu 1 command line argument to the mask_rcnn_segmentation.py script: $ python mask_rcnn_segmentation.py \ --mask-rcnn mask-rcnn-coco \ --input ../example_videos/dog_park.mp4 \ --output ../output_videos/mask_rcnn_dog_park.avi \ --display 0 \ --use-gpu 1 [INFO] loading Mask R-CNN from disk... [INFO] setting preferable backend and target to CUDA... [INFO] accessing video stream... [INFO] elasped time: 50.21 [INFO] approx. FPS: 11.05 On my NVIDIA Tesla V100, our Mask R-CNN model is now reaching 11.05 FPS, a massive 1,549% improvement!
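If you are wondering where headline numbers like 1,549% come from, they are simply the ratio of GPU FPS to CPU FPS expressed as a percentage improvement. A quick sanity check using the figures reported in this post (small differences from the quoted 211% and 380% are just rounding):

# percentage improvement = (gpu_fps / cpu_fps - 1) * 100
benchmarks = {
    "SSD": (21.13, 65.90),
    "YOLO": (2.47, 11.87),
    "Mask R-CNN": (0.67, 11.05),
}

for name, (cpu, gpu) in benchmarks.items():
    improvement = (gpu / cpu - 1) * 100
    print("{}: {:.0f}% faster on the GPU".format(name, improvement))

# SSD: ~212%, YOLO: ~381%, Mask R-CNN: ~1549%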
Making nearly any model compatible with OpenCV’s ‘dnn’ module run on an NVIDIA GPU If you’ve been paying attention to each of the source code examples in today’s post, you’ll note that each of them follows a particular pattern to push the computation to an NVIDIA CUDA-enabled GPU: Load the trained model from disk. Set OpenCV backend to be CUDA. Push the computation to the CUDA-enabled device. These three points neatly translate into only three lines of code: net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"]) net.setPreferableBackend(cv2.dnn. DNN_BACKEND_CUDA) net.setPreferableTarget(cv2.dnn. DNN_TARGET_CUDA)   In general, you can follow the same recipe when working with OpenCV’s dnn module — if you have a model that is compatible with OpenCV and dnn, then it likely can be used for GPU inference simply by setting CUDA as the backend and target. All you really need to do is swap out the cv2.dnn.readNetFromCaffe function with whatever method you’re using to load the network from disk, including: cv2.dnn.readNet cv2.dnn.readNetFromDarknet cv2.dnn.readNetFromModelOptimizer cv2.dnn.readNetFromONNX cv2.dnn.readNetFromTensorflow cv2.dnn.readNetFromTorch cv2.dnn.readTensorFromONNX You’ll need to refer to the exact framework your model was trained with to confirm whether or not it will be compatible with OpenCV’s dnn library — I hope to cover such a tutorial in the future as well. What's next? We recommend PyImageSearch University. Course information: 84 total classes • 114+ hours of on-demand code walkthrough videos • Last updated: February 2024 ★★★★★ 4.84 (128 Ratings) • 16,000+ Students Enrolled I strongly believe that if you had the right teacher you could master computer vision and deep learning.
Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science? That’s not the case. All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that’s exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught. If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you’ll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery.
Inside PyImageSearch University you'll find: ✓ 84 courses on essential computer vision, deep learning, and OpenCV topics ✓ 84 Certificates of Completion ✓ 114+ hours of on-demand video ✓ Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques ✓ Pre-configured Jupyter Notebooks in Google Colab ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!) ✓ Access to centralized code repos for all 536+ tutorials on PyImageSearch ✓ Easy one-click downloads for code, datasets, pre-trained models, etc. ✓ Access on mobile, laptop, desktop, etc. Click here to join PyImageSearch University Summary In this tutorial you learned how to apply OpenCV’s “deep neural network” (dnn) module for GPU-optimized inference. Up until the release of OpenCV 4.2, OpenCV’s dnn module had extremely limited compute capability — most readers were left to running inference on their CPU, which is certainly less than ideal. However, thanks to Davis King of dlib, Yashas Samaga (who implemented OpenCV’s “dnn” NVIDIA GPU support) and the Google Summer of Code 2019 initiative, OpenCV can now enjoy NVIDIA GPU and CUDA support, making it easier than ever to apply state-of-the-art networks to your own projects. To download the source code to this post, including the pre-trained SSD, YOLO, and Mask R-CNN models, just enter your email address in the form below! Download the Source Code and FREE 17-page Resource Guide Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL! Download the code!
https://pyimagesearch.com/2020/02/17/autoencoders-with-keras-tensorflow-and-deep-learning/
Click here to download the source code to this post In this tutorial, you will learn how to implement and train autoencoders using Keras, TensorFlow, and Deep Learning. Today’s tutorial kicks off a three-part series on the applications of autoencoders: Autoencoders with Keras, TensorFlow, and Deep Learning (today’s tutorial) Denoising autoencoders with Keras and TensorFlow (next week’s tutorial) Anomaly detection with Keras, TensorFlow, and Deep Learning (tutorial two weeks from now) A few weeks ago, I published an introductory guide to anomaly/outlier detection using standard machine learning algorithms. My intention was to immediately follow up that post with a guide on deep learning-based anomaly detection; however, as I started writing the code for the tutorial, I realized I had never covered autoencoders on the PyImageSearch blog! Trying to discuss deep learning-based anomaly detection without prior context on what autoencoders are and how they work would be challenging to follow, comprehend, and digest. Therefore, we’re going to spend the next couple of weeks looking at autoencoder algorithms, including their practical, real-world applications. To learn about the fundamentals of autoencoders using Keras and TensorFlow, just keep reading! Looking for the source code to this post? Jump Right To The Downloads Section Autoencoders with Keras, TensorFlow, and Deep Learning In the first part of this tutorial, we’ll discuss what autoencoders are, including how convolutional autoencoders can be applied to image data. We’ll also discuss the difference between autoencoders and other generative models, such as Generative Adversarial Networks (GANs). From there, I’ll show you how to implement and train a convolutional autoencoder using Keras and TensorFlow.
We’ll then review the results of the training script, including visualizing how the autoencoder did at reconstructing the input data. Finally, I’ll recommend next steps to you if you are interested in learning more about deep learning applied to image datasets. What are autoencoders? Autoencoders are a type of unsupervised neural network (i.e., no class labels or labeled data) that seek to: Accept an input set of data (i.e., the input). Internally compress the input data into a latent-space representation (i.e., a single vector that compresses and quantifies the input). Reconstruct the input data from this latent representation (i.e., the output). Typically, we think of an autoencoder as having two components/subnetworks: Encoder: Accepts the input data and compresses it into the latent-space. If we denote our input data as x and the encoder as E, then the output latent-space representation, s, would be s = E(x). Decoder: The decoder is responsible for accepting the latent-space representation and then reconstructing the original input. If we denote the decoder function as D and the output of the decoder as o, then we can represent the decoder as o = D(s). Using our mathematical notation, the entire training process of the autoencoder can be written as: o = D(E(x)) Figure 1 below demonstrates the basic architecture of an autoencoder: Figure 1: Autoencoders with Keras, TensorFlow, Python, and Deep Learning don’t have to be complex. Breaking the concept down to its parts, you’ll have an input image that is passed through the autoencoder which results in a similar output image. (figure inspired by Nathan Hubens’ article, Deep inside: Autoencoders) Here you can see that: We input a digit to the autoencoder.
The encoder subnetwork creates a latent representation of the digit. This latent representation is substantially smaller (in terms of dimensionality) than the input. The decoder subnetwork then reconstructs the original digit from the latent representation. You can thus think of an autoencoder as a network that reconstructs its input! To train an autoencoder, we input our data, attempt to reconstruct it, and then minimize the mean squared error (or similar loss function). Ideally, the output of the autoencoder will be near identical to the input. An autoencoder reconstructs its input — so what’s the big deal? Figure 2: Autoencoders are useful for compression, dimensionality reduction, denoising, and anomaly/outlier detection. In this tutorial, we’ll use Python and Keras/TensorFlow to train a deep learning autoencoder. (image source) At this point, some of you might be thinking: Adrian, what’s the big deal here? If the goal of an autoencoder is just to reconstruct the input, why even use the network in the first place? If I wanted a copy of my input data, I could literally just copy it with a single function call.
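Before answering that question, it helps to see just how little code the core idea requires. The snippet below is a deliberately tiny, fully-connected sketch of the x -> E(x) -> D(E(x)) pipeline described above; the layer sizes are illustrative only and this is not the convolutional architecture implemented later in this tutorial:

from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

inputDim = 784    # e.g., a flattened 28x28 grayscale digit
latentDim = 16    # the compressed latent-space representation

# encoder: x -> s = E(x)
inputs = Input(shape=(inputDim,))
latent = Dense(latentDim, activation="relu")(inputs)

# decoder: s -> o = D(s)
outputs = Dense(inputDim, activation="sigmoid")(latent)

# the autoencoder is trained end-to-end to reconstruct its input,
# o = D(E(x)), by minimizing the mean squared error between x and o
autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# training would then look like:
# autoencoder.fit(X, X, epochs=25, batch_size=32)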