Citation: TAO Lei, HONG Tao, CHAO Xu. Drone identification and location tracking based on YOLOv3[J]. Chinese Journal of Engineering, 2020, 42(4): 463-468. DOI: 10.13374/j.issn2095-9389.2019.09.10.002

Drone identification and location tracking based on YOLOv3

In recent years, drone intrusion incidents have become increasingly frequent and drone collisions have become common; in densely populated areas such incidents can lead to serious accidents. Drone monitoring is therefore an important research topic in the field of security. Although many drone monitoring schemes exist, most are costly and difficult to deploy. To address this problem, in the context of 5G, this study proposed a method that uses a city's existing surveillance camera network to acquire data and applies a deep learning target detection algorithm to recognize, track, and locate drones. The method uses an improved YOLOv3 (You Only Look Once, version 3) model to detect the presence of a drone in each video frame. YOLOv3 is the third generation of the YOLO series and a one-stage target detection algorithm, which gives it a significant speed advantage over two-stage algorithms. YOLOv3 outputs the position of the drone within the frame; based on this position, a PID (proportional-integral-derivative) controller adjusts the camera so that the drone remains at the center of the view, thereby tracking it. The parameters of multiple cameras are then used to compute the drone's actual coordinates, realizing localization. The dataset was built by photographing drones in flight and by searching for and downloading drone images from the Internet, with the drones in each image annotated using the labelImg tool; the dataset was categorized by the number of rotors. In the experiments, the detection model was trained on this rotor-classified dataset. The trained model achieved 83.24% accuracy and an 88.15% recall rate on the test set, and ran at 20 frames per second on a computer equipped with an NVIDIA GTX 1060, which is sufficient for real-time tracking. Illustrative sketches of the detection, tracking, and positioning steps are given below.
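
The detection step can be illustrated with a minimal sketch. The code below assumes a YOLOv3 model trained on the drone dataset is available as Darknet cfg/weights files (the file names are hypothetical) and uses OpenCV's DNN module for inference rather than the authors' own pipeline; it returns the bounding boxes of drones detected in a single video frame.

```python
import cv2
import numpy as np

# Hypothetical file names; the trained drone model from the paper is not assumed here.
net = cv2.dnn.readNetFromDarknet("yolov3-drone.cfg", "yolov3-drone.weights")
layer_names = net.getUnconnectedOutLayersNames()

def detect_drones(frame, conf_threshold=0.5, nms_threshold=0.4):
    """Return (x, y, w, h, confidence) boxes for drones found in one video frame."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(layer_names)

    boxes, scores = [], []
    for output in outputs:
        for det in output:                 # det = [cx, cy, bw, bh, objectness, class scores...]
            conf = float(det[4] * det[5:].max())
            if conf < conf_threshold:
                continue
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            scores.append(conf)

    # Non-maximum suppression removes overlapping duplicate detections.
    keep = cv2.dnn.NMSBoxes(boxes, scores, conf_threshold, nms_threshold)
    return [(*boxes[i], scores[i]) for i in np.array(keep).flatten()]
```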
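The tracking step converts the offset of the detected drone from the frame center into camera pan/tilt corrections. The sketch below is a generic per-axis PID controller; the gains, time step, and the pan-tilt command interface are assumptions for illustration, not values from the paper.

```python
class PID:
    """Per-axis PID controller (gains here are illustrative, not the paper's values)."""
    def __init__(self, kp=0.02, ki=0.001, kd=0.01):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


pan_pid, tilt_pid = PID(), PID()

def track(frame_size, box, dt):
    """Turn the offset of the detected drone from the frame center into pan/tilt commands."""
    frame_w, frame_h = frame_size
    x, y, w, h, _ = box
    err_x = (x + w / 2) - frame_w / 2      # positive: drone is right of center
    err_y = (y + h / 2) - frame_h / 2      # positive: drone is below center
    # The returned corrections would be sent to the pan-tilt unit; that interface is camera-specific.
    return pan_pid.update(err_x, dt), tilt_pid.update(err_y, dt)
```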
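Given the drone's pixel coordinates in two calibrated cameras, its world coordinates can be recovered by triangulation. The sketch below uses standard two-view triangulation with 3×4 projection matrices; the paper's exact multi-camera geometry and calibration procedure are not given here, so this is only one plausible realization of the positioning step.

```python
import numpy as np
import cv2

def locate_drone(P1, P2, pixel1, pixel2):
    """Triangulate the drone's 3-D position from two calibrated cameras.

    P1, P2     : 3x4 projection matrices (intrinsics @ [R|t]) of the two cameras.
    pixel1/2   : (u, v) image coordinates of the drone center reported by YOLOv3.
    Returns the drone's world coordinates (x, y, z).
    """
    pts1 = np.array(pixel1, dtype=np.float64).reshape(2, 1)
    pts2 = np.array(pixel2, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4x1 result
    return (X_h[:3] / X_h[3]).flatten()
```

With more than two cameras, the same idea extends to a least-squares intersection of all viewing rays, which reduces the effect of detection noise in any single view.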