How to use OpenCV to implement multi-object tracking in Python


1 Background

Most beginners in computer vision and machine learning start with object detection. If you are a beginner, you may wonder why we need object tracking at all. Can't we just detect the objects in every frame?

Let's look at a few reasons why tracking is useful:

  • First, when multiple objects (say, people) are detected in a video frame, tracking helps establish the identity of each object across frames.

  • Second, in some cases object detection may fail while the object can still be tracked, because tracking takes the object's location and appearance in the previous frame into account.

  • Third, some tracking algorithms are very fast because they perform a local search instead of a global search. We can therefore obtain very high performance by running object detection only every n-th frame and tracking the objects in the intermediate frames.

So why not just track an object indefinitely after the first detection? A tracking algorithm can sometimes lose the object it is tracking; for example, when the object moves too far between frames, the tracker may fail to keep up. It is therefore common to run object detection again after tracking for a while. A minimal sketch of this detect-then-track pattern follows.
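To make the detect-every-n-frames idea concrete, here is a minimal Python sketch of the detect-then-track pattern. It assumes a hypothetical detect_objects(frame) function standing in for whatever detector you use (it is not part of OpenCV), picks the KCF tracker as an arbitrary choice, and looks the constructor up under cv2.legacy when the installed OpenCV build requires it:

import cv2

DETECT_EVERY_N = 30  # re-run detection every n frames (tuning value, not from the article)

def detect_objects(frame):
    # Hypothetical placeholder for any object detector you already have.
    # It should return a list of (x, y, w, h) bounding boxes.
    return []

def create_tracker():
    # Constructor location depends on the OpenCV build (newer contrib builds use cv2.legacy)
    module = cv2.legacy if hasattr(cv2, 'legacy') else cv2
    return module.TrackerKCF_create()

cap = cv2.VideoCapture('video/run.mp4')
trackers = []
frame_id = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if frame_id % DETECT_EVERY_N == 0:
        # (Re-)detect and re-initialize one tracker per detected object
        trackers = []
        for box in detect_objects(frame):
            t = create_tracker()
            t.init(frame, box)
            trackers.append(t)
    else:
        # Track in the intermediate frames; a local search is much cheaper than detection
        for t in trackers:
            ok, box = t.update(frame)
    frame_id += 1
cap.release()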

In this tutorial, we will focus only on the tracking part. The objects we want to track will be specified by drawing bounding boxes around them.

2 Multi-object tracking with MultiTracker

OpenCV's MultiTracker class provides an implementation of multi-object tracking. It is, however, only a rudimentary implementation: it simply runs one tracker per object and does not perform any optimization across the tracked objects.

2.1 Creating a single-object tracker

A multi-object tracker is simply a collection of single-object trackers. We start by defining a function that takes a tracker type as input and creates the corresponding tracker object.

OpenCV has 8 different tracker types: BOOSTING, MIL, KCF, TLD, MEDIANFLOW, GOTURN, MOSSE, CSRT. This article does not use the GOTURN tracker. In general, we pass the name of the tracker class, return a single-tracker object, and then build the multi-tracker on top of it.

C++ code:

vector<string> trackerTypes = {"BOOSTING", "MIL", "KCF", "TLD", "MEDIANFLOW", "GOTURN", "MOSSE", "CSRT"};
/**
 * @brief Create a Tracker By Name object (initialize a tracker of the given type)
 *
 * @param trackerType
 * @return Ptr<Tracker>
 */
Ptr<Tracker> createTrackerByName(string trackerType)
{
    Ptr<Tracker> tracker;
    if (trackerType == trackerTypes[0])
        tracker = TrackerBoosting::create();
    else if (trackerType == trackerTypes[1])
        tracker = TrackerMIL::create();
    else if (trackerType == trackerTypes[2])
        tracker = TrackerKCF::create();
    else if (trackerType == trackerTypes[3])
        tracker = TrackerTLD::create();
    else if (trackerType == trackerTypes[4])
        tracker = TrackerMedianFlow::create();
    else if (trackerType == trackerTypes[5])
        tracker = TrackerGOTURN::create();
    else if (trackerType == trackerTypes[6])
        tracker = TrackerMOSSE::create();
    else if (trackerType == trackerTypes[7])
        tracker = TrackerCSRT::create();
    else
    {
        cout << "Incorrect tracker name" << endl;
        cout << "Available trackers are: " << endl;
        for (vector<string>::iterator it = trackerTypes.begin(); it != trackerTypes.end(); ++it)
        {
            std::cout << " " << *it << endl;
        }
    }
    return tracker;
}

Python code:

from __future__ import print_function
import sys
import cv2
from random import randint
trackerTypes = ['BOOSTING', 'MIL', 'KCF','TLD', 'MEDIANFLOW', 'GOTURN', 'MOSSE', 'CSRT']
def createTrackerByName(trackerType):
  # Create a tracker based on tracker name
  if trackerType == trackerTypes[0]:
    tracker = cv2.TrackerBoosting_create()
  elif trackerType == trackerTypes[1]:
    tracker = cv2.TrackerMIL_create()
  elif trackerType == trackerTypes[2]:
    tracker = cv2.TrackerKCF_create()
  elif trackerType == trackerTypes[3]:
    tracker = cv2.TrackerTLD_create()
  elif trackerType == trackerTypes[4]:
    tracker = cv2.TrackerMedianFlow_create()
  elif trackerType == trackerTypes[5]:
    tracker = cv2.TrackerGOTURN_create()
  elif trackerType == trackerTypes[6]:
    tracker = cv2.TrackerMOSSE_create()
  elif trackerType == trackerTypes[7]:
    tracker = cv2.TrackerCSRT_create()
  else:
    tracker = None
    print('Incorrect tracker name')
    print('Available trackers are:')
    for t in trackerTypes:
      print(t)
  return tracker
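Note that the constructor names above match opencv-contrib-python builds up to roughly 4.5.0; in newer contrib builds the classic trackers were moved into the cv2.legacy submodule. If your build is newer, a small version-tolerant helper along these lines (a sketch, not the article's original code) can be used instead:

def make_tracker(create_name):
  # create_name is a constructor name such as 'TrackerCSRT_create' or 'TrackerKCF_create'.
  # Newer contrib builds expose these under cv2.legacy; older builds expose them at the top level.
  module = cv2.legacy if hasattr(cv2, 'legacy') else cv2
  return getattr(module, create_name)()

# Example: tracker = make_tracker('TrackerMOSSE_create')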

2.2 Reading the first frame of the video

The multi-object tracker needs two inputs: a video frame and the locations (bounding boxes) of all the objects we want to track.

Given this information, the tracker follows the location of these objects in all subsequent frames. In the code below, we first load the video using the VideoCapture class and read the first frame, which will later be used to initialize the MultiTracker.

C++ code:

    // Set tracker type. Change this to try different trackers.
    string trackerType = trackerTypes[7];
    // set default values for tracking algorithm and video
    string videoPath = "video/run.mp4";
    // Bounding boxes used to initialize the MultiTracker
    vector<Rect> bboxes;
    // create a video capture object to read videos
    cv::VideoCapture cap(videoPath);
    Mat frame;
    // quit if unable to read video file
    if (!cap.isOpened())
    {
        cout << "Error opening video file " << videoPath << endl;
        return -1;
    }
    // read first frame
    cap >> frame;

Python code:

# Set video to load
videoPath = "video/run.mp4"
# Create a video capture object to read videos
cap = cv2.VideoCapture(videoPath)
# Read first frame
success, frame = cap.read()
# quit if unable to read the video file
if not success:
  print('Failed to read video')
  sys.exit(1)

2.3 Selecting the objects to track in the first frame

Next, we need to locate the objects we want to track in the first frame. OpenCV provides a function called selectROIs that pops up a GUI for selecting bounding boxes (also called regions of interest, or ROIs). In the C++ version, selectROIs lets you grab multiple bounding boxes at once; the Python version shown here uses selectROI, which returns a single bounding box, so we need a loop to collect multiple boxes. For each object, we also pick a random color for drawing its bounding box. The selection workflow is: draw a box on the image, press ENTER to confirm it and draw the next one, and press ESC to finish selecting and start the program.

C++ code:

// Get bounding boxes for first frame
// selectROI's default behaviour is to draw box starting from the center
// when fromCenter is set to false, you can draw box starting from top left corner
bool showCrosshair = true;
bool fromCenter = false;
cout << "\n==========================================================\n";
cout << "OpenCV says press c to cancel objects selection process" << endl;
cout << "It doesn't work. Press Escape to exit selection process" << endl;
cout << "\n==========================================================\n";
cv::selectROIs("MultiTracker", frame, bboxes, showCrosshair, fromCenter);

// quit if there are no objects to track
if(bboxes.size() < 1)
  return 0;

vector<Scalar> colors;
getRandomColors(colors, bboxes.size());
// Fill the vector with random colors
void getRandomColors(vector<Scalar>& colors, int numColors)
{
  RNG rng(0);
  for(int i=0; i < numColors; i++)
    colors.push_back(Scalar(rng.uniform(0,255), rng.uniform(0, 255), rng.uniform(0, 255)));
}

Python code:

## Select boxes
bboxes = []
colors = []
# OpenCV's selectROI function doesn't work for selecting multiple objects in Python
# So we will call this function in a loop till we are done selecting all objects
while True:
  # draw bounding boxes over objects
  # selectROI's default behaviour is to draw box starting from the center
  # when fromCenter is set to false, you can draw box starting from top left corner
  bbox = cv2.selectROI('MultiTracker', frame)
  bboxes.append(bbox)
  colors.append((randint(0, 255), randint(0, 255), randint(0, 255)))
  print("Press q to quit selecting boxes and start tracking")
  print("Press any other key to select next object")
  k = cv2.waitKey(0) & 0xFF
  if (k == 113):  # q is pressed
    break
print('Selected bounding boxes {}'.format(bboxes))
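Depending on your OpenCV version, the Python bindings may also expose cv2.selectROIs (plural), which lets you draw all the boxes in a single call: draw a box, press SPACE or ENTER to confirm it, and press ESC when you are done. If your build supports it, the loop above can be replaced by a sketch like this:

# Alternative selection using selectROIs, if the installed build exposes it
rois = cv2.selectROIs('MultiTracker', frame, showCrosshair=True, fromCenter=False)
bboxes = [tuple(map(int, r)) for r in rois]   # each r is (x, y, w, h)
colors = [(randint(0, 255), randint(0, 255), randint(0, 255)) for _ in bboxes]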

2.4 Initializing the MultiTracker

So far, we have read the first frame and obtained bounding boxes around the objects. That is all the information we need to initialize the multi-object tracker. We first create a MultiTracker object and add one single-object tracker for each object we want to track. In this example we use the CSRT single-object tracker, but you can try other tracker types by changing the trackerType variable below to one of the 8 tracker types mentioned at the beginning of this article. The CSRT tracker is not the fastest, but in our experience it produces the best results in many of the cases we tried.

You can also mix different tracker types inside the same MultiTracker, but of course that rarely makes sense, and only a few of the trackers are really worth using: CSRT gives the highest accuracy, KCF offers the best overall balance of speed and accuracy, and MOSSE is the fastest.

The MultiTracker class is just a wrapper around these single-object trackers. As we saw in the previous section, a single-object tracker is initialized with the first frame and a bounding box indicating the location of the object we want to track; the MultiTracker passes this information on to the single-object trackers it wraps internally.

C++ code:

    // Create multitracker
    Ptr<MultiTracker> multiTracker = cv::MultiTracker::create();

    // initialize multitracker
    for (int i = 0; i < bboxes.size(); i++)
    {
        multiTracker->add(createTrackerByName(trackerType), frame, Rect2d(bboxes[i]));
    }

Python code:

# Specify the tracker type
trackerType = "CSRT"
# Create MultiTracker object
multiTracker = cv2.MultiTracker_create()
# Initialize MultiTracker
for bbox in bboxes:
  multiTracker.add(createTrackerByName(trackerType), frame, bbox)
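As with the single-object trackers, cv2.MultiTracker_create was moved to cv2.legacy.MultiTracker_create in newer opencv-contrib-python releases. A version-tolerant sketch (an assumption about your installed build, not part of the original article):

# Pick whichever MultiTracker constructor the installed build provides
if hasattr(cv2, 'legacy'):
  multiTracker = cv2.legacy.MultiTracker_create()
else:
  multiTracker = cv2.MultiTracker_create()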

2.5 Updating the MultiTracker and displaying the results

Finally, our MultiTracker is ready and we can track multiple objects in the new frames. We use the MultiTracker class's update method to locate the objects in each new frame, and each tracked object's bounding box is drawn in a different color.

The update method returns true or false: it returns false when tracking fails. The C++ code below checks this flag; the Python code does not. Note that even when update returns false it still returns updated bounding boxes, which may be stale, so when it returns false it is advisable to stop tracking (or re-run detection); a sketch of one way to handle this is shown after the Python code below.

C++ code:

    while (cap.isOpened())
    {
        // get frame from the video, frame by frame
        cap >> frame;

        // stop the program if reached end of video
        if (frame.empty())
        {
            break;
        }
        // update the tracking result with the new frame
        bool ok = multiTracker->update(frame);
        if (ok == true)
        {
            cout << "Tracking success" << endl;
        }
        else
        {
            cout << "Tracking failure" << endl;
        }
        // draw tracked objects
        for (unsigned i = 0; i < multiTracker->getObjects().size(); i++)
        {
            rectangle(frame, multiTracker->getObjects()[i], colors[i], 2, 1);
        }
        // show frame
        imshow("MultiTracker", frame);
        // quit on ESC key
        if (waitKey(1) == 27)
        {
            break;
        }
    }

Python code:

# Process video and track objects
while cap.isOpened():
  success, frame = cap.read()
  if not success:
    break
  # get updated location of objects in subsequent frames
  success, boxes = multiTracker.update(frame)
  # draw tracked objects
  for i, newbox in enumerate(boxes):
    p1 = (int(newbox[0]), int(newbox[1]))
    p2 = (int(newbox[0] + newbox[2]), int(newbox[1] + newbox[3]))
    cv2.rectangle(frame, p1, p2, colors[i], 2, 1)
  # show frame
  cv2.imshow('MultiTracker', frame)
  # quit on ESC button
  if cv2.waitKey(1) & 0xFF == 27:  # Esc pressed
    break
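As noted above, the Python loop ignores the success flag returned by update. Here is a minimal variant of the loop that reacts to a failure, on the assumption that you want to stop (or hand the frame back to a detector) when a tracker loses its object:

# Variant of the tracking loop that checks the success flag (a sketch, not the original loop)
while cap.isOpened():
  success, frame = cap.read()
  if not success:
    break
  ok, boxes = multiTracker.update(frame)
  if not ok:
    # At least one tracker lost its object; the returned boxes may be stale,
    # so stop here or re-run your detector and re-initialize the MultiTracker.
    print('Tracking failure detected')
    break
  for i, newbox in enumerate(boxes):
    p1 = (int(newbox[0]), int(newbox[1]))
    p2 = (int(newbox[0] + newbox[2]), int(newbox[1] + newbox[3]))
    cv2.rectangle(frame, p1, p2, colors[i], 2, 1)
  cv2.imshow('MultiTracker', frame)
  if cv2.waitKey(1) & 0xFF == 27:  # ESC pressed
    break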

3 Results and code

In terms of how it works, multi-object tracking simply creates multiple single-object trackers, one per object. If you want to combine it with object detection and set the object boxes yourself, you can just push Rect objects, as in the commented-out C++ lines below (a Python sketch follows them).

// Set the object boxes yourself instead of using selectROIs
// x, y, width, height
//bboxes.push_back(Rect(388, 155, 30, 40));
//bboxes.push_back(Rect(492, 205, 50, 80));
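The Python equivalent is simply a list of (x, y, width, height) tuples; the coordinates below mirror the commented-out C++ example and would normally come from your detector:

# Set the boxes yourself instead of calling selectROI (x, y, width, height)
bboxes = [(388, 155, 30, 40), (492, 205, 50, 80)]
colors = [(randint(0, 255), randint(0, 255), randint(0, 255)) for _ in bboxes]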

Overall, the accuracy is about the same as with a single-object tracker, while the total runtime is roughly 5 to 7 times higher, varying between algorithms.

The complete code is given below:

C++:

// Opencv_MultiTracker.cpp : This file contains the "main" function. Program execution begins and ends here.
//
#include "pch.h"
// OpenCV and tracking-contrib headers (assumed; the original include targets were not preserved)
#include <opencv2/opencv.hpp>
#include <opencv2/tracking.hpp>
#include <iostream>
using namespace cv;
using namespace std;
vector<string> trackerTypes = {"BOOSTING", "MIL", "KCF", "TLD", "MEDIANFLOW", "GOTURN", "MOSSE", "CSRT"};
/**
 * @brief Create a Tracker By Name object (initialize a tracker of the given type)
 *
 * @param trackerType
 * @return Ptr<Tracker>
 */
Ptr<Tracker> createTrackerByName(string trackerType)
{
    Ptr<Tracker> tracker;
    if (trackerType == trackerTypes[0])
        tracker = TrackerBoosting::create();
    else if (trackerType == trackerTypes[1])
        tracker = TrackerMIL::create();
    else if (trackerType == trackerTypes[2])
        tracker = TrackerKCF::create();
    else if (trackerType == trackerTypes[3])
        tracker = TrackerTLD::create();
    else if (trackerType == trackerTypes[4])
        tracker = TrackerMedianFlow::create();
    else if (trackerType == trackerTypes[5])
        tracker = TrackerGOTURN::create();
    else if (trackerType == trackerTypes[6])
        tracker = TrackerMOSSE::create();
    else if (trackerType == trackerTypes[7])
        tracker = TrackerCSRT::create();
    else
    {
        cout << "Incorrect tracker name" << endl;
        cout << "Available trackers are: " << endl;
        for (vector<string>::iterator it = trackerTypes.begin(); it != trackerTypes.end(); ++it)
        {
            std::cout << " " << *it << endl;
        }
    }
    return tracker;
}

/**
 * @brief Get the Random Colors object (fill the vector with random colors)
 *
 * @param colors
 * @param numColors
 */
void getRandomColors(vector<Scalar> &colors, int numColors)
{
    RNG rng(0);
    for (int i = 0; i < numColors; i++)
    {
        colors.push_back(Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255)));
    }
}
int main(int argc, char *argv[])
{
    // Set tracker type. Change this to try different trackers.
    string trackerType = trackerTypes[7];
    // set default values for tracking algorithm and video
    string videoPath = "video/run.mp4";

    // Bounding boxes used to initialize the MultiTracker
    vector<Rect> bboxes;

    // create a video capture object to read videos
    cv::VideoCapture cap(videoPath);
    Mat frame;

    // quit if unable to read video file
    if (!cap.isOpened())
    {
        cout << "Error opening video file " << videoPath << endl;
        return -1;
    }
    // read first frame
    cap >> frame;
    // draw bounding boxes over objects in the first frame
    /*
        Draw a box on the image, press ENTER to confirm it and draw the next one.
        Press ESC to finish selecting and start the program.
    */
    cout << "\n==========================================================\n";
    cout << "OpenCV says press c to cancel objects selection process" << endl;
    cout << "It doesn't work. Press Esc to exit selection process" << endl;
    cout << "\n==========================================================\n";
    cv::selectROIs("MultiTracker", frame, bboxes, false);

    // To set the object boxes yourself instead of selecting them interactively:
    // x, y, width, height
    //bboxes.push_back(Rect(388, 155, 30, 40));
    //bboxes.push_back(Rect(492, 205, 50, 80));
    // quit if there are no objects to track
    if (bboxes.size() < 1)
    {
        return 0;
    }
    vector<Scalar> colors;
    // assign a random color to each box
    getRandomColors(colors, bboxes.size());
    // Create multitracker
    Ptr<MultiTracker> multiTracker = cv::MultiTracker::create();
    // initialize multitracker
    for (int i = 0; i < bboxes.size(); i++)
    {
        multiTracker->add(createTrackerByName(trackerType), frame, Rect2d(bboxes[i]));
    }

    // process video and track objects
    cout << "\n==========================================================\n";
    cout << "Started tracking, press ESC to quit." << endl;
    while (cap.isOpened())
    {
        // get frame from the video, frame by frame
        cap >> frame;
        // stop the program if reached end of video
        if (frame.empty())
        {
            break;
        }
        // update the tracking result with the new frame
        bool ok = multiTracker->update(frame);
        if (ok == true)
        {
            cout << "Tracking success" << endl;
        }
        else
        {
            cout << "Tracking failure" << endl;
        }
        // draw tracked objects
        for (unsigned i = 0; i < multiTracker->getObjects().size(); i++)
        {
            rectangle(frame, multiTracker->getObjects()[i], colors[i], 2, 1);
        }

        // show frame
        imshow("MultiTracker", frame);

        // quit on ESC key
        if (waitKey(1) == 27)
        {
            break;
        }
    }
    waitKey(0);
    return 0;
}

Python:

from __future__ import print_function
import sys
import cv2
from random import randint
trackerTypes = ['BOOSTING', 'MIL', 'KCF','TLD', 'MEDIANFLOW', 'GOTURN', 'MOSSE', 'CSRT']
def createTrackerByName(trackerType):
  # Create a tracker based on tracker name
  if trackerType == trackerTypes[0]:
    tracker = cv2.TrackerBoosting_create()
  elif trackerType == trackerTypes[1]:
    tracker = cv2.TrackerMIL_create()
  elif trackerType == trackerTypes[2]:
    tracker = cv2.TrackerKCF_create()
  elif trackerType == trackerTypes[3]:
    tracker = cv2.TrackerTLD_create()
  elif trackerType == trackerTypes[4]:
    tracker = cv2.TrackerMedianFlow_create()
  elif trackerType == trackerTypes[5]:
    tracker = cv2.TrackerGOTURN_create()
  elif trackerType == trackerTypes[6]:
    tracker = cv2.TrackerMOSSE_create()
  elif trackerType == trackerTypes[7]:
    tracker = cv2.TrackerCSRT_create()
  else:
    tracker = None
    print('Incorrect tracker name')
    print('Available trackers are:')
    for t in trackerTypes:
      print(t)

  return tracker

if __name__ == '__main__':

  print("Default tracking algoritm is CSRT \n"
        "Available tracking algorithms are:\n")
  for t in trackerTypes:
      print(t)

  trackerType = "CSRT"

  # Set video to load
  videoPath = "video/run.mp4"

  # Create a video capture object to read videos
  cap = cv2.VideoCapture(videoPath)

  # Read first frame
  success, frame = cap.read()
  # quit if unable to read the video file
  if not success:
    print('Failed to read video')
    sys.exit(1)

  ## Select boxes
  bboxes = []
  colors = []

  # OpenCV's selectROI function doesn't work for selecting multiple objects in Python
  # So we will call this function in a loop till we are done selecting all objects
  while True:
    # draw bounding boxes over objects
    # selectROI's default behaviour is to draw box starting from the center
    # when fromCenter is set to false, you can draw box starting from top left corner
    bbox = cv2.selectROI('MultiTracker', frame)
    bboxes.append(bbox)
    colors.append((randint(64, 255), randint(64, 255), randint(64, 255)))
    print("Press q to quit selecting boxes and start tracking")
    print("Press any other key to select next object")
    k = cv2.waitKey(0) & 0xFF
    if (k == 113):  # q is pressed
      break

  print('Selected bounding boxes {}'.format(bboxes))

  ## Initialize MultiTracker
  # There are two ways you can initialize multitracker
  # 1. tracker = cv2.MultiTracker("CSRT")
  # All the trackers added to this multitracker
  # will use CSRT algorithm as default
  # 2. tracker = cv2.MultiTracker()
  # No default algorithm specified

  # Initialize MultiTracker with tracking algo
  # Specify tracker type

  # Create MultiTracker object
  multiTracker = cv2.MultiTracker_create()

  # Initialize MultiTracker
  for bbox in bboxes:
    multiTracker.add(createTrackerByName(trackerType), frame, bbox)

  # Process video and track objects
  while cap.isOpened():
    success, frame = cap.read()
    if not success:
      break
    # get updated location of objects in subsequent frames
    success, boxes = multiTracker.update(frame)

    # draw tracked objects
    for i, newbox in enumerate(boxes):
      p1 = (int(newbox[0]), int(newbox[1]))
      p2 = (int(newbox[0] + newbox[2]), int(newbox[1] + newbox[3]))
      cv2.rectangle(frame, p1, p2, colors[i], 2, 1)
    # show frame
    cv2.imshow('MultiTracker', frame)

    # quit on ESC button
    if cv2.waitKey(1) & 0xFF == 27:  # Esc pressed
      break
