The Decision Review System (DRS) is quite ubiquitous in the sport of cricket these days. Teams are starting to rely heavily on the DRS to overturn tight umpiring decisions and that, quite often, can turn the match in their favor.
This ball tracking concept, part of the DRS, is now an all too familiar sight for cricket fans:
This got me thinking – could I build my own ball tracking system using my knowledge of deep learning and Python?
I’m a huge cricket fan and I’m constantly looking for different use cases where I can apply machine learning or deep learning algorithms. The idea of building a ball tracking system came to me when I was working on my previous project focused on generating insights from cricket commentary data.
Cricket teams and franchises use this idea of ball-tracking to understand the weak zones of opposition players as well. Which position is a particular batsman vulnerable in? Where does the bowler consistently pitch in the death overs?
Ball tracking systems help teams analyze and understand these questions. Here is one such example from a recent cricket match:
In this article, we will walk through the various aspects of a ball tracking system and then build one in Python using the example of cricket. This promises to be quite a unique learning experience!
Note: If you’re completely new to the world of deep learning and computer vision, I suggest checking out the below resources:
Let’s quickly familiarize ourselves with two popular terms in Computer Vision prior to a discussion about the Ball Tracking System – Object Detection and Object Tracking.
Object Detection is one of the most fascinating concepts in computer vision. It plays a far-reaching role in domains such as defense, space, and sports. Here, I have listed a few interesting use cases of Object Detection in Defense and Space:
But what is object detection?
Image Classification + Localization = Object Detection
Object Detection is the task of identifying an object and its location in an image. Object detection is similar to an image classification problem but with one additional task – identifying the location of an object as well – a concept known as Localization.
As you can see here, the location of the object is represented by a rectangular box, popularly known as a bounding box. The bounding box gives the coordinates of the object in the image. But wait – how is Object Detection different from Object Tracking? Let’s answer this question now.
Object Tracking is a special case of Object Detection. It applies only to video data. In object tracking, the object and its location are identified in every frame of a video.
Object Detection applied to each frame of a video becomes an Object Tracking problem.
Remember that Object Detection applies to a single image, whereas Object Tracking applies to a sequence of frames. Both problems involve the same task, but the terms are used interchangeably depending on the type of data you’re working with.
A Ball Tracking System is one of the most interesting use cases of Object Detection and Tracking in sports. It is used to find the trajectory of the ball in a sports video. Hawk-Eye is the most advanced ball tracking system, used in sports like cricket, tennis, and football to identify the trajectory of the ball using high-performance cameras.
We can develop a similar system using the concepts of computer vision by identifying the ball and its location from every frame of a video. Here is a demo of what we will be building in this article:
Awesome, right?
The Ball Tracking System, as I’m sure you’ve gathered by now, is a powerful concept that transcends industries. In this section, I will showcase a few popular use cases of ball-tracking in sports.
We’ve discussed this earlier, and I’m sure most of you are familiar with Hawk-Eye in cricket.
The trajectory of the ball assists in making critical decisions during the match. For example, in cricket, for a Leg Before Wicket (LBW) decision, the trajectory of the ball helps determine whether the ball pitched in line with the stumps and whether it would have gone on to hit them.
Similarly, in tennis, during serves or a rally, the ball tracking system assists in knowing whether the ball has pitched inside or outside the permissible lines on the court:
Every team has a set of match-winning players. Picking their wickets at the earliest opportunity is crucial for any team to win matches. With the help of ball-tracking technology, we can analyze the raw videos and generate heat maps.
From these heatmaps, we can easily identify the strong and weak zones of a batsman. This helps the team develop a strategy against every player ahead of a match:
Can you think of other use cases of a ball tracking system in sports? Let me know in the comments section below!
There are different tracking algorithms as well as pre-trained models for tracking objects in a video. But there are certain challenges with them when it comes to tracking a fast-moving cricket ball.
Here are a few challenges to be aware of before tracking a fast-moving ball in a cricket video.
Hence, in this article, I will focus on 2 simple approaches to track a fast-moving ball in a sports video:
Let’s discuss them in detail now.
One of the simplest ways could be to break down the image into smaller patches, say 3 * 3 or 5 * 5 grids, and then classify every patch into one of 2 classes – whether a patch contains a ball or not. This approach is known as the sliding window approach as we are sliding the window of a patch across every part of an image.
Remember that the formation of grids can be overlapping as well. It all depends on the way you want to formulate the problem.
Here is an example that showcases the non-overlapping grids:
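To make the idea concrete, here is a minimal sketch (in Python with OpenCV) of cutting a frame into non-overlapping patches; the image file name and the patch size are assumptions for illustration only.

```python
import cv2

# Hypothetical frame file and patch size, just to illustrate the idea
frame = cv2.imread('sample_frame.jpg')
patch_h, patch_w = 32, 32

# Slide a non-overlapping window over the frame and collect the patches;
# each patch would then be classified as "ball" or "no ball"
patches = []
for y in range(0, frame.shape[0] - patch_h + 1, patch_h):
    for x in range(0, frame.shape[1] - patch_w + 1, patch_w):
        patches.append(frame[y:y + patch_h, x:x + patch_w])

print('number of patches:', len(patches))
```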
This method is really simple, but it is also slow and computationally expensive, because it has to classify a large number of patches for every single image.
So next, I will discuss the alternative approach to the sliding window.
Instead of considering every patch, we can reduce the patches for classification based on the color of the ball. Since we know the color of the ball, we can easily differentiate the patches that have a similar color to that of the ball from the rest of the patches.
This results in far fewer patches to classify. This process of grouping similar parts of an image by color is known as segmentation by color.
Time to code! Let’s develop a simple ball tracking system that tracks the ball on the pitch using Python. Download the necessary data files from here.
First, let’s read a video file and save the frames to a folder:
Reading frames:
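The original snippet isn’t reproduced here, so below is a minimal sketch using OpenCV. The video file name video.mp4 is an assumption; the frames are written to a frames/ folder, which the later steps read from.

```python
import os
import cv2

# Folder to store the extracted frames
os.makedirs('frames', exist_ok=True)

# Read the match video frame by frame and save each frame as an image
cap = cv2.VideoCapture('video.mp4')   # assumed file name
count = 0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imwrite('frames/' + str(count) + '.jpg', frame)
    count += 1
cap.release()
```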
As our objective is to track the ball on the pitch, we need to extract the frames that contain the pitch. Here, I am using the concept of scene detection to accomplish the task:
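The scene-detection snippet isn’t shown here, so here is one simple way to sketch the idea: sum the absolute pixel differences between consecutive frames and plot the values, since a spike suggests a scene change.

```python
import os
import re
import cv2
import numpy as np
import matplotlib.pyplot as plt

# List the saved frames in numeric order
frames = os.listdir('frames/')
frames.sort(key=lambda f: int(re.sub(r'\D', '', f)))

# Difference between consecutive frames: large values indicate a scene change
diffs = []
for i in range(len(frames) - 1):
    prev_img = cv2.imread('frames/' + frames[i], 0)
    next_img = cv2.imread('frames/' + frames[i + 1], 0)
    diffs.append(np.sum(cv2.absdiff(prev_img, next_img)))

plt.plot(diffs)
plt.xlabel('frame number')
plt.ylabel('frame difference')
plt.show()
```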
Output:
The outlier in the plot indicates the frame at which the scene changes. So, we fix a threshold and keep the frames before the scene change:
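A rough sketch of that step; the threshold value below is a placeholder that has to be read off the plot for your own video.

```python
# Placeholder threshold - pick it from the plot so that only the
# scene-change spike exceeds it
threshold = 3_000_000

# Index of the first frame difference that crosses the threshold
change_points = [i for i, d in enumerate(diffs) if d > threshold]
scene_end = change_points[0] if change_points else len(frames)

# Keep only the frames before the scene change, i.e. the pitch view
pitch_frames = frames[:scene_end]
print('frames containing the pitch:', len(pitch_frames))
```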
Now, we have obtained the frames that contain a pitch. Next, we will implement a segmentation approach that we discussed earlier in the article. Let’s carry out all the steps of the approach for only a single frame now.
We will read the frame and apply a Gaussian blur to remove noise from the image:
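A sketch of this step; the frame name and the 25 x 25 kernel size are assumptions (the kernel size is one of the hyperparameters mentioned at the end of the article).

```python
# Read one pitch frame in grayscale and smooth it with a Gaussian filter
gray = cv2.imread('frames/30.jpg', 0)          # assumed frame
blurred = cv2.GaussianBlur(gray, (25, 25), 0)  # assumed kernel size

plt.imshow(blurred, cmap='gray')
plt.show()
```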
Output:
As the color of the ball is known, we can easily segment the white-colored objects in the image. Here, 200 acts as the threshold: any pixel value below 200 is set to 0 and any value above it is set to 255.
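In code, this is a plain binary threshold on the blurred grayscale frame:

```python
# Pixels brighter than 200 become 255 (white), everything else becomes 0 (black)
ret, thresh = cv2.threshold(blurred, 200, 255, cv2.THRESH_BINARY)

plt.imshow(thresh, cmap='gray')
plt.show()
```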
Output:
As you can see here, the white-colored objects are segmented: white pixels in the mask correspond to white-colored objects in the frame, and black pixels to everything else. And that’s it – we have separated the white-colored objects from the rest.
Now, we will find the contours of segmented objects in an image:
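Using OpenCV’s findContours (the snippet assumes OpenCV 4.x, where the function returns two values):

```python
# Outlines of the white regions in the thresholded image
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
print('number of contours:', len(contours))
```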
Draw the contours on the original image:
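A sketch of the drawing step, reading the same frame in color this time:

```python
# Read the original (color) frame and draw every contour on a copy of it in red
img = cv2.imread('frames/30.jpg')
img_copy = img.copy()
cv2.drawContours(img_copy, contours, -1, (0, 0, 255), 2)

plt.imshow(cv2.cvtColor(img_copy, cv2.COLOR_BGR2RGB))
plt.show()
```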
Output:
Next, extract the patches from an image using the contours:
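Each contour’s bounding rectangle gives a candidate patch:

```python
# Crop a rectangular patch around each contour; these are the candidate
# regions that will be classified as "ball" or "no ball"
patches = []
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    patches.append(img[y:y + h, x:x + w])
```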
It’s time to build an image classifier to identify the patch containing the ball.
Reading and preparing the dataset:
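The layout of the downloaded data isn’t shown here, so the snippet below assumes a folder of patch images plus a CSV of labels (1 = contains the ball, 0 = does not); adjust the file and column names to whatever your download contains.

```python
import numpy as np
import pandas as pd
import cv2

labels = pd.read_csv('labels.csv')   # assumed label file with 'image' and 'label' columns

X, y = [], []
for _, row in labels.iterrows():
    patch = cv2.imread('patches/' + row['image'], 0)   # assumed patch folder
    patch = cv2.resize(patch, (32, 32))                # fixed input size (assumed)
    X.append(patch / 255.0)                            # scale pixel values to [0, 1]
    y.append(row['label'])

# Flatten each 32 x 32 patch into a 1024-dimensional vector
X = np.array(X).reshape(-1, 32 * 32)
y = np.array(y)
```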
Split the dataset into train and validation:
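For example, with scikit-learn (the 80/20 split is an assumption):

```python
from sklearn.model_selection import train_test_split

# Hold out 20% of the patches for validation, preserving the class balance
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
```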
Build a baseline model for identifying the patch containing ball:
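The sketch below uses a small fully connected Keras network as the baseline; the architecture and training settings are assumptions, and any simple binary classifier would work here.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# A small fully connected network: ball vs. no ball
model = Sequential([
    Dense(100, activation='relu', input_shape=(32 * 32,)),
    Dense(50, activation='relu'),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=10, batch_size=32,
          validation_data=(X_valid, y_valid))
```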
Evaluate the model on the validation data:
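For instance, using Keras’s evaluate:

```python
# Accuracy of the baseline on the held-out patches
loss, acc = model.evaluate(X_valid, y_valid)
print('validation accuracy:', acc)
```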
Repeat the same steps for each frame in the video, followed by classification:
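Putting the earlier steps together, here is a sketch that runs blur, thresholding, contour extraction, and classification on every pitch frame, keeping the patch the model is most confident about as the ball; the 0.5 probability cutoff is an assumption.

```python
# Run blur -> threshold -> contours -> classify for every pitch frame and
# record the location of the most confident "ball" patch per frame
ball_rows = []
for name in pitch_frames:
    img = cv2.imread('frames/' + name)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (25, 25), 0)
    _, thresh = cv2.threshold(blurred, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)

    best = None
    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)
        patch = cv2.resize(gray[y:y + h, x:x + w], (32, 32)) / 255.0
        prob = model.predict(patch.reshape(1, -1), verbose=0)[0][0]
        if prob > 0.5 and (best is None or prob > best[0]):
            best = (prob, x, y, w, h)

    if best is not None:
        ball_rows.append({'frame': name, 'x': best[1], 'y': best[2],
                          'w': best[3], 'h': best[4]})

ball_df = pd.DataFrame(ball_rows)
```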
Have a glance at the frames containing the ball along with the location:
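For example, with the ball_df built above:

```python
# First few detections: frame name plus the bounding box of the ball
print(ball_df.head())
```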
Next, we will draw a bounding box around the ball in the frames that contain it and save them back to the folder:
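A sketch of this step using OpenCV’s rectangle drawing:

```python
# Draw the predicted bounding box on each frame that contains the ball
# and overwrite the saved frame with the annotated version
for idx in range(len(ball_df)):
    row = ball_df.loc[idx]
    img = cv2.imread('frames/' + row['frame'])
    x, y, w, h = int(row['x']), int(row['y']), int(row['w']), int(row['h'])
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imwrite('frames/' + row['frame'], img)
```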
Let’s now convert the frames back into a video:
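A minimal sketch with OpenCV’s VideoWriter; the output file name, codec, and frame rate are assumptions, so match the fps to your source video.

```python
# Stitch the (annotated) pitch frames back into a video
first = cv2.imread('frames/' + pitch_frames[0])
height, width = first.shape[:2]

fourcc = cv2.VideoWriter_fourcc(*'mp4v')
out = cv2.VideoWriter('ball_tracking_output.mp4', fourcc, 25, (width, height))
for name in pitch_frames:
    out.write(cv2.imread('frames/' + name))
out.release()
```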
Output:
How cool is that? Congratulations on building your own ball tracking system for cricket!
That’s it for today! This brings us to the end of the tutorial on ball tracking for cricket. Keep in mind that we built only a baseline model for the image classification task, so there is still plenty of room for improvement. Also, there are a few hyperparameters in this approach, such as the size of the Gaussian filter and the thresholding value, that must be adjusted depending on the type of video.
What are your thoughts on the system we built? Share your ideas and feedback in the comments section below and let’s discuss.
Hello Aravind Pai, very well written article. I see that the code above applies to a white-coloured ball; what do we need to change to detect a red-coloured ball? Is it just changing the HSV code?
Thanks, Santosh. Yes, it's just changing the color to red if you're detecting the red color ball.
Hi Arvind, when I try to run the ball tracking image classification in a Jupyter notebook, the step that lists the frame file names – frames = os.listdir('frames/') followed by frames.sort(key=lambda f: int(re.sub('\D', '', f))) – fails with: [WinError 3] The system cannot find the path specified: 'frames/'. I tried creating the path, but the problem still persists. Kindly suggest and help. Thanks, Bala.
Hi, make sure that the folder frames and Jupyter notebook are in the same working directory. If not, provide the full path of the folder.
Hi, I read your article with great interest and wondered if you could contact me for a chat?
Hi, great work. I ran this code on my machine, but: 1. It shows tracking only on the pitch and not off the pitch (even when the ball is against the green background of the field). Can you suggest what could be the issue? 2. When drawing the bounding box, I had to apply an offset to idx (x = ball_df.loc[idx+offset, 'x']) to place the rectangle in the correct frame. Am I missing anything?
Arvind, this is a very useful article and I appreciate your effort on this. I have some questions. 1. Can we integrate this method into a live feed from a camera? 1.1 If possible, what are the best libraries for capturing a video stream? 1.2 If possible, do we still need to save frames to a folder? Thank you
Bro, can you share the dataset link?
Hi Aravind, excellent article. If I want to do this for a different sport (say tennis) how can I easily generate the test/train data? Is there any automated tool to do it? Otherwise its very tedious to manually do so. Thanks in advance!
Hi Aravind, I really like your article. I have a question: I want to track the ball as well as the bat to see whether the batsman is out or not while taking a run. How can this be done?
Hi Aravind, very interesting and informative article. I am new to the world of Python and coding but interested in looking at this. Can you explain a few items to a novice, please? Firstly, when I try to run the first set of code it fails when trying to import the 3 files – what are they? Do I need to rename the videos? Secondly, with regards to tracking the ball and highlighting it with a box, how easy would it be to change the parameters to leave a line trace where the ball has been? Finally, is this something that can be implemented with a live video feed, or would it need to be a saved video? Thanks.
Hi Arvind, can you please share the dataset you used for this project? I really need it for my project. Thank you.
Hi Arvind, can you please share the dataset you used? The link for the files given in this article is not working, and I really need it urgently. Thank you.
Hey Aravind, great article! I love this idea and I'm already trying it out. However the data link that you have on this page to a Google drive link is broken. I'd greatly appreciate if you could update it when you get a chance.
Hi Arvind, I don't find the link to the dataset working anymore. Could you please update it in the blog?
Can you please provide the dataset?
Hi, it's fantastic work, but could you please share the dataset? The link mentioned does not open for some reason. Thanks in advance.
Hi, the link to the dataset isn't working; could you please fix it?
Hi, the dataset has been deleted – Google Drive shows "not found". Could you please upload it again?
The dataset is not available on the link. Can you please provide the updated link?
Hi, the link to the dataset isn't working. Could you please give me a new one or fix it, please?
The link to the dataset is not working; can you please help me?
Hi, dataset link isn't working. Can you please fix it or share a new link?
Hey Arvind, can you please provide the link to the dataset?