The Random Walk Blog

2025-05-29

From Chepauk's Stands to Smart Surveillance: How AI Revolutionized Match Day Security


The roar of the crowd, the crack of the bat, the sea of yellow jerseys - the Indian Premier League (IPL) is a spectacle like no other. Behind the on-field drama, however, another high-stakes game was playing out at Chennai's Chepauk stadium during the 2025 season. Our technical team had the opportunity to work this IPL season with the Greater Chennai Police and the Chennai Super Kings (CSK) to reimagine stadium security by integrating cutting-edge AI with real-time monitoring. This is the story of how we turned a promising idea into a comprehensive surveillance solution.

The Spark: Creating a Proof of Concept in the Heart of the Action

Our mission began three days before the season's opening ball. We weren't simply watching; we were embedded in the Chepauk stadium's CCTV control and monitoring room. Our MVP? A beast of a server, powered by an RTX 4090, able to handle massive data streams. We pieced together our initial Proof of Concept (POC) with tremendous help from the stadium staff, who quickly met our demands for high-throughput broadband, backup systems, and all of the essential hardware.

We stood on the shoulders of open-source giants like Frigate and Go2RTC. These powerful tools let us process a crucial subset of camera streams efficiently: downsampling footage, detecting every motion, and then tracking and categorizing those movements.
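Conceptually, the per-stream pipeline reduces each frame before comparing it against the previous one. The sketch below illustrates the idea with naive decimation and frame differencing; Frigate handles all of this natively, and the function names and threshold here are purely illustrative.

```python
# Illustrative motion pipeline: downsample frames, then flag frames whose
# pixel delta against the previous frame exceeds a threshold.

def downsample(frame, factor=2):
    """Keep every `factor`-th pixel in each dimension (naive decimation)."""
    return [row[::factor] for row in frame[::factor]]

def detect_motion(prev, curr, threshold=30):
    """Return True if any pixel changed by more than `threshold`."""
    for prev_row, curr_row in zip(prev, curr):
        for a, b in zip(prev_row, curr_row):
            if abs(a - b) > threshold:
                return True
    return False
```

Downsampling first keeps the per-frame cost low enough to run many streams on one box, at the price of missing very small movements.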


Our custom-trained labels could even identify "CSK fan," "RCB fan," or "TN police," providing granular insights into crowd dynamics. This object lifecycle data became the bedrock for calculating crowd anomalies and net headcount (influx versus exodus), offering unprecedented situational awareness from day one.
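The net-headcount calculation reduces to bookkeeping over object lifecycle events. A minimal sketch, assuming each track's lifecycle yields a simple event dict with a `direction` field (the actual event schema differed):

```python
# Net headcount from object lifecycle events: entries minus exits.

def net_headcount(events):
    """events: iterable of dicts with 'direction' set to 'in' or 'out'."""
    influx = sum(1 for e in events if e["direction"] == "in")
    exodus = sum(1 for e in events if e["direction"] == "out")
    return {"influx": influx, "exodus": exodus, "net": influx - exodus}
```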

Scaling Up: Earning Trust and Expanding Capabilities

Our initial success quickly caught the attention of the police department's high command. Impressed by the capabilities, they entrusted us with a broader scope. Over the following weeks, we steadily enhanced our system, introducing sophisticated motion filters and motion zones to cut through the noise. This let us categorize motion types more accurately, identifying loitering, potential crowd anomalies, and other critical events with greater precision.
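Loitering detection, for instance, boils down to measuring how long a track dwells inside a zone. A hedged sketch, where the track and zone structures are assumptions about the data model rather than our production schema:

```python
# Flag tracks that stay inside a rectangular zone longer than `max_dwell`.

def find_loiterers(tracks, zone, max_dwell=120):
    """tracks: {track_id: [(timestamp, x, y), ...]}; zone: (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = zone
    flagged = []
    for track_id, points in tracks.items():
        # Timestamps of all points falling inside the zone.
        inside = [t for t, x, y in points if x1 <= x <= x2 and y1 <= y <= y2]
        if inside and max(inside) - min(inside) > max_dwell:
            flagged.append(track_id)
    return flagged
```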


Our system's reach extended beyond general surveillance. We delved into creating use cases for specific teams. Since we could already differentiate team support based on jersey colors, we took it a step further. By scraping Twitter for live opinions based on player name keywords, we began tracking player sentiment trends as matches unfolded, offering a fascinating new dimension to understanding the fan experience and player performance perception in real-time.
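The sentiment-trend idea can be sketched in a few lines. Here a tiny hand-written lexicon stands in for the real sentiment model, and the post format is a placeholder for the scraped Twitter data:

```python
# Toy player-sentiment tracker: posts mentioning a player are scored with a
# small lexicon and aggregated per match minute.

POSITIVE = {"great", "brilliant", "six", "hero"}
NEGATIVE = {"poor", "drop", "slow", "out"}

def sentiment_trend(posts, player):
    """posts: [(minute, text)]; returns {minute: net sentiment} for `player`."""
    trend = {}
    for minute, text in posts:
        words = set(text.lower().split())
        if player.lower() not in words:
            continue
        score = len(words & POSITIVE) - len(words & NEGATIVE)
        trend[minute] = trend.get(minute, 0) + score
    return trend
```

In practice the keyword match and scoring were far richer, but the aggregation shape - keyword filter, score, bucket by time - is the same.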

Innovation in Action: QR Codes, Cloud Scaling, and the Quest for Identity

While our AI was making great progress, we realized that automation could not be the only solution. To address potential gaps and empower the public, we created a QR-based reporting system. Linked to a Telegram bot, this allowed viewers to raise complaints instantaneously. These reports were then triaged by AI into categories and severity levels before being routed to a dedicated police team prepared to resolve issues on the spot.
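The triage step maps free-text reports to a category and severity. In this minimal sketch, keyword rules stand in for the AI classifier; the categories, keywords, and severity levels are illustrative:

```python
# Rule-based stand-in for the AI complaint triage: first matching rule wins.

RULES = [
    ("theft",   "high",   {"pickpocket", "stolen", "snatched"}),
    ("medical", "high",   {"faint", "injury", "ambulance"}),
    ("crowd",   "medium", {"pushing", "overcrowded", "stampede"}),
]

def triage(report):
    """Map a free-text complaint to a category and severity."""
    words = set(report.lower().split())
    for category, severity, keywords in RULES:
        if words & keywords:
            return {"category": category, "severity": severity}
    return {"category": "general", "severity": "low"}
```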


As the season progressed, so did our goals. In the second part, we tackled two big challenges: expanding our system to accommodate considerably more camera feeds than our initial subset, and addressing one of the holy grails of automated surveillance: person identity tagging and tracking across several cameras.

Scaling was accomplished through a multi-pronged approach that included running several Frigate instances on distributed cloud hardware, building an efficient data aggregation gateway, and using a shared network volume, among other innovations.
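At the heart of the aggregation gateway sits a merge step: events arriving from several Frigate instances become one time-ordered stream, tagged with their source. The field names below are assumptions about the event payloads, not Frigate's actual schema:

```python
# Merge per-instance event lists into one time-ordered stream, tagging each
# event with the instance it came from.

def merge_event_streams(streams):
    """streams: {instance_id: [event dicts with a 'ts' field]} -> sorted list."""
    merged = [
        dict(event, instance=instance)
        for instance, events in streams.items()
        for event in events
    ]
    return sorted(merged, key=lambda e: e["ts"])
```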

The identity challenge was significantly more complicated. An extended two-week hiatus between matches provided the critical window we needed. Our team worked extensively to build a distributed microservices architecture capable of accurately detecting faces within motion detections classified as persons. Faces were then tracked with Norfair, and facial details were extracted as fixed-size vector embeddings with Buffalo V2. This unlocked a slew of new applications.
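Once faces are embedded as fixed-size vectors, identity matching is a nearest-neighbor search by cosine similarity. A minimal sketch of that lookup, with a toy gallery and an illustrative threshold (the real system used a vector store, not a dict):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_identity(query, gallery, threshold=0.6):
    """gallery: {name: embedding}; return best match above threshold, or None."""
    best_name, best_score = None, threshold
    for name, embedding in gallery.items():
        score = cosine(query, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```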


Suspicious individuals linked to anomaly warnings could now be identified. If the police provided images or sketches, we could compare them against our database of face embeddings and even general image snapshot embeddings (helpful for queries like "individual wearing all black with a trenchcoat and a fedora"). This combined capability of matching both text and image queries let us quickly retrieve relevant events and the identities involved.

The Human Element: Impact and User-Centric Design

The results of these efforts were tangible. With signals coming in directly from spectators, paired with our AI's constant monitoring and tracking, and the Police's unflinching cooperation, we helped uncover and apprehend pickpockets - a clear win for public safety (as also reported by the Indian Express: https://indianexpress.com/article/sports/ipl/ipl-2025-csk-chepauk-snatching-recovery-chennai-police-singam-app-9955863/).

However, complex backend technology is meaningless unless it is accessible and intuitive to end users. Countless overtime hours and Red Bull-fueled all-nighters (not sponsored…yet) went into the user interface. We customized timeline scrubbing components, incorporated images of current team player lineups, overlaid maps with alert and camera data, and added a plethora of other visual elements. The goal was to ensure that the ultimate stakeholders, the security staff, were not only informed but also empowered and impressed within the first five seconds of contact.


The Final Whistle (For Now)

Our journey with IPL and CSK during the 2025 season was more than just an engineering project; it was a testament to the power of collaboration, innovation, and a relentless drive to solve real-world problems. From a fledgling POC to a sophisticated, multi-faceted surveillance ecosystem, we demonstrated how AI can transform security operations in complex, dynamic environments. The lessons learned and the technology developed at Chepauk have laid the groundwork for even more ambitious endeavors in the future. The game, it seems, is just getting started.

Related Blogs

Edge System Monitoring: The Key to Managing Distributed AI Infrastructure at Scale

Managing thousands of distributed computing devices, each handling critical real-time data, presents a significant challenge: ensuring seamless operation, robust security, and consistent performance across the entire network. As these systems grow in scale and complexity, traditional monitoring methods often fall short, leaving organizations vulnerable to inefficiencies, security breaches, and performance bottlenecks. Edge system monitoring emerges as a transformative solution, offering real-time visibility, proactive issue detection, and enhanced security to help businesses maintain control over their distributed infrastructure.


YOLOv8, YOLO11 and YOLO-NAS: Evaluating Their Strengths on Custom Datasets

It might evade the general user’s eye, but Object Detection is one of the most used technologies in the recent AI surge, powering everything from autonomous vehicles to retail analytics. And as a result, it is also a field undergoing extensive research and development. The YOLO family of models have been at the forefront of this since J. Redmon et al. published the research paper “You Only Look Once: Unified, Real-Time Object Detection” in 2015, which introduced object detection as a regression problem rather than a classification problem (an approach that governed most prior work), making object detection faster than ever. YOLO v8 and YOLO NAS are two widely used variations of the YOLO, while YOLO11 is the latest iteration in the Ultralytics YOLO series, gaining popularity.


The Intersection of Computer Vision and Immersive Technologies in AR/VR

In recent years, computer vision has transformed the fields of Augmented Reality (AR) and Virtual Reality (VR), enabling new ways for users to interact with digital environments. The AR/VR market, fueled by computer vision advancements, is projected to reach $296.9 billion by 2024, underscoring the impact of these technologies. As computer vision continues to evolve, it will create even more immersive experiences, transforming everything from how we work and learn to how we shop and socialize in virtual spaces. An example of computer vision in AR/VR is Random Walk’s WebXR-powered AI indoor navigation system that transforms how people navigate complex buildings like malls, hotels, or offices. Addressing the common challenges of traditional maps and signage, this AR experience overlays digital directions onto the user’s real-world view via their device's camera. Users select their destination, and AR visual cues—like arrows and information markers—guide them precisely. The system uses SIFT algorithms for computer vision to detect and track distinctive features in the environment, ensuring accurate localization as users move. Accessible through web browsers, this solution offers a cost-effective, adaptable approach to real-world navigation challenges.


The Great AI Detective Games: YOLOv8 vs YOLOv11

Meet our two star detectives at the YOLO Detective Agency: the seasoned veteran Detective YOLOv8 (68M neural connections) and the efficient rookie Detective YOLOv11 (60M neural pathways). Today, they're facing their ultimate challenge: finding Waldo in a series of increasingly complex scenes.


AI-Powered vs. Traditional Sponsorship Monitoring: Which is Better?

Picture this: You, a brand manager, are at a packed stadium, the crowd's roaring, and suddenly you spot your brand's logo flashing across the giant screen. Your heart races, but then a nagging question hits you: "How do I know if this sponsorship is actually worth the investment?" As brands invest millions in sponsorships, the need for accurate, timely, and insightful monitoring has never been greater. But here's the million-dollar question: Is the traditional approach to sponsorship monitoring still cutting it, or is AI-powered monitoring the new MVP? Let's see how these two methods stack up against each other for brand detection in the high-stakes arena of sports sponsorship.

