Team Members: Christopher Neff, Matias Mendieta, Daniel Lingerfelt, Christopher Bean
PhD Supervisor: Samuel Rogers
Sponsor: NVIDIA
Project Overview:
Embedded vision is a top-tier, fast-growing area. Embedded vision refers to the deployment of visual capabilities on embedded systems for a better understanding of 2D/3D visual scenes. It covers a variety of rapidly growing markets and applications, including Advanced Driver Assistance Systems (ADAS), real-time monitoring, smart infrastructure and environments, autonomous video surveillance, and robotics. With the boom in deep learning and AI, ubiquitous video analytics and computer vision will become an inherent part of many embedded platforms integrated into our communities. Embedded vision stands to be a pioneering market in digital signal processing.
The aim of this project is to create a distributed real-time embedded vision system for object detection and tracking across multiple cameras. The system consists of multiple cameras, each paired with a single embedded vision platform (called an edge node). Each embedded vision platform operates as a standalone IoT device capable of running real-time video analytics. The embedded vision platforms communicate directly with one another to share object information, realizing distributed detection and tracking across a larger environment; a minimal sketch of this edge-to-aggregator data flow appears below. To this end, the students will implement deep-learning-based object detection and tracking, as well as classical computer vision algorithms, on state-of-the-art embedded platforms: the Nvidia Jetson TX1 and TX2 series, which integrate an Nvidia GPU with ARM CPUs on a single chip, and the Xilinx Zynq UltraScale+, which integrates reconfigurable FPGA fabric with ARM CPUs. At the same time, an Nvidia Tesla V100 GPU (a state-of-the-art server-class GPU) will be used for data aggregation and higher-level processing across the cameras. The Tesla V100 will also be used for training the deep-learning-based object detection and tracking models.
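As a rough illustration of how an edge node might share object information rather than raw video, the sketch below runs a placeholder detector on each frame and broadcasts compact JSON metadata over UDP. The detector stub, the aggregator address, the port, and the camera ID are all hypothetical assumptions for the sketch, not details specified by the project.

    # edge_node.py -- minimal sketch of one embedded vision node (assumptions:
    # detect_objects() stands in for a real CNN detector on the Jetson GPU;
    # the aggregator address/port and camera ID are made up for illustration).
    import json
    import socket
    import time

    AGGREGATOR_ADDR = ("192.168.1.100", 5005)  # assumed aggregation server
    CAMERA_ID = "cam0"                          # assumed node identifier

    def detect_objects(frame):
        # Placeholder: a real node would run a deep learning detector here.
        return [{"label": "person", "bbox": [120, 80, 60, 160], "score": 0.91}]

    def main():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        while True:
            frame = None  # placeholder: a real node would grab a camera frame
            message = {
                "camera": CAMERA_ID,
                "timestamp": time.time(),
                "objects": detect_objects(frame),
            }
            # Share only compact object metadata, not raw video.
            sock.sendto(json.dumps(message).encode("utf-8"), AGGREGATOR_ADDR)
            time.sleep(1 / 30)  # emulate a 30 fps processing loop

    if __name__ == "__main__":
        main()

Sending metadata instead of video keeps the per-node bandwidth small, which is what makes direct node-to-node sharing across many cameras practical.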
This project offers many learning opportunities! Accepted students will work with an interdisciplinary team on cutting-edge technology in AI, computer vision, and edge computing. The students will be mentored by Ph.D. and M.Sc. students in the TeCSAR lab (directed by Dr. Hamed Tabkhi). In addition, the students will use deep learning hardware accelerators and LSTM-based tracking algorithms developed at the TeCSAR lab.
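The TeCSAR LSTM-based tracking algorithms are not detailed here; purely to illustrate the general idea of LSTM-based tracking, the sketch below uses PyTorch's nn.LSTM to predict a track's next bounding box from its recent history. The model structure and dummy data are assumptions, not the lab's actual design.

    # Illustrative only: predict the next bounding box (x, y, w, h) of a
    # track from a short history with an LSTM. Not the TeCSAR algorithm.
    import torch
    import torch.nn as nn

    class TrackPredictor(nn.Module):
        def __init__(self, hidden_size=64):
            super().__init__()
            self.lstm = nn.LSTM(input_size=4, hidden_size=hidden_size,
                                batch_first=True)
            self.head = nn.Linear(hidden_size, 4)

        def forward(self, boxes):
            # boxes: (batch, seq_len, 4) sequence of past bounding boxes
            out, _ = self.lstm(boxes)
            return self.head(out[:, -1, :])  # predicted next box

    model = TrackPredictor()
    history = torch.randn(1, 10, 4)  # one track, ten past boxes (dummy data)
    print(model(history).shape)      # torch.Size([1, 4])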
Initial Project Requirements:
The students will work with an Nvidia Tesla V100 GPU (donated by Nvidia) for training and aggregated processing, and with Nvidia Jetson TX1 and TX2 boards, Xilinx Zynq ZedBoards, and Xilinx Zynq UltraScale+ boards for real-time vision processing next to the camera. The development boards will be provided by the TeCSAR lab. In addition, the students will have access to space and computer workstations in the TeCSAR lab.
Expected Deliverables/Results:
A design solution for real-time distributed tracking. This includes optimized, full end-to-end tracking on the Nvidia Jetson platforms, hardware/software co-design on the Xilinx Zynq platform, and distributed data sharing with top-level data aggregation on the Tesla V100, as sketched below.
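As a hypothetical counterpart to the edge-node sketch above, the aggregation side might look like the following: a server process receives per-camera detections over UDP and maintains a global view. The listening port matches the assumed value from the edge-node sketch; the cross-camera fusion step is only indicated by a comment, since that logic is the substance of the project itself.

    # aggregator.py -- minimal sketch of the aggregation server (assumed to
    # run on the Tesla V100 host; port and message format match the
    # hypothetical edge-node sketch above).
    import json
    import socket
    from collections import defaultdict

    LISTEN_ADDR = ("0.0.0.0", 5005)  # assumed port

    def main():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(LISTEN_ADDR)
        latest = defaultdict(list)  # camera id -> most recent object list
        while True:
            data, _ = sock.recvfrom(65535)
            msg = json.loads(data.decode("utf-8"))
            latest[msg["camera"]] = msg["objects"]
            # Higher-level processing (cross-camera re-identification and
            # global track fusion) would consume `latest` here.
            total = sum(len(objs) for objs in latest.values())
            print(f"{len(latest)} cameras reporting, {total} objects in view")

    if __name__ == "__main__":
        main()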
Disposition of Deliverables at the End of the Project:
A prototype of distributed real-time object detection and tracking across multiple cameras. The students will showcase the benefits of their technology by emulating a smart grocery store scenario, in which the distributed vision processing system detects individual customers and tracks their shopping behavior.