During a pandemic, it is important to keep a minimum distance from others to prevent the spread of disease. However, this distance is hard to judge in everyday life. To address this issue, we aim to build a visual assistant that tracks one's distance from others on the street.
Given input from a monocular mobile phone camera, a webcam, or a video file, we detect humans, estimate their depth from the RGB image, and trigger a warning when the distance falls below a safe threshold.
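The warning step above can be sketched as follows. This is a minimal illustration, not the project's actual implementation: it assumes pinhole camera intrinsics (the `FX`, `FY`, `CX`, `CY` values below are hypothetical placeholders for calibrated values) and assumes the depth estimator already provides a metric depth for each detected person.

```python
import math

# Hypothetical camera intrinsics; in practice these come from calibration
# or the device API.
FX, FY = 1000.0, 1000.0   # focal lengths in pixels
CX, CY = 640.0, 360.0     # principal point in pixels

MIN_DISTANCE_M = 2.0      # social-distancing threshold in meters

def back_project(u, v, depth):
    """Pinhole back-projection of pixel (u, v) with known depth (meters)."""
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return (x, y, depth)

def pairwise_warnings(detections):
    """detections: list of (u, v, depth) per detected person.
    Returns (i, j, distance) for every pair closer than the threshold."""
    points = [back_project(u, v, d) for u, v, d in detections]
    warnings = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dist = math.dist(points[i], points[j])  # Euclidean 3D distance
            if dist < MIN_DISTANCE_M:
                warnings.append((i, j, dist))
    return warnings
```

Because the camera itself sits at the origin of this coordinate frame, the same back-projected points also give the user's own distance to each person directly as the norm of each point.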
This project was built within the Perception and Learning in Robotics and Augmented Reality practical course (Praktikum) offered at the Technical University of Munich (TUM) in the summer semester of 2020.
- Mobile application: https://github.com/plarr2020-team1/flutter_app
- Video or webcam application: https://github.com/plarr2020-team1/application
- Monocular depth estimation
  - Mannequin Challenge
- Human segmentation / tracking

and more! Make sure to check out our GitHub org: https://github.com/plarr2020-team1
Supervised by Evin Pınar Örnek (evinpinar)
- Leal-Taixé, Laura, et al. “MOTChallenge 2015: Towards a benchmark for multi-target tracking.” arXiv preprint arXiv:1504.01942 (2015).
- Godard, Clément, et al. “Digging into self-supervised monocular depth estimation.” Proceedings of the IEEE International Conference on Computer Vision. 2019.
- Li, Zhengqi, et al. “Learning the depths of moving people by watching frozen people.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.
- Bolya, Daniel, et al. “YOLACT: Real-time instance segmentation.” Proceedings of the IEEE International Conference on Computer Vision. 2019.
- Bergmann, Philipp, Tim Meinhardt, and Laura Leal-Taixé. “Tracking without bells and whistles.” Proceedings of the IEEE International Conference on Computer Vision. 2019.