Georgia Tech Presents Latest in Machine Learning Research at Computer Vision and Pattern Recognition Conference June 19-24
Georgia Institute of Technology researchers will present new technical findings in artificial intelligence, machine learning, and computer vision research and applications at the Computer Vision and Pattern Recognition (CVPR) conference taking place from June 19-24, 2022, in New Orleans, Louisiana, and virtually.
The institute is a leading contributor to the technical program, with researchers presenting 11 papers in the following tracks:
- 3D from multi-view and sensors
- Datasets and evaluation
- Navigation and autonomous driving
- Recognition: detection, categorization, retrieval
- Self- & semi- & meta- & unsupervised learning
- Vision + language
- Vision applications and systems
“Researchers in the Machine Learning Center at Georgia Tech aim to research and develop innovative and sustainable technologies using machine learning and artificial intelligence that serve broader communities in socially and ethically responsible ways,” said Irfan Essa, director of the center and senior associate dean in the College of Computing. “The GT research at CVPR reflects this broader goal, and we are actively building pathways to connect our experts to explore the implications of this technology in the world.”
Georgia Tech researchers at CVPR are collaborating with more than 100 co-authors from dozens of organizations spanning industry, government, and academia.
The conference will draw leading authors, academics, and experts in key areas of artificial intelligence with an expected crowd of more than 7,500 attendees this year. Hosted by the IEEE Computer Society (IEEE CS) and the Computer Vision Foundation (CVF), CVPR is the premier annual computer vision event comprising the main conference and several co-located workshops and short courses.
ML@GT has created an interactive visual analysis of the CVPR 2022 papers program to show current trends in the field. The analysis breaks down the number of papers and authors by research area and allows users to explore areas of interest, including oral and poster papers on a particular topic. Research can also be narrowed down to particular institutions.
Details on Georgia Tech's work at CVPR, including paper titles and authors, are below.
Georgia Tech Research at CVPR 2022
3D FROM MULTI-VIEW AND SENSORS
Learning To Solve Hard Minimal Problems
Petr Hruby, Timothy Duff, Anton Leykin, Tomas Pajdla
Panoptic Neural Fields: A Semantic Object-Aware Neural Scene Representation
Abhijit Kundu, Kyle Genova, Xiaoqi Yin, Alireza Fathi, Caroline Pantofaru, Leonidas J. Guibas, Andrea Tagliasacchi, Frank Dellaert, Thomas Funkhouser
DATASETS AND EVALUATION
Ego4D: Around the World in 3,000 Hours of Egocentric Video
Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Abrham Gebreselasie, Cristina González, James Hillis, Xuhua Huang, Yifei Huang, Wenqi Jia, Weslie Khoo, Jáchym Kolář, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Ziwei Zhao, Yunyi Zhu, Pablo Arbeláez, David Crandall, Dima Damen, Giovanni Maria Farinella, Christian Fuegen, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik
Multi-Dimensional, Nuanced and Subjective – Measuring the Perception of Facial Expressions
De'Aira Bryant, Siqi Deng, Nashlie Sephus, Wei Xia, Pietro Perona
NAVIGATION AND AUTONOMOUS DRIVING
Is Mapping Necessary for Realistic PointGoal Navigation?
Ruslan Partsey, Erik Wijmans, Naoki Yokoyama, Oles Dobosevych, Dhruv Batra, Oleksandr Maksymets
RECOGNITION: DETECTION, CATEGORIZATION, RETRIEVAL
Cross-Domain Adaptive Teacher for Object Detection
Yu-Jhe Li, Xiaoliang Dai, Chih-Yao Ma, Yen-Cheng Liu, Kan Chen, Bichen Wu, Zijian He, Kris Kitani, Peter Vajda
Group R-CNN for Weakly Semi-Supervised Object Detection With Points
Shilong Zhang, Zhuoran Yu, Liyang Liu, Xinjiang Wang, Aojun Zhou, Kai Chen
SELF- & SEMI- & META- & UNSUPERVISED LEARNING
Unbiased Teacher v2: Semi-Supervised Object Detection for Anchor-Free and Anchor-Based Detectors
Yen-Cheng Liu, Chih-Yao Ma, Zsolt Kira
VISION + LANGUAGE
Beyond a Pre-Trained Object Detector: Cross-Modal Textual and Visual Context for Image Captioning
Chia-Wen Kuo, Zsolt Kira
Habitat-Web: Learning Embodied Object-Search Strategies From Human Demonstrations at Scale
Ram Ramrakhya, Eric Undersander, Dhruv Batra, Abhishek Das
VISION APPLICATIONS AND SYSTEMS
Episodic Memory Question Answering
Samyak Datta, Sameer Dharur, Vincent Cartillier, Ruta Desai, Mukul Khanna, Dhruv Batra, Devi Parikh
DEMO
DetectorDetective: Investigating the Effects of Adversarial Examples on Object Detectors
Sivapriya Vellaichamy, Matthew Hull, Zijie J. Wang, Nilaksh Das, ShengYun Peng, Haekyu Park, Duen Horng (Polo) Chau
VisCUIT: Visual Auditor for Bias in CNN Image Classifier
Seongmin Lee, Zijie J. Wang, Judy Hoffman, Duen Horng (Polo) Chau
WORKSHOP
Multi-Agent Behavior: Representation, Modeling, Measurement, and Applications
Learning Behavior Representations Through Multi-Timescale Bootstrapping
Mehdi Azabou, Michael Mendelson, Maks Sorokin, Shantanu Thakoor, Nauman Ahad, Carolina Urzay, Mohammad Gheshlaghi Azar, Eva L. Dyer