Human vs Machine: Establishing a Human Baseline for Multimodal Location Estimation
Jaeyoung Choi
ICSI
Tuesday, October 15, 2013
12:30 p.m., Conference Room 5A
Abstract:
In recent years, the problem of video location estimation (i.e., estimating the longitude/latitude coordinates of a video without GPS information) has been approached with diverse methods and ideas in the research community, and significant improvements have been made. So far, however, systems have only been compared against each other, and no systematic study of human performance has been conducted. Based on a human-subject study comprising 11,900 experiments, this article presents a human baseline for location estimation across different combinations of modalities (audio, audio/video, audio/video/text). Furthermore, this article compares state-of-the-art location estimation systems against the human baseline. Although humans' overall performance on multimodal video location estimation is better than that of current machine learning approaches, the difference is quite small: for 41% of the test set, the machine's accuracy was superior to the humans'. We present case studies and discuss why machines did better on some videos and not on others. Our analysis suggests new directions and priorities for future work on improving location inference algorithms.
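For readers unfamiliar with how location-estimation accuracy is typically scored, the sketch below illustrates one common evaluation style: computing the great-circle (haversine) distance between an estimated coordinate and the ground-truth coordinate, then checking whether it falls within a distance threshold. This is a minimal illustrative sketch; the function names, the 100 km threshold, and the sample coordinates are assumptions for exposition and are not taken from the study itself.

    from math import radians, sin, cos, asin, sqrt

    EARTH_RADIUS_KM = 6371.0

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance in km between two (lat, lon) points given in degrees."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        dlat, dlon = lat2 - lat1, lon2 - lon1
        a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
        return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

    def within_threshold(estimate, truth, threshold_km=100.0):
        """True if the estimated location lies within threshold_km of the ground truth."""
        return haversine_km(*estimate, *truth) <= threshold_km

    # Hypothetical example: an estimate near Berkeley vs. ground truth in San Francisco.
    estimate = (37.8715, -122.2730)   # system output (lat, lon), illustrative only
    truth = (37.7749, -122.4194)      # ground-truth GPS tag, illustrative only
    print(f"Error: {haversine_km(*estimate, *truth):.1f} km; "
          f"correct at 100 km: {within_threshold(estimate, truth)}")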
Bio:
Jaeyoung Choi is a staff researcher at the International Computer Science Institute, a private research lab affiliated with the University of California, Berkeley, where he works in the multimedia group. His research focuses on combining visual, acoustic, and natural language processing techniques for large-scale multimedia retrieval, as well as on the online privacy issues arising from such retrieval technology. He holds a B.S. in Computer Science from KAIST and an M.S. in Computer Science from the University of California, Berkeley.