[Keynote Title] Making Pier Data Broader and Deeper: PDR Challenge and Virtual Mapping Party
Big data can be gathered on a daily basis, but it has issues with quality and variety. Deep data, on the other hand, is obtained under special conditions, such as in a lab or in the field with edge-heavy devices. It compensates for the above issues of big data and can also serve as training data for machine learning. Just as a pier's platform is supported by stakes, big data is supported by deep data; that is why we call the combination of big and deep data "pier data." By making pier data broader and deeper, it becomes much easier to understand what is happening in the real world and to realize Kaizen and innovation. We introduce two examples of activities for making pier data broader and deeper. First, we outline the "PDR Challenge in Warehouse Picking," a PDR (Pedestrian Dead Reckoning) performance competition that is very useful for gathering big data on behavior. Next, we discuss methodologies for gathering and utilizing pier data in the "Virtual Mapping Party," which enables map-content creation at any time and from anywhere to support navigation services for visually impaired individuals.
Takeshi Kurata received the B.E., M.E., and D.E. degrees from the University of Tsukuba, Japan. Since 1996, he has been working as a researcher at AIST, where he is currently the leader of the Service Sensing, Assimilation, and Modeling Research Group, Human Informatics Research Institute. He is also a professor in the Faculty of Engineering, Information and Systems, University of Tsukuba (Cooperative Graduate School Program). He received the FY2016 AIST President Award (Research). His other professional experiences are as follows: (1) 2003-2005, Visiting Scholar, HIT Lab, University of Washington; (2) 2011-2014, Doctoral co-supervisor, Joseph Fourier University (UJF-Grenoble 1), France; (3) 2012-, Member, ISO/IEC JTC 1/SC 24; (4) 2014-, Chair, PDR Benchmark Standardization Committee. His current research topics include indoor positioning, service research, assistive technology, IoT, wearable computing, mixed and augmented reality, and computer vision.
[Invited Talk Title] Deep Visual Understanding
Since the field of computer vision was founded in the 1960s, researchers have been dreaming of enabling a machine to describe what it sees. Recent developments in deep learning have significantly boosted computer vision research; it is said that computer vision can now match the capability of a five-year-old child's visual system. This capability has great potential to enable natural conversation between humans and computers or mobile devices through visual signals. In this talk, I will introduce recent techniques for visual understanding, including fine-grained image and video recognition, and describe how we deploy vision models to mobile devices.
Tao Mei is a Senior Researcher and Research Manager with Microsoft Research Asia. His current research interests include multimedia analysis and computer vision. He is leading a team working on image and video analysis, vision and language, and multimedia search. He has authored or co-authored over 150 papers with 11 best paper awards. He holds over 50 filed U.S. patents (with 20 granted) and has shipped a dozen inventions and technologies to Microsoft products and services. He is an Editorial Board Member of IEEE Transactions on Multimedia; ACM Transactions on Multimedia Computing, Communications, and Applications; and Pattern Recognition. He is the General Co-chair of IEEE ICME 2019 and the Program Co-chair of ACM Multimedia 2018, IEEE ICME 2015, and IEEE MMSP 2015. He is a Fellow of IAPR and a Distinguished Scientist of ACM.