Building the Next Generation Digital Maps Using a Fusion of 3D Computer Vision and Deep Learning
The past decade has seen a revolution in the ability of machines to automatically extract high-level, semantic meaning from raw sensory data, a revolution largely brought about by the field of Deep Learning. Combined with exponential growth in available compute and rapidly falling sensing costs, this makes for an explosive combination in the field of digital mapping. The 3D Vision team at Apple Maps capitalizes on these trends to push the boundaries of mapping. By going deep into state-of-the-art technologies such as Machine Learning, 3D Computer Vision, and large-scale sensing and compute, we are able to redefine what the modern digital map is and how it is built. In this talk, we will walk you through our end-to-end philosophy for large-scale mapping in the age of Deep Learning. With examples and visualizations, we take a look under the hood and explain how, over the last few years, our approach has led to a number of breakthrough products such as Look Around, Visual Localization for geo-referenced AR, and most recently the new 3D Apple Maps.
Martin Byröd is head of R&D for the 3D Vision team at Apple Maps, where he leads research and development on processing large-scale 3D sensor data (imagery and lidar) for mapping and beyond. Martin obtained his PhD in Mathematics from Lund University in June 2010, working in the areas of Applied Mathematics and Computer Vision. After graduation he joined the Swedish startup C3 Technologies, where he worked on large-scale image-based 3D reconstruction. He has spent the past 10 years at Apple, where he has led core R&D for products such as Flyover 3D, Look Around, and Visual Localization.