Imaging Systems Design for Mixed Reality Scenarios (Intel)
Abstract: Mixed Reality promises to usher in the next wave of experiences for consumer and enterprise segments. Enabling it requires combining multiple image capture modalities (RGB, depth, and beyond-visible cameras) with rolling or global shutters and differing field-of-view (FOV) requirements, both on a head-mounted device (HMD) and in the environment. This talk will tie the usage opportunities to the imaging requirements of a Mixed Reality system for consumer and enterprise markets in the coming years, and will dive into system design aspects: the placement, location, and types of image sensors; bandwidth requirements for tethered and wireless HMD scenarios; and the processing pipeline architecture, with its critical technology building blocks, that handles multiple camera sources distributed between an HMD and a host system. Example end usages such as obstacle avoidance and avatar navigation with Mixed Reality headsets will be used to provide a use-case decomposition from capture to application. Finally, the talk will address opportunities for the MIPI community to drive the next wave of experiences with advanced image capture modalities, and the challenges to be overcome to achieve them.
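The bandwidth sizing mentioned above can be illustrated with simple arithmetic: raw sensor bandwidth is resolution times frame rate times bits per pixel, summed over all cameras on the HMD. The sketch below uses hypothetical camera configurations chosen for illustration only; the actual sensor counts, resolutions, and bit depths for any given headset are not specified in this abstract.

```python
def raw_bandwidth_gbps(width, height, fps, bits_per_pixel):
    """Uncompressed sensor bandwidth in gigabits per second."""
    return width * height * fps * bits_per_pixel / 1e9

# Hypothetical multi-camera HMD: one RGB passthrough camera plus
# two monochrome tracking cameras (illustrative values, not from the talk).
cameras = [
    raw_bandwidth_gbps(1920, 1080, 30, 24),  # RGB, 8 bits per channel
    raw_bandwidth_gbps(640, 480, 90, 10),    # tracking cam, 10-bit raw
    raw_bandwidth_gbps(640, 480, 90, 10),
]
total = sum(cameras)
print(f"Aggregate raw bandwidth: {total:.2f} Gbps")
```

A total in the low single-digit Gbps range is comfortable for a tethered link but challenging for a wireless one, which is why compression and on-HMD preprocessing enter the pipeline architecture discussion.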
Prasanna Krishnaswamy is a Platform Architect in the Client Computing Group at Intel. His expertise is in imaging and computer vision systems architecture, tying imaging system designs to the algorithmic image processing and vision blocks on the SoC and platform to deliver end-to-end imaging and vision use cases for mobile and PC-like form factors. At Intel, he has contributed to the development of platform imaging solutions in the areas of depth sensing and array cameras. Prior to Intel, he managed software stack development at Aptina Imaging for its Image Signal Processor product line. Prasanna holds a Master's degree in Electrical Engineering from the University of Arizona and has more than ten patents granted or pending.