
MIPI Lifetime Achievement Award and MIPI Camera Working Group – A Conversation with Tom Kopet

Q: Congrats on your recent MIPI Lifetime Achievement award. Tell us how long you’ve been involved with MIPI, which working groups you participate in and a bit about your background.

Thank you! It’s still surprising to me that I’ve been involved with MIPI for over 15 years. My technical career began about 40 years ago in the aerospace and defense business, initially focused on DSP and image processing software and algorithms, and later moving to digital chip architectures and systems for applications such as high-speed IEEE floating-point arithmetic and real-time MPEG video compression. In 2003, I joined Micron Imaging, the precursor to Aptina Imaging, and image sensors have been my focus since then. Aptina was acquired by ON Semiconductor in 2014, and I’ve had the privilege of successively representing Micron, Aptina, and ON as a MIPI Camera Working Group (CWG) member since the early days of the group. I’ve served as CWG vice chair for the last four years or so. I also occasionally participate in the PHY working group and generally try to keep up with most MIPI technical activities.

Q: How has the MIPI Camera Serial Interface 2 (MIPI CSI-2℠) evolved since you’ve been involved in the MIPI Camera Working Group?

In 2003, the MIPI Alliance was formed to develop standard interface specifications for mobile phone architectures. At that time, handset manufacturers like Nokia, Sony Ericsson, Motorola and a few others were driving the industry. Companies had developed their own proprietary camera interface solutions, but there was a desire for greater standardization within the industry as it increasingly moved to adopt high-speed serial camera interfaces. Collaborating through MIPI helped jump-start development of the MIPI CSI-2, MIPI DSI℠, and MIPI D-PHY℠ specifications.

Over the years it’s been interesting to see MIPI and CSI-2 broaden to serve other application areas such as automotive, IoT and video surveillance. In addition, the CWG has ensured that new features preserve backward compatibility. This allows companies to implement new features when it makes sense for them to do so, without breaking interoperability with implementations of older versions of the specification.

Q: How is the CSI-2 v3.0 specification gaining traction in the market? What features make it especially beneficial to system designers? 

CSI-2 is one of several long-lived MIPI specifications, and it continues to evolve to add more features for system designers. The relatively recent changes in MIPI’s intellectual property rights (IPR) policy extending royalty-free treatment to non-mobile implementations also help expand the potential applications base. As mentioned before, CSI-2 specifications are backward compatible, so implementations conforming to an earlier version like CSI-2 v1.1 still conform to the latest version, CSI-2 v3.0. Industry adoption of the latest v3.0 features is still ramping up.

For example, in the automotive image sensor market, there are ADAS applications where captured images are used primarily for machine vision rather than human viewing; the images are processed and analyzed by a computer or electronic control unit (ECU). By contrast, mobile viewing applications typically deal with around 60 dB of dynamic range, whereas automotive sensors analyzing video to meet critical safety requirements may need a dynamic range of more than 120 dB. Thus, CSI-2 v3.0 includes a new RAW24 data type supporting up to 24-bit pixels.
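The link between dynamic range in dB and pixel bit depth is simple arithmetic: each bit of linear pixel depth contributes roughly 6.02 dB (20·log₁₀2). A quick sketch of that relationship shows why 120 dB scenes push past the older RAW formats and toward RAW24:

```python
import math

def bits_for_dynamic_range(db: float) -> int:
    """Minimum linear pixel bit depth needed to cover a dynamic range in dB.

    Each bit of pixel depth adds 20*log10(2) ~= 6.02 dB of range.
    """
    return math.ceil(db / (20 * math.log10(2)))

print(bits_for_dynamic_range(60))   # mobile-class scene -> 10 bits
print(bits_for_dynamic_range(120))  # automotive HDR scene -> 20 bits
```

By this rule of thumb, a 120 dB scene needs about 20 bits per pixel, which overflows the older RAW16/RAW20 data types' headroom in some pipelines; RAW24 covers up to 24 bits, or roughly 144 dB.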

Another interesting new CSI-2 v3.0 feature is “Smart Region of Interest” (SROI), which is designed to improve image transfer efficiency by enabling pixels and other information describing one or more image “regions of interest” to be efficiently sent to a processor, also reducing interface bandwidth requirements. Only information about various image segments is transferred, as opposed to an entire image frame. Such regions of interest may be statically configured or identified dynamically, for example, by a “smart” image sensor operating at “the edge” of an IoT or automotive image acquisition system.
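The bandwidth argument for SROI is easy to quantify. This is not the SROI protocol itself, just a back-of-the-envelope sketch with hypothetical frame and region sizes, comparing the payload of a full RAW10 frame against two small regions of interest:

```python
def frame_bytes(width: int, height: int, bits_per_pixel: int) -> int:
    """Raw payload size of a pixel array, ignoring packet headers."""
    return width * height * bits_per_pixel // 8

# Hypothetical 12.6 MP sensor streaming RAW10 pixels.
full_frame = frame_bytes(4096, 3072, 10)

# Two hypothetical regions of interest flagged by a "smart" sensor.
rois = [(640, 480), (256, 256)]
roi_total = sum(frame_bytes(w, h, 10) for w, h in rois)

print(roi_total / full_frame)  # ~0.03: about 3% of full-frame bandwidth
```

Even with generous region sizes, transferring only the regions cuts the pixel payload by more than an order of magnitude, which is the efficiency gain the feature targets.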

Yet another interesting new CSI-2 v3.0 feature offers more implementation options for laptop and tablet camera interfaces. A laptop camera, for example, may be connected to a motherboard via USB. However, the new “Unified Serial Link” (USL) feature in CSI-2 v3.0 enables a potentially more cost-effective, lower-power way to make that connection. Similar to USB, USL manages the camera control channel in-band with high-speed video pixel streaming, unlike legacy CSI-2 interfaces in which the camera control channel must be implemented using a separate low-speed interface like I2C. Furthermore, when used in conjunction with the new “Alternate Lower Power” (ALP) features in D-PHY v2.5, USL can also address the technical requirements of “longer reach” camera interfaces needed by some IoT applications.

These are just a few examples of how new features in CSI-2 v3.0 can be used in applications beyond mobile.

Q: The CSI-2 v4.0 specification is due out later this year. Can you give us a sneak peek at what’s coming in that version?

One major focus is support for MIPI A-PHY℠, MIPI’s first PHY specification specifically targeting long-reach automotive SerDes interfaces. For example, MIPI CWG is developing an adaptation layer specification describing how to convert the standard CSI-2 protocol into a form that can be transported over an A-PHY link. The CWG is also developing extensions to the basic CSI-2 protocol for supporting the transport of ISO 26262 functional safety data in the same packets as image pixel data.

Another special focus is the v4.0 “Always-On Sentinel Conduit” (AOSC) feature for efficiently supporting real-time pixel transport over the two-wire MIPI I3C® interface, addressing an emerging class of ultra-low-power “always-on” imaging applications for mobile and IoT platforms. The low-complexity AOSC protocol is intended to significantly expand the range of applications the CSI-2 specification can optimally address.

Q: The CSI-2 specification has been implemented in a broad array of platforms. In your view, what are some of the most interesting implementations you've seen?

As we’ve discussed, the CSI-2 specification has historically been implemented on mobile platforms but more recently has been used with cameras in non-mobile applications such as automotive and video surveillance. For example, one application I didn't quite expect was that CSI-2 would be adopted for radar data transport by substituting radar receiver samples for image pixels. Basically, the multi-chirp radar data is structured like an image frame, but the pixels are radar receiver samples. The CSI-2 interface has proven itself to be very flexible and adaptable to various platforms beyond the original use cases. I find that very interesting and look forward to what the future holds for the specification.
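The radar mapping Tom describes can be pictured as a simple reshaping exercise. This is a toy sketch with made-up burst dimensions, not any vendor's actual mapping: each chirp's receiver samples become one line of "pixels", so a burst of chirps looks to the transport layer like an ordinary image frame.

```python
import random

# Hypothetical radar burst: 128 chirps, 512 receiver samples per chirp.
NUM_CHIRPS, SAMPLES_PER_CHIRP = 128, 512
burst = [
    [random.gauss(0.0, 1.0) for _ in range(SAMPLES_PER_CHIRP)]
    for _ in range(NUM_CHIRPS)
]

# Viewed as a CSI-2 "frame": each chirp is one line, each receiver
# sample stands in for a pixel, so an unmodified image transport
# path can carry the data.
frame_height = len(burst)       # 128 "lines" (chirps)
frame_width = len(burst[0])     # 512 "pixels" per line (samples)
print(frame_height, frame_width)
```

Nothing in the frame structure cares that the values are radar samples rather than light intensities, which is exactly why the reuse works.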

Tom Kopet is vice chairman of the MIPI Camera Working Group and is a senior principal systems design engineer in the ON Semiconductor Intelligent Sensing Group.

Thank you, Tom, for your work with MIPI Alliance!