This was the second year of the Image Sensors Auto conference, held again in the lovely city of Brussels. Known for its waffles, fries, chocolate, and beers, Brussels is a great city to visit. There was little time for indulging though, as the two-day conference had a busy schedule packed with great talks and networking moments in the mornings and evenings.

The show grew from 100 attendees last year to over 150 this year, a testament to the quality of the Image Sensors conferences, and proof that cameras in automotive are hot: they are a key component in making our vehicles safer and, ultimately, in letting them drive themselves.


The scope of the conference also grew. Last year, there was a clear focus on the image sensor itself. This year, there were more talks about system architectures, ADAS applications, and computer vision techniques and applications.

Presentations came from car manufacturers such as Volvo, Peugeot/Citroën (PSA), and Jaguar Land Rover; automotive camera manufacturers such as Magna, Valeo, and Autoliv; and of course the image sensor manufacturers such as OmniVision, ON Semiconductor (which acquired Aptina), ST, and Melexis. Each vendor gave a glimpse into their view of the market, the challenges they see, and their research and development directions.

Other talks covered the ISO 26262 safety standard, camera performance and testing, and even fixing broken windscreens, which has become more complicated due to the glued-on forward-looking cameras.

Presentation highlights

Volvo’s automated driving program is probably best known for their epic video in which Jean-Claude Van Damme performs a split between two trucks driving in reverse. In their talk, Volvo played another video that presents their vision of semi-autonomous driving, allowing the driver to focus on something other than the road, such as using a mobile phone, writing on a notepad, or eating. In 2017, Volvo will put 100 of these self-driving cars on selected roads in Gothenburg, Sweden.

Volvo said their future cars will have 10 cameras. Already, their current prototype vehicle includes 4 surround-view cameras, 3 forward-looking cameras, and a driver-monitoring camera. Just like PSA and camera manufacturer Valeo, they see the cameras used for surround view becoming smarter, which allows these same cameras to be reused for autonomous driving tasks.

Jaguar showed interesting augmented reality concepts that project a wealth of additional information onto the front windshield. One of their concepts puts displays on the interior of the car, on the pillars that typically block the view, effectively letting the driver see through the chassis of the car.

Autoliv showed an interesting application of a pedestrian detection algorithm: when the camera detects a pedestrian, she is illuminated with a “spotlight”, making her stand out to everyone on the road and signalling to the pedestrian that she has been seen.
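Autoliv didn’t disclose the detector itself, but to give a flavor of the processing involved, here’s a minimal sketch using the classic HOG-plus-linear-SVM pedestrian detector that ships with OpenCV. It is a generic baseline, not Autoliv’s algorithm, and street_scene.png is a hypothetical stand-in for a camera frame:

```python
import cv2

# Classic HOG + linear-SVM people detector bundled with OpenCV; a generic
# baseline, not Autoliv's production algorithm.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("street_scene.png")  # hypothetical street-level photo
assert frame is not None, "supply any street-level photo"

# Scan the frame at multiple scales for pedestrian-shaped regions.
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)

# Each box could steer a headlight "spotlight" toward the detected pedestrian.
for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (int(x), int(y)), (int(x + w), int(y + h)), (0, 255, 0), 2)
cv2.imwrite("street_scene_annotated.png", frame)
```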

Magna spoke about picture quality: how they measure it, even in complex cases such as wide-angle lenses; the trade-offs involved between resolution and color; and which parameters they measure and optimize for. An automotive camera is a complicated device, and picture quality remains a key factor in the selection process.
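Magna’s exact metrics weren’t detailed, but to give one example of the kind of objective measurement involved: a common single-number sharpness proxy is the variance of the Laplacian, which can be compared between the center and corners of a wide-angle frame, where lens softness typically shows up first. A minimal sketch (not Magna’s method; test_chart.png is a hypothetical chart capture):

```python
import cv2

def sharpness(image_gray):
    # Variance of the Laplacian: higher values mean more fine detail/edges.
    return cv2.Laplacian(image_gray, cv2.CV_64F).var()

frame = cv2.imread("test_chart.png", cv2.IMREAD_GRAYSCALE)  # hypothetical capture
assert frame is not None, "supply a test-chart capture"

# Lens softness usually hits the corners of a wide-angle frame first.
h, w = frame.shape
print("center:", sharpness(frame[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]))
print("corner:", sharpness(frame[: h // 4, : w // 4]))
```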

Valeo showed the impact that some of the image-processing algorithms inside the ISP, which translates raw sensor data into good-looking images, can have on the computer vision algorithms downstream. As a solution, they propose using two ISPs: one tuned for our human eyes, and one for the vision algorithms.
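As a minimal illustration of the effect (a sketch of the general phenomenon, assuming OpenCV and NumPy, not Valeo’s experiment): an unsharp-masking step that makes a frame look crisper to a human also amplifies sensor noise, which shows up as spurious edges in a simple downstream detector.

```python
import cv2
import numpy as np

# Synthetic stand-in for a sensor frame: a smooth gradient plus sensor noise.
rng = np.random.default_rng(0)
gradient = np.tile(np.linspace(0, 255, 256), (256, 1))
frame = np.clip(gradient + rng.normal(0, 8, (256, 256)), 0, 255).astype(np.uint8)

# "Viewing" ISP step: unsharp masking makes the image look crisper to humans...
blurred = cv2.GaussianBlur(frame, (0, 0), sigmaX=3)
sharpened = cv2.addWeighted(frame, 1.5, blurred, -0.5, 0)

# ...but it also amplifies noise, producing spurious responses in a simple
# edge detector that would be better served by the unsharpened data.
print("edge pixels before sharpening:", np.count_nonzero(cv2.Canny(frame, 50, 150)))
print("edge pixels after sharpening: ", np.count_nonzero(cv2.Canny(sharpened, 50, 150)))
```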

Our talk centered on four key questions:

1. How will the cameras be connected?
2. Where will the visual processing happen?
3. What kind of processing is computer vision?
4. What kind of silicon can do the visual processing?

Questions (1) and (2) involve system design trade-offs, while (3) and (4) covered the algorithms involved and how to implement them efficiently in silicon.

Recurring themes at the show

Cameras save lives

Many speakers highlighted the importance of making cars safer. According to the World Health Organization, 1.2 million traffic-related deaths occur each year, and about 10 times as many people are injured. 91% of those accidents are caused by human error. Giving the car more smart sensors, i.e. cameras combined with embedded vision processing, can greatly reduce the number of accidents.

It’s not about where we’re going; it’s how we get there

Everyone agreed that fully autonomous driving will happen; exactly when you’ll be able to buy the first such car and use it to go wherever you want is unclear. According to IAV’s talk, Audi plans on 2017, Google on 2018, Nissan on 2020, Intel on 2022, Tesla on 2023, and Continental and Daimler say it’ll be 2025. An IEEE committee has predicted that by 2040, 75% of cars on the road will be autonomous.

There’s no doubt cars will eventually become robots that simply take us places. The more interesting question is what cars will look like between today and then. All the car manufacturers are focusing on which camera-enabled safety features to introduce in their next car models, rather than on what cars will look like in 10 years. Several talks referred to the 6 levels of assisted driving defined by the International Organization of Motor Vehicle Manufacturers (OICA). At Level 0 the vehicle is operated entirely by the driver, and at Level 5 entirely by the electronics; Levels 1 through 4 describe the stages in between, and that’s where the car manufacturers are focusing today.

It’s all about system-level optimization

Many talks touched on this aspect. What good is a 2-megapixel sensor if the lens can’t resolve that much detail? What good is an ISP that sharpens images if this hurts the computer vision algorithms downstream? And why not run the computer vision algorithms inside the camera, so that video compression on the link from the camera to the ECU can’t hurt them? To get the best-performing system, you need to look at all the system aspects: the lens, image sensor, ISP, viewing algorithms, and video analytics processing. And to get the most out of the system, you need flexible components that can adapt to different integration scenarios.
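The compression point is easy to demonstrate. The sketch below (a generic illustration, not a result shown at the conference) round-trips a synthetic, detail-rich frame through aggressive JPEG coding, the kind of loss a compressed camera-to-ECU link introduces, and counts how many ORB features survive:

```python
import cv2
import numpy as np

# Synthetic, deliberately detail-rich frame standing in for a camera image.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)

# Simulate a lossy camera-to-ECU link with aggressive JPEG coding.
ok, packet = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 20])
decoded = cv2.imdecode(packet, cv2.IMREAD_GRAYSCALE)

# Count how many ORB features survive the compression round trip.
orb = cv2.ORB_create()
print("features at the camera:", len(orb.detect(frame, None)))
print("features at the ECU:   ", len(orb.detect(decoded, None)))
```

Running the vision algorithms inside the camera, before the encoder, sidesteps this loss entirely.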

Summary

Advanced driver assistance systems (ADAS) and autonomous driving are huge topics, with a tremendous impact on the world and on human life. Our integrated processor, computer vision, and video coding technology provides a ready-to-integrate, flexible solution for automotive cameras, and matches the requirements we saw highlighted at the show very well. We look forward to continuing to work with our customers and partners to jointly bring these advanced automotive safety solutions to market. All in all, it was a great conference again, and we’re very much looking forward to meeting and interacting with the community at next year’s event, which the organizers told us will again be significantly bigger.