FPGA chip maker Xilinx, Inc. and embedded automotive AI solution provider Motovis have partnered to tap into the automotive forward camera market.
The world’s largest FPGA maker said today that it is collaborating with Motovis to pair its Xilinx Automotive (XA) Zynq system-on-chip (SoC) platform with Motovis’ convolutional neural network (CNN) IP in a forward camera solution for vehicles.
Forward camera systems are a critical building block of advanced driver assistance systems (ADAS), enabling safety-critical functions such as lane-keeping assistance (LKA), automatic emergency braking (AEB), and adaptive cruise control (ACC).
Now available, the embedded solution uses convolutional neural networks to support the parameters required by the European New Car Assessment Programme (NCAP) 2022 protocols, delivering a cost-effective combination of low-latency image processing, flexibility, and scalability.
“This collaboration is a significant milestone for the forward camera market as it will allow automotive OEMs to innovate faster,” said Ian Riches, vice president for the Global Automotive Practice at Strategy Analytics.
“The forward camera market has tremendous growth opportunity, where we anticipate almost 20% year-on-year volume growth over 2020 to 2025. Together, Xilinx and Motovis are delivering a highly optimized hardware and software solution that will greatly serve the needs of automotive OEMs, especially as new standards emerge and requirements continue to grow.”
In-Car Forward Camera Market to Surge by 20%
The forward camera solution scales across the 28nm and 16nm XA Zynq SoC families using Motovis’ CNN IP. The combination of optimized hardware and software partitioning with customizable CNN-specific engines that host Motovis’ deep learning networks yields a cost-effective offering at multiple performance levels and price points.
The solution supports image resolutions of up to eight megapixels. For the first time, OEMs and Tier 1 suppliers can layer their own feature algorithms on top of Motovis’ perception stack to differentiate and future-proof their designs.
“We are extremely pleased to unveil this new initiative with Xilinx and to bring to market our CNN forward camera solution. Customers designing systems enabled with AEB and LKA functionality need efficient neural network processing within an SoC that gives them flexibility to implement future features easily,” said Dr. Zhenghua Yu, CEO, Motovis.
“With Motovis’ customizable deep learning networks and the Xilinx Zynq platform’s ability to host CNN-specific engines that provide unmatched efficiency and optimization, we’re helping to future-proof the design to meet customer needs.”
Regulatory and market forces continue to drive adoption of forward camera systems as automakers work to satisfy global government mandates and consumer safety programs, including the European Commission’s General Safety Regulation, the National Highway Traffic Safety Administration, and NCAP.
All three have issued formal mandates or strong guidance on automakers’ implementation of LKA and AEB in new vehicles produced from 2020 through 2025 and beyond.
“Expanding our XA offering with a comprehensive solution for the forward camera market puts a cost-optimized, high-performance solution in the hands of our customers. We’re thrilled to bring this to life and drive the industry forward,” said Willard Tu, senior director of Automotive, Xilinx. “Motovis’ expertise in embedded deep learning and how they’ve optimized neural networks to handle the immense challenges of forward camera perception puts us both in a unique position to gain market share, all while accelerating our OEM customers’ time to market.”