OpenVX™ is an open, royalty-free standard for cross-platform acceleration of computer vision applications. OpenVX enables performance- and power-optimized computer vision processing, which is especially important in embedded and real-time use cases such as face, body, and gesture tracking; smart video surveillance; advanced driver assistance systems (ADAS); object and scene reconstruction; augmented reality; visual inspection; robotics; and more.
The OpenVX 1.3.1 specification was released on February 2, 2022.
OpenVX extends easily to every low-power domain with reusable vision acceleration functions. This is a key advantage that promotes wide adoption of OpenVX, and for developers it delivers the following:
OpenVX allows graph-level processing optimizations, which let implementations fuse nodes when possible to achieve better overall performance. The graph also allows for automatic graph-level memory optimizations to achieve a low memory footprint. OpenVX graph-optimized workloads can be deployed on a wide range of computer hardware, including small embedded CPUs, ASICs, APUs, discrete GPUs, and heterogeneous servers.
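The graph model described above can be sketched with the standard OpenVX C API. This is a minimal illustration, assuming an OpenVX 1.x implementation and headers are installed; the image size and choice of kernels are arbitrary. The key point is that intermediate results are declared as *virtual* images, so the implementation is free to fuse the three nodes or keep the intermediates on-chip when `vxVerifyGraph()` optimizes the graph.

```c
/* Minimal OpenVX graph sketch: Gaussian blur -> Sobel -> gradient
 * magnitude. Assumes an installed OpenVX 1.x implementation; the
 * 640x480 size and the kernel choices are illustrative only. */
#include <VX/vx.h>

int main(void)
{
    vx_context context = vxCreateContext();
    vx_graph   graph   = vxCreateGraph(context);

    vx_image input  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
    vx_image output = vxCreateImage(context, 640, 480, VX_DF_IMAGE_S16);

    /* Virtual images are graph-internal: the implementation may fuse
     * producer/consumer nodes and never materialize these buffers. */
    vx_image blurred = vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_VIRT);
    vx_image grad_x  = vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_VIRT);
    vx_image grad_y  = vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_VIRT);

    vxGaussian3x3Node(graph, input, blurred);
    vxSobel3x3Node(graph, blurred, grad_x, grad_y);
    vxMagnitudeNode(graph, grad_x, grad_y, output);

    /* Verification is where graph-level fusion and memory
     * optimization happen; processing then runs the optimized graph. */
    if (vxVerifyGraph(graph) == VX_SUCCESS)
        vxProcessGraph(graph);

    vxReleaseGraph(&graph);
    vxReleaseContext(&context);
    return 0;
}
```

Because the optimization happens at verification time rather than call-by-call, the same source graph can map efficiently onto the different accelerator classes listed above.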
Implementers may use OpenCL or compute shaders to implement OpenVX nodes on programmable processors. Developers can use OpenVX to easily connect those nodes into a graph. The OpenVX graph enables implementers to optimize execution across diverse hardware architectures. OpenVX enables the graph to be extended to include hardware architectures that don’t support programmable APIs.
Now that the OpenVX API has grown to an extensive set of functions, there is interest in creating implementations that target a set of features rather than covering the entire OpenVX API. In order to offer this option while still managing the API to prevent excessive fragmentation regarding which implementations offer which features, the OpenVX 1.3 specification defines a collection of feature sets that form coherent and useful subsets of the OpenVX API. These feature sets include the following:
Along with the release of OpenVX 1.3, the pipelining, neural network, and import kernel extensions are being updated. For the list of all extensions and features, go to the OpenVX registry.
Khronos welcomes any company creating hardware or systems to implement and ship the OpenVX API. The OpenVX specification is free for anyone to download and implement. If you want to use the OpenVX name or logo on your implementation and enjoy the protection of the Khronos Intellectual Property Framework, you can become an OpenVX Adopter.
The OpenVX 1.0 specification and conformance tests were released in 2014. This was followed by the version 1.0.1 specification and open source sample implementation in 2015, version 1.1 at the 2016 Embedded Vision Summit, and version 1.2 at the 2017 Embedded Vision Summit.
To enable deployment flexibility while avoiding fragmentation, OpenVX 1.3 defines a number of feature sets that are targeted at common embedded use cases. Hardware vendors can include one or more complete feature sets in their implementations to meet the needs of their customers and be fully conformant. The flexibility of OpenVX enables deployment on a diverse range of accelerator architectures, and feature sets are expected to dramatically increase the breadth and diversity of available OpenVX implementations. The defined OpenVX 1.3 feature sets include:
| | OpenCV | OpenVX |
|---|---|---|
| Implementation | Community-driven open source library | Callable API implemented, optimized, and shipped by hardware vendors |
| Scope | 100s of imaging and vision functions; multiple camera APIs/interfaces | Tight focus on dozens of core hardware-accelerated functions plus extensions and accelerated custom nodes; uses external camera drivers |
| Conformance | Extensive OpenCV Test Suite but no formal Adopters program | Implementations must pass Khronos Conformance Test Suite to use trademark |
| IP Protection | None. Source code licensed under BSD; some modules require royalties/licensing | Protected under Khronos IP Framework: Khronos members agree not to assert patents against the API when used in conformant implementations |
| Acceleration | OpenCV 3.0 Transparent API (T-API) enables function offload to OpenCL devices | Implementation free to use any underlying API such as OpenCL; can use OpenCL for custom nodes |
| Efficiency | OpenCV 4.0 G-API graph model for some filters, arithmetic/binary operations, and well-defined geometrical transformations | Graph-based execution of all nodes; optimizable computation and data transfer |
| Inferencing | Deep Neural Network module to construct networks from layers for forward-pass computation only; import from ONNX, TensorFlow, Torch, Caffe | Neural network layers and operations represented directly in the OpenVX graph; NNEF direct import, ONNX through NNEF converter |
“As a working group, we’ve invested a lot in creating an extensive set of functions that can meet all the needs of OpenVX users. There has been interest in creating implementations that target only a subset of the features that are specific to and necessary for the application. We’ve built OpenVX 1.3 with flexibility in mind, to offer a menu of options for users who want to stay conformant but don’t need the entire specification for their application. We believe this work increases performance portability and scalability of OpenVX across vendors, enabling greater ease of implementation and promoting adoption of the standard while still enabling interoperability.”
“AMD has always supported open, royalty-free standards for HPC and Machine Learning; we believe this will benefit the research community and the industry as a whole. AMD was the first to open source a highly optimized implementation of OpenVX in the MIVisionX Toolkit as part of the ROCm Ecosystem, which is being used by many in industry and academia. OpenVX 1.3, with extensive support for computer vision and machine learning, will help keep up the momentum in the industry.”
“Basemark is happy to collaborate with the OpenVX workgroup in development of the API. We see OpenVX as one of the key APIs for performant and safety-critical machine vision applications that can actually be deployed in production systems. We support OpenVX in Rocksolid, our compute and graphics engine, and as part of our SoC performance testing software such as the Basemark Automotive Testing Suite.”
“As a leader in Vision DSPs being used in the Mobile, AR/VR, Automotive, and Surveillance markets, we would like to congratulate the OpenVX working group on releasing the latest version of the standard. We are excited to be part of the OpenVX working group.”
“We are excited to be a partner to Khronos in developing the CTS and samples for Version 1.3 and porting it to Raspberry Pi. This will provide guidance to developers in the ecosystem and enable them to develop a wider range of applications more quickly using a smaller memory footprint while achieving better performance. This is an exciting next step in the march towards more capable computer vision and machine learning systems and MulticoreWare is proud to be a leader in this ecosystem.”
“ICURO has been collaborating with AMD in proliferating computer vision machine learning models. ICURO welcomes and supports the adoption of OpenVX 1.3 for innovative business use cases across multiple industries. Our artificial intelligence (AI) lab in Silicon Valley has accelerated the development and deployment of full-stack robotic vision applications powered by AMD edge processors and OpenVX stack. We are delighted to be a strategic partner of AMD in delivering high-value, high-return AI solutions for retail, industry 4.0, warehouse, logistics, healthcare, and several other industries.”
“Raspberry Pi is excited to bring the Khronos OpenVX 1.3 API to our line of single-board computers. Many of the most exciting commercial and hobbyist applications of our products involve computer vision, and we hope that the availability of OpenVX will help lower barriers to entry for newcomers to the field.”
“Texas Instruments reinforces our support of OpenVX and its benefits to customers developing ADAS-to-autonomous applications for the automotive market. The OpenVX standard helps us to offer an easy-to-use SDK platform for customers developing embedded applications on multi-core, heterogeneous architectures such as TI’s Driver Assist (TDAx) SOCs.”