A New, Open-standards-based, Open-source Programming Model for All Accelerators As demand for AI grows, developers are attempting to squeeze more and more performance from accelerators. Ideally, developers would choose the accelerators best suited to their applications. Unfortunately, many developers today are locked into limited hardware choices because they use proprietary programming models such as NVIDIA’s CUDA. The oneAPI project was launched to create an open specification and open-source software that enable developers to write code in standard C++ and deploy it to GPUs from multiple vendors. oneAPI is an open-source ecosystem based on the Khronos open-standard SYCL, with libraries that enable AI and HPC applications. oneAPI-enabled software is currently deployed on numerous supercomputers, with plans to extend into other market segments. oneAPI is evolving rapidly, and the whole community of hardware and software developers is invited to contribute. In this presentation, Charles Macfarlane, Chief Business Officer at Codeplay Software, introduces how oneAPI enables developers to write multi-target software and highlights opportunities for developers to contribute to making oneAPI available for all accelerators.
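To make the idea concrete, here is a minimal SYCL vector-addition sketch of the kind of standard C++ code oneAPI targets. The kernel and variable names are ours, not from the talk; the point is that the same source can be compiled for accelerators from different vendors, with the runtime picking whatever device is available.

```cpp
// Minimal SYCL 2020 sketch: one source, many possible accelerators.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
  const size_t n = 1024;
  std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

  // The default selector picks whatever accelerator the runtime finds
  // (GPU, CPU, or other device), without vendor-specific code here.
  sycl::queue q{sycl::default_selector_v};

  {
    sycl::buffer<float> buf_a(a.data(), sycl::range<1>(n));
    sycl::buffer<float> buf_b(b.data(), sycl::range<1>(n));
    sycl::buffer<float> buf_c(c.data(), sycl::range<1>(n));

    q.submit([&](sycl::handler& h) {
      sycl::accessor va(buf_a, h, sycl::read_only);
      sycl::accessor vb(buf_b, h, sycl::read_only);
      sycl::accessor vc(buf_c, h, sycl::write_only);
      h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
        vc[i] = va[i] + vb[i];  // element-wise add runs on the selected device
      });
    });
  }  // buffer destruction synchronizes results back to the host

  std::cout << "c[0] = " << c[0] << "\n";  // expect 3
}
```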
Efficiently Map AI and Vision Applications onto Multi-core AI Processors Using a Parallel Processing Framework Next-generation AI and computer vision applications for autonomous vehicles, cameras, drones and robots require higher-than-ever computing power. Often, the most efficient way to deliver high performance (especially in cost- and power-constrained applications) is to use multi-core processors. But developers must then map their applications efficiently onto the multiple cores, which can be difficult. To address this challenge and streamline application development, CEVA has introduced the Architecture Planner tool as a new element in CEVA’s comprehensive AI SDK. In this talk, Rami Drucker, Machine Learning Software Architect at CEVA, shows how the Architecture Planner tool analyzes the network model and the processor configuration (number of cores, memory sizes), then automatically generates an efficient mapping of the workload across the cores. He explains key techniques used by the tool, including symmetrical and asymmetrical multi-processing, partitioning by sub-graphs, batch partitioning and pipeline partitioning.
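As a rough illustration of pipeline partitioning, one of the techniques named above, the sketch below splits a model into two hypothetical sub-graph stages connected by a queue, with each thread standing in for work pinned to one core. None of this is CEVA's tool or API; it only shows the general pattern of frames streaming through per-core stages.

```cpp
// Pipeline-partitioning sketch: sub-graph per "core", frames flow through.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <optional>
#include <queue>
#include <thread>
#include <vector>

struct Frame { int id; };

// A tiny thread-safe queue connecting adjacent pipeline stages.
template <typename T>
class Channel {
  std::queue<T> q_;
  std::mutex m_;
  std::condition_variable cv_;
  bool closed_ = false;
 public:
  void push(T v) {
    { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
    cv_.notify_one();
  }
  void close() {
    { std::lock_guard<std::mutex> lk(m_); closed_ = true; }
    cv_.notify_all();
  }
  std::optional<T> pop() {
    std::unique_lock<std::mutex> lk(m_);
    cv_.wait(lk, [&] { return !q_.empty() || closed_; });
    if (q_.empty()) return std::nullopt;  // closed and drained
    T v = std::move(q_.front());
    q_.pop();
    return v;
  }
};

int main() {
  // Pretend a planner assigned sub-graph 0 to core 0 and sub-graph 1 to core 1.
  auto stage0 = [](Frame f) { /* e.g. backbone layers */ return f; };
  auto stage1 = [](Frame f) { /* e.g. detection head  */ return f; };

  Channel<Frame> c01, c1out;

  std::thread core0([&] {                      // "core 0" runs sub-graph 0
    for (int i = 0; i < 4; ++i) c01.push(stage0(Frame{i}));
    c01.close();
  });
  std::thread core1([&] {                      // "core 1" runs sub-graph 1
    while (auto f = c01.pop()) c1out.push(stage1(*f));
    c1out.close();
  });

  while (auto f = c1out.pop()) std::cout << "frame " << f->id << " done\n";
  core0.join();
  core1.join();
}
```

Once the pipeline fills, both stages work on different frames at the same time, which is why this style of partitioning raises throughput on multi-core processors.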
Intensive In-camera AI Vision Processing In this talk, you’ll hear about the new Hailo-15 AI vision processor family, specifically designed to empower smart cameras with unprecedented AI capabilities. Yaniv Iarovici, Head of Business Development at Hailo, explains how the Hailo-15 family of processors, with performance ranging from 7 to 20 tera operations per second (TOPS), allows cameras to carry out significantly more video analytics at once, running several AI tasks in parallel. Iarovici shows how Hailo-15 processors make it possible to perform faster detection at high resolution, enabling identification of smaller and more distant objects with higher accuracy and fewer false alarms. He also explores how the elevated AI performance of these processors can be utilized for video enhancement, such as improved low-light video quality, video stabilization and high dynamic range imaging. And he shows how Hailo is pioneering the use of vision transformers for real-time object detection in cameras.
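The "several AI tasks in parallel" pattern is easy to picture with a sketch. The code below fans each frame out to three placeholder analytics tasks that run concurrently; it is our own illustration using host threads, not Hailo's SDK, and on a processor like the Hailo-15 each task would instead be scheduled onto dedicated neural-network cores.

```cpp
// Fan-out sketch: one frame, several concurrent analytics tasks.
#include <future>
#include <iostream>
#include <string>

struct Frame { int id; };

// Placeholder tasks standing in for real neural-network workloads.
std::string detect_objects(const Frame& f)    { return "objects@"  + std::to_string(f.id); }
std::string enhance_low_light(const Frame& f) { return "enhanced@" + std::to_string(f.id); }
std::string stabilize(const Frame& f)         { return "stable@"   + std::to_string(f.id); }

int main() {
  for (int i = 0; i < 3; ++i) {
    Frame f{i};
    // Launch the tasks concurrently; results are gathered per frame.
    auto det = std::async(std::launch::async, detect_objects, f);
    auto enh = std::async(std::launch::async, enhance_low_light, f);
    auto stb = std::async(std::launch::async, stabilize, f);
    std::cout << det.get() << ", " << enh.get() << ", " << stb.get() << "\n";
  }
}
```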
Using a Neural Processor for Always-sensing Cameras Always-sensing cameras are becoming a common AI-enabled feature of consumer devices, much like always-listening assistants such as Siri or Google Assistant. They can enable a more natural and seamless user experience, such as automatically locking and unlocking a device based on whether its owner is looking at the screen or within view of the camera. But the complexities of cameras, and the quantity and richness of the data they produce, mean that far more processing is required for an always-sensing camera than for listening for a wake word. Without careful attention to neural processing unit (NPU) design, an always-sensing camera can wind up consuming excessive power or performing poorly, either of which can lead to an unsatisfactory user experience. In this presentation, Sharad Chole, Chief Scientist and Co-founder of Expedera, explores the architecture of a neural processor in the image signal path, discusses use cases, and provides tips for how OEMs, chipmakers, and system architects can successfully evaluate, specify, and deploy an NPU in an always-on camera.
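One common way to keep always-sensing power in check, and a plausible reading of the design concerns above, is a two-stage gate: a tiny model screens every frame, and the expensive model wakes only when the gate fires, analogous to a wake word in always-listening audio. The sketch below is our own illustration of that pattern, not Expedera's architecture, and every function in it is hypothetical.

```cpp
// Two-stage always-sensing sketch: cheap gate first, heavy model rarely.
#include <cstdint>
#include <iostream>
#include <vector>

struct LowResFrame { std::vector<uint8_t> pixels; };  // e.g. subsampled luma from the ISP

// Hypothetical stage 1: a cheap presence check that runs on every frame.
bool person_maybe_present(const LowResFrame& f) {
  long sum = 0;
  for (uint8_t p : f.pixels) sum += p;            // stand-in for a tiny CNN
  return sum > static_cast<long>(f.pixels.size()) * 96;
}

// Hypothetical stage 2: the expensive model, invoked only on demand.
bool owner_is_looking(const LowResFrame&) { return true; }  // placeholder

int main() {
  for (int t = 0; t < 5; ++t) {
    LowResFrame f{std::vector<uint8_t>(64 * 64, t % 2 ? 200 : 10)};
    if (!person_maybe_present(f)) continue;       // stay on the low-power path
    if (owner_is_looking(f))                      // heavy model runs rarely
      std::cout << "frame " << t << ": unlock\n";
  }
}
```

The design point is that average power is dominated by the per-frame gate, so the NPU's efficiency on small, continuous workloads matters as much as its peak throughput.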
Deci Deep Learning Development Platform (Best Edge AI Developer Tool) Deci’s Deep Learning Development Platform is the 2023 Edge AI and Vision Product of the Year Award winner in the Edge AI Developer Tools category. Deci’s deep learning development platform empowers AI developers with a new way to develop production-grade computer vision models. With Deci, teams simplify and accelerate the development process with advanced tools to build, train, optimize, and deploy highly accurate and efficient models to any environment, including edge devices. Models developed with Deci deliver a combination of high accuracy, speed and compute efficiency, allowing teams to unlock new applications on resource-constrained edge devices and to migrate workloads from the cloud to the edge. Deci also enables teams to shorten development time and lower operational costs by up to 80%. Deci’s platform is powered by its proprietary Automated Neural Architecture Construction (AutoNAC) technology, an algorithmic optimization engine that generates best-in-class deep learning model architectures for any vision-related task. With AutoNAC, teams can easily build custom, hardware-aware, production-grade models that outperform the state of the art.
Please see here for more information on Deci’s Deep Learning Development Platform. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.