Edge AI and Vision Insights: October 11, 2023 Edition

TOOLSETS FOR EFFICIENT HETEROGENEOUS PROCESSING

A New, Open-standards-based, Open-source Programming Model for All Accelerators (Codeplay Software)
As demand for AI grows, developers are attempting to squeeze more and more performance from accelerators. Ideally, developers would choose the accelerators best suited to their applications. Unfortunately, many developers today are locked into limited hardware choices because they use proprietary programming models such as NVIDIA’s CUDA. The oneAPI project was launched to create an open specification and open-source software that enable developers to write software in standard C++ and deploy it to GPUs from multiple vendors. oneAPI is an open-source ecosystem based on the Khronos SYCL open standard, plus libraries for AI and HPC applications. oneAPI-enabled software is currently deployed on numerous supercomputers, with plans to extend into other market segments. oneAPI is evolving rapidly, and the whole community of hardware and software developers is invited to contribute. In this presentation, Charles Macfarlane, Chief Business Officer at Codeplay Software, introduces how oneAPI enables developers to write multi-target software and highlights opportunities for developers to contribute to making oneAPI available for all accelerators.
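For readers who have not seen SYCL, the sketch below gives a flavor of what vendor-neutral accelerator code looks like. It is a minimal example written for this article (not taken from the talk), assuming a SYCL 2020 implementation such as the oneAPI DPC++ compiler; the same standard C++ source can run on whichever GPU or CPU the runtime's default selector finds.

```cpp
// Minimal SYCL vector addition (illustrative sketch, not from the presentation).
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
  constexpr size_t N = 1024;
  std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

  // The default selector picks an available accelerator: an Intel, AMD, or
  // NVIDIA GPU if a back end is installed, otherwise the host CPU.
  sycl::queue q{sycl::default_selector_v};

  {
    sycl::buffer<float> bufA(a.data(), sycl::range<1>(N));
    sycl::buffer<float> bufB(b.data(), sycl::range<1>(N));
    sycl::buffer<float> bufC(c.data(), sycl::range<1>(N));

    q.submit([&](sycl::handler& h) {
      sycl::accessor A(bufA, h, sycl::read_only);
      sycl::accessor B(bufB, h, sycl::read_only);
      sycl::accessor C(bufC, h, sycl::write_only, sycl::no_init);
      h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
    });
  }  // Buffer destruction synchronizes results back into the host vectors.

  std::cout << "c[0] = " << c[0] << std::endl;  // expected: 3
  return 0;
}
```

Because nothing in this source names a specific vendor, the portability argument the talk makes for oneAPI comes down to recompiling the same file against back ends for different GPUs.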

Efficiently Map AI and Vision Applications onto Multi-core AI Processors Using a Parallel Processing Framework (CEVA)
Next-generation AI and computer vision applications for autonomous vehicles, cameras, drones and robots require higher-than-ever computing power. Often, the most efficient way to deliver high performance (especially in cost- and power-constrained applications) is to use multi-core processors. But developers must then map their applications onto the multiple cores efficiently, which can be difficult. To address this challenge and streamline application development, CEVA has introduced the Architecture Planner tool as a new element of CEVA’s comprehensive AI SDK. In this talk, Rami Drucker, Machine Learning Software Architect at CEVA, shows how the Architecture Planner tool analyzes the network model and the processor configuration (number of cores, memory sizes) and then automatically generates an efficient mapping of the workload onto the multiple cores. He explains key techniques used by the tool, including symmetrical and asymmetrical multi-processing, partitioning by sub-graphs, batch partitioning and pipeline partitioning.
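To make the partitioning terms concrete, here is a small, hypothetical sketch of pipeline partitioning written with ordinary C++ threads. It is not CEVA's Architecture Planner output or SDK API; it only illustrates the idea of splitting a network into two sub-graphs so that, once the pipeline fills, both cores process different frames at the same time.

```cpp
// Pipeline partitioning illustrated with two threads standing in for two cores.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Frame { int id; std::vector<float> activations; };

// Simple thread-safe hand-off queue between pipeline stages.
template <typename T>
class Channel {
 public:
  void push(T v) {
    { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
    cv_.notify_one();
  }
  T pop() {
    std::unique_lock<std::mutex> lk(m_);
    cv_.wait(lk, [&] { return !q_.empty(); });
    T v = std::move(q_.front());
    q_.pop();
    return v;
  }
 private:
  std::queue<T> q_;
  std::mutex m_;
  std::condition_variable cv_;
};

// Hypothetical stand-ins for the two halves of a partitioned network.
Frame run_backbone(Frame f) { /* layers 1..k, assigned to core 0 */ return f; }
Frame run_head(Frame f)     { /* layers k+1..n, assigned to core 1 */ return f; }

int main() {
  constexpr int kFrames = 8;
  Channel<Frame> between_cores, finished;

  std::thread core0([&] {                      // core 0 runs the first sub-graph
    for (int i = 0; i < kFrames; ++i) between_cores.push(run_backbone({i, {}}));
  });
  std::thread core1([&] {                      // core 1 runs the second sub-graph
    for (int i = 0; i < kFrames; ++i) finished.push(run_head(between_cores.pop()));
  });

  for (int i = 0; i < kFrames; ++i)
    std::cout << "frame " << finished.pop().id << " done\n";

  core0.join();
  core1.join();
  return 0;
}
```

Batch partitioning, by contrast, would give each core the whole network and a different slice of the input batch; the Architecture Planner's job, as described in the talk, is to choose among such strategies automatically from the model and the core/memory configuration.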

SMART CAMERAS FOR IMAGE ANALYSIS AT THE EDGE

Intensive In-camera AI Vision Processing (Hailo)
In this talk, you’ll hear about the new Hailo-15 AI vision processor family, specifically designed to empower smart cameras with unprecedented AI capabilities. Yaniv Iarovici, Head of Business Development at Hailo, explains how the Hailo-15 family of processors (with performance ranging from 7 to 20 tera operations per second) allows cameras to carry out significantly more video analytics at once, running several AI tasks in parallel. Iarovici shows how Hailo-15 processors make it possible to perform faster detection at high resolution, enabling identification of smaller and more distant objects with higher accuracy and fewer false alarms. He also explores how the elevated AI performance of these processors can be utilized for video enhancement, such as improved low-light video quality, video stabilization and high dynamic range imaging. And he shows how Hailo is pioneering the use of vision-based transformers for real-time object detection in cameras.

Using a Neural Processor for Always-sensing Cameras (Expedera)
Always-sensing cameras are becoming a common AI-enabled feature of consumer devices, much like the always-listening Siri or Google assistants. They can enable a more natural and seamless user experience, such as automatically locking and unlocking the device based on whether the owner is looking at the screen or within view of the camera. But the complexities of cameras, and the quantity and richness of the data they produce, mean that much more processing is required for an always-sensing camera compared with listening for a wake word. Without careful attention to neural processing unit (NPU) design, an always-sensing camera can wind up consuming excessive power or performing poorly, which can lead to an unsatisfactory user experience. In this presentation, Sharad Chole, Chief Scientist and Co-founder of Expedera, explores the architecture of a neural processor in the image signal path, discusses use cases, and provides tips for how OEMs, chipmakers, and system architects can successfully evaluate, specify, and deploy an NPU in an always-on camera.
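As one concrete illustration of the power problem, a common always-sensing design pattern (a conceptual sketch, not Expedera's NPU architecture) gates the expensive vision pipeline behind a tiny, always-on presence model, so the heavy path runs only when something is plausibly worth analyzing. The model names below are hypothetical.

```cpp
// Conceptual gating sketch for an always-sensing camera path.
#include <cstdint>
#include <iostream>
#include <vector>

struct Frame { std::vector<uint8_t> pixels; };  // low-resolution, low-frame-rate stream

// Hypothetical stand-ins for models that would actually run on an NPU.
bool tiny_presence_model(const Frame&) { return true; }    // always-on, very low power
bool full_attention_model(const Frame&) { return true; }   // heavier, woken on demand

void on_new_frame(const Frame& f, bool& device_unlocked) {
  if (!tiny_presence_model(f)) return;   // cheap gate: nobody in view, stay in low power
  if (full_attention_model(f))           // expensive model runs only when gated in
    device_unlocked = true;
}

int main() {
  bool unlocked = false;
  on_new_frame(Frame{}, unlocked);       // one frame from the always-sensing sensor
  std::cout << (unlocked ? "unlocked\n" : "still locked\n");
  return 0;
}
```

The NPU design questions the talk is concerned with sit underneath a loop like this: how cheaply the always-on stage can run on every frame, and how quickly the heavier stage can be brought up without burning power while idle.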


UPCOMING INDUSTRY EVENTS

Embedded Vision Summit: May 21-23, 2024, Santa Clara, California

More Events

FEATURED NEWS

Vision Components’ New MIPI Camera Module Delivers High Image Quality

FRAMOS Announces Event-based Vision Sensing Development Kit

e-con Systems Launches HDR Multi-camera Solution for NVIDIA Jetson Orin to Revolutionize Autonomous Mobility

Lattice Semiconductor Introduces Compact Embedded Vision FPGA with Integrated USB

Qualcomm’s Next-generation XR and AR Platforms Enable Immersive Experiences and Slimmer Devices

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Deci Deep Learning Development Platform (Best Edge AI Developer Tool)
Deci’s Deep Learning Development Platform is the 2023 Edge AI and Vision Product of the Year Award winner in the Edge AI Developer Tools category. Deci’s deep learning development platform empowers AI developers with a new way to develop production-grade computer vision models. With Deci, teams simplify and accelerate the development process with advanced tools to build, train, optimize, and deploy highly accurate and efficient models to any environment, including edge devices. Models developed with Deci deliver a combination of high accuracy, speed and compute efficiency, allowing teams to unlock new applications on resource-constrained edge devices and migrate workloads from the cloud to the edge. Deci also enables teams to shorten development time and lower operational costs by up to 80%. Deci’s platform is powered by its proprietary Automated Neural Architecture Construction (AutoNAC) technology, an advanced algorithmic optimization engine that generates best-in-class deep learning model architectures for any vision-related task. With AutoNAC, teams can easily build custom, hardware-aware, production-grade models that deliver better-than-state-of-the-art performance.

Please see here for more information on Deci’s Deep Learning Development Platform. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.

 

The post Edge AI and Vision Insights: October 11, 2023 Edition appeared first on Edge AI and Vision Alliance.
