TORONTO–Untether AI®, the leader in energy-centric AI acceleration technology, is pleased to participate in and present at the upcoming Supercomputing Conference (SC23), November 13–17, 2023. The company will demonstrate its at-memory compute architecture, which solves the data movement bottleneck that costs energy and performance in traditional CPUs and GPUs, delivering high-performance, low-latency neural network inference acceleration without sacrificing accuracy.

In booth #2250, Untether AI will demonstrate its accelerated AI video analysis reference design, a full-stack solution running a multi-stream instance segmentation neural network. In addition, Arun Iyengar, CEO of Untether AI, will present "Overcoming the Cost of Data Movement in AI Inference Accelerators" at SC23 on Wednesday, November 15, at 11:00 a.m.

Traditional processor architectures are failing to keep up with the exploding compute demands of AI workloads. They are constrained by the power-hungry weight fetches of von Neumann architectures and by the limits of transistor and frequency scaling. Untether AI’s energy-centric AI acceleration approach places compute elements directly next to the memory array, reducing power consumption and increasing throughput through the massive parallelism and memory bandwidth the architecture provides. The efficiency of energy-centric acceleration delivers unrivaled compute density for a variety of AI workloads, including vision, natural language processing, and recommendation engines.