Bring your next-gen products to life with the world’s most powerful AI computers for energy-efficient autonomous machines.
The e75 and e150 DevKits are introductory inference-only hardware kits, coupled with two distinct software approaches: TT-Buda (top-down) and TT-Metalium (bottom-up). This combination is designed to provide a foundational platform for AI experimentation and development.
Standard-height, 3/4-length PCIe Gen 4 board with one Wormhole processor, our solution for balancing power and throughput, operating at up to 160 W.
The PCIe card, featuring the SAKURA-II accelerator, is ideal for insertion into PCIe backplanes and fits comfortably into a single slot even with the attached heat sink. The card provides significant onboard DRAM, making it well suited to Generative AI and other memory-intensive AI applications such as Large Language Models (LLMs).
The small-form-factor M.2 module, featuring the SAKURA-II accelerator, is ideal for space-constrained designs and provides significant onboard DRAM, making it well suited to Generative AI and other memory-intensive AI applications such as Large Language Models (LLMs).
Providing best-in-class performance, the Axelera AI M.2 module is the foundation for AI acceleration at the Edge. Powered by a single Metis AIPU and containing 512MB LPDDR4x dedicated memory, it minimizes power consumption and simplifies integration.
Need best-in-class performance? Powered by four Metis AIPUs, this card delivers peak performance of 856 TOPS and is capable of tackling the most challenging vision applications.
The Qualcomm® Cloud AI 100 Ultra is built to optimize performance for Generative AI inferencing, including Large Language Models (LLMs). This solution combines performance and cost-efficiency, delivering market-leading performance/dollar and performance/watt, with the lowest TCO (Total Cost of Ownership) available.
For plug-and-play low latency and scalable performance, the GroqCard accelerator packages a single GroqChip™ processor into a standard PCIe Gen4 x16 form factor, providing hassle-free server integration. It features up to 11 RealScale™ chip-to-chip connections alongside an internal software-defined network.
The Alveo™ V70 accelerator card is the first AMD Alveo production card leveraging AMD XDNA™ architecture with AI Engines. It is designed for AI inference efficiency and is tuned for video analytics and natural language processing applications.
VCK190 is the first Versal™ AI Core series evaluation kit, enabling designers to develop solutions using AI and DSP engines capable of delivering over 100X greater compute performance compared to current server-class CPUs, with a breadth of connectivity options and standardized development flows.
The heart of the tsunAImi® tsn200 AI accelerator card is a runAI200 device. The card is designed for high-performance, power-sensitive edge applications such as video analytics. The card delivers the same superior performance as the runAI200 IC, but in a low-profile, 75-watt TDP PCIe card.
The ET-SoC-1 PCIe card delivers the compute performance and efficiency of one ET-SoC-1 chip in a compact, low-profile PCIe Gen 4 form factor.
The card integrates 1,088 energy-efficient ET-Minion 64-bit RISC-V in-order cores, each with a custom vector/tensor unit optimized for ML applications.
The MM1076 M.2 M-key card enables high-performance yet power-efficient AI inference for edge devices and edge servers. The M.2 card's compact form factor and popularity make integration into many different systems a straightforward task.
The Hailo-8 M.2 Module is an AI accelerator module for AI applications, compatible with the NGFF M.2 form factor (M, B+M, and A+E keys). The AI module is based on the 26 tera-operations-per-second (TOPS) Hailo-8 AI processor with high power efficiency.
Hailo-15 is a series of AI vision processors for smart cameras. The Hailo-15 System-on-a-Chip (SoC) combines Hailo’s patented and field-proven AI inferencing capabilities with advanced computer vision engines, generating premium image quality and advanced video analytics.
The Kinara Ara-2 accelerator modules enable high performance, power-efficient AI inference for Edge AI applications including Generative AI workloads such as Stable Diffusion and Llama-2.
The Biscotti E1.S uses up to two Hailo-8 Edge AI processors, each delivering up to 26 tera-operations per second (TOPS). With an architecture that takes advantage of the core properties of neural networks, the Hailo-8 neural chips built onto the E1.S allow edge devices to run deep learning applications at full scale.
The on-board Edge TPU coprocessor is capable of performing 4 trillion operations per second (4 TOPS), using 0.5 watts for each TOPS (2 TOPS per watt). For example, it can execute state-of-the-art mobile vision models, such as MobileNet v2 at 400 FPS, in a power-efficient manner.
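As a rough sanity check of the efficiency figures quoted above, the relationship between peak throughput, watts per TOPS, and TOPS per watt can be worked through directly (a minimal sketch using only the numbers stated in the text; variable names are illustrative):

```python
# Efficiency figures quoted for the Edge TPU coprocessor.
peak_tops = 4.0        # peak throughput: 4 trillion operations per second
watts_per_tops = 0.5   # quoted power cost per TOPS

# Total power at peak throughput: 4 TOPS * 0.5 W/TOPS = 2.0 W.
total_power_w = peak_tops * watts_per_tops

# Efficiency: 4 TOPS / 2.0 W = 2.0 TOPS per watt, matching the blurb.
tops_per_watt = peak_tops / total_power_w

print(total_power_w)   # 2.0
print(tops_per_watt)   # 2.0
```

The two quoted figures are simply reciprocals scaled by peak throughput: 0.5 W per TOPS implies 2 TOPS per watt.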
Copyright © 2024 SolutionsInfra - All Rights Reserved.