Bring your next-gen products to life with the world’s most powerful AI computers for energy-efficient autonomous machines.
The e75 and e150 DevKits are introductory inference-only hardware kits, coupled with two distinct software approaches: TT-Buda (top-down) and TT-Metalium (bottom-up). This combination is designed to provide a foundational platform for AI experimentation and development.
Standard height, 3/4 length PCIe Gen 4 board with one Wormhole processor, our solution for balancing power and throughput, operating at up to 160W
SAKURA-II PCIe Cards are high-performance edge AI accelerator solutions delivering up to 120 TOPS with 32 GB of memory, architected to run the latest vision and Generative AI models with market-leading energy efficiency and low latency.
SAKURA-II M.2 modules are high-performance, 60 TOPS edge AI accelerators architected to run the latest vision and Generative AI models with market-leading energy efficiency and low latency.
Providing best-in-class performance, Axelera AI's M.2 accelerator is a compact solution for AI acceleration. Powered by a single Metis AIPU and containing 1 GB of dedicated DRAM, it minimizes power consumption and simplifies integration, opening the door for every application to benefit from AI processing.
For developers seeking more performance, Axelera AI has created a range of PCIe accelerators. Powered by one Metis AIPU, this card can deliver up to 214 TOPS, enabling it to handle the most demanding vision applications. This performance is complemented by the developer experience offered by the Voyager SDK.
The Qualcomm® Cloud AI 100 Ultra is built to optimize performance for Generative AI inferencing, including Large Language Models (LLMs). This solution combines performance and cost-efficiency, delivering market-leading performance/dollar and performance/watt, with the lowest TCO (Total Cost of Ownership) available.
For plug-and-play low latency and scalable performance, the GroqCard accelerator packages a single GroqChip™ processor into a standard PCIe Gen4 x16 form factor, providing hassle-free server integration and featuring up to 11 RealScale™ chip-to-chip connections alongside an internal software-defined network.
The Alveo™ V70 accelerator card is the first AMD Alveo production card leveraging AMD XDNA™ architecture with AI Engines. It is designed for AI inference efficiency and is tuned for video analytics and natural language processing applications.
VCK190 is the first Versal™ AI Core series evaluation kit, enabling designers to develop solutions using AI and DSP engines capable of delivering over 100X greater compute performance compared to current server-class CPUs, with a breadth of connectivity options and standardized development flows.
The heart of the tsunAImi® tsn200 AI accelerator card is a runAI200 device. The card is designed for high-performance, power-sensitive edge applications such as video analytics. The card delivers the same superior performance as the runAI200 IC, but in a low-profile, 75-watt TDP PCIe card.
The ET-SoC-1 PCIe card delivers the compute performance and efficiency of one ET-SoC-1 chip in a compact, low-profile PCIe Gen 4 form factor, with 1,088 energy-efficient ET-Minion 64-bit RISC-V in-order cores, each with a custom vector/tensor unit optimized for ML applications.
The MM1076 M.2 M key card enables high-performance, yet power-efficient AI inference for edge devices and edge servers. The M.2 card’s compact form-factor and popularity makes integration into many different systems a straightforward task.
The Hailo-8 M.2 Module is an AI accelerator module for AI applications, compatible with NGFF M.2 form factor M, B+M, and A+E keys. The AI module is based on the 26 tera-operations-per-second (TOPS) Hailo-8 AI processor with high power efficiency.
Hailo-15 is a series of AI vision processors for smart cameras. The Hailo-15 System-on-a-Chip (SoC) combines Hailo’s patented and field-proven AI inferencing capabilities with advanced computer vision engines, generating premium image quality and advanced video analytics.
The Kinara Ara-2 accelerator modules enable high performance, power-efficient AI inference for Edge AI applications including Generative AI workloads such as Stable Diffusion and Llama-2.
The Biscotti E1.S uses up to 2 Hailo-8 Edge AI processors, each delivering up to 26 tera-operations per second (TOPS). With an architecture that takes advantage of the core properties of neural networks, the Hailo-8 chips on the E1.S allow edge devices to run deep learning applications at full scale.
The on-board Edge TPU coprocessor is capable of performing 4 trillion operations (tera-operations) per second (TOPS), using 0.5 watts for each TOPS (2 TOPS per watt). For example, it can execute state-of-the-art mobile vision models, such as MobileNet v2 at 400 FPS, in a power-efficient manner.
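The efficiency figures quoted above follow from simple arithmetic, which this minimal sketch makes explicit; the helper function name is our own, and only the Edge TPU numbers (4 TOPS at 0.5 W per TOPS) come from the listing:

```python
def tops_per_watt(tops: float, watts: float) -> float:
    """Performance-per-watt from peak TOPS and total power draw."""
    return tops / watts

# Edge TPU, per the listing: 4 TOPS at 0.5 W for each TOPS,
# so total draw is 4 * 0.5 = 2.0 W.
edge_tpu_tops = 4.0
edge_tpu_watts = edge_tpu_tops * 0.5

efficiency = tops_per_watt(edge_tpu_tops, edge_tpu_watts)
print(efficiency)  # 2.0, i.e. the "2 TOPS per watt" stated above
```

The same helper can be applied to the other cards in this listing when both a TOPS figure and a power figure are given (e.g. the tsn200's 75 W TDP).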
Copyright © 2025 SolutionsInfra - All Rights Reserved.