The compute demands of AI data centers are set to increase dramatically, from around 100 zettaFLOPS today to 10+ yottaFLOPS* within the next five years (roughly a 100x increase), ...
Built on the AMD CDNA™ 3 architecture, AMD Instinct MI325X accelerators are designed for exceptional performance and efficiency for demanding AI tasks spanning foundation model training, fine-tuning ...
At its Advancing AI event, the chip designer said the 288 GB HBM3e capacity of the forthcoming Instinct MI355X and MI350X data center GPUs helps the processors provide better or similar AI performance ...
TL;DR: AMD's new Instinct MI430X GPU, based on CDNA 5 architecture and equipped with 432GB HBM4 memory at 19.6TB/sec bandwidth, targets HPC and large-scale AI workloads. Deployed in top supercomputers ...
TL;DR: AMD's upcoming Instinct MI400 AI accelerator, launching in 2026, will double AI compute performance over the MI350 series, featuring 432GB of next-gen HBM4 memory and 19.6TB/sec bandwidth. With ...
Micron Technology, Inc. announced its integration of the HBM3E 36GB 12-high memory product into AMD's upcoming Instinct™ MI350 Series solutions, emphasizing both power efficiency and performance ...
Supermicro introduces the latest addition to its AI-accelerated solutions with a new 10U air-cooled server, which incorporates AMD Instinct MI355X GPUs delivering breakthrough performance for AI and ...
AMD launched its latest Instinct AI GPUs, the MI350X and MI355X. The MI350 series follows the MI325 series, which helped drive AMD to a nearly 40% share of the server CPU segment in the first half of ...
ASE Holdings, the world's largest outsourced semiconductor assembly and test (OSAT) provider, has transitioned its core IT infrastructure to AMD platforms, deploying EPYC processors across its servers ...