Questions?

We are here to help. Contact us by phone or email, or stop in!

(888) 222-7822
sales@atipa.com
4921 Legends Drive, Lawrence, KS 66049

Copyright © 2001-2018 Atipa Technologies. All Rights Reserved.

Atipa Technologies, a division of Microtech Computers, Inc., is not responsible for typographical or photographic errors. Designated trademarks and brands are the property of their respective owners.

Deep Learning

Designing the Future

Deep learning is the fastest-growing field in machine learning. Traditional machine learning relies on model-specific algorithms and hand-engineered feature extraction to analyze speech and images; deep neural networks instead combine learning algorithms, big data, and the computational power of the GPU to learn with a speed, scale, and accuracy that drive AI research and computing forward. With these advancements, the research community can tackle important problems such as speech recognition, natural language processing, image recognition, object identification, and more.
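To make the contrast concrete, here is a minimal sketch of the "learning from data" idea in pure Python: a single sigmoid neuron that learns the AND function from labeled examples by gradient descent, rather than from a hand-written rule. The network, dataset, and hyperparameters here are illustrative only; real deep learning workloads train multi-layer networks on GPUs.

```python
import math
import random

def train_tiny_net(data, epochs=2000, lr=0.5):
    """Train a single sigmoid neuron with plain stochastic gradient descent.

    data: list of ((x1, x2), target) pairs with targets in {0, 1}.
    Returns the learned parameters (w1, w2, b).
    """
    random.seed(0)
    w1, w2, b = random.random(), random.random(), random.random()
    for _ in range(epochs):
        for (x1, x2), t in data:
            # Forward pass: weighted sum followed by a sigmoid activation.
            y = 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b)))
            # Gradient of the cross-entropy loss w.r.t. the pre-activation.
            grad = y - t
            w1 -= lr * grad * x1
            w2 -= lr * grad * x2
            b  -= lr * grad
    return w1, w2, b

def predict(params, x1, x2):
    """Threshold the neuron's output at 0.5 to get a 0/1 label."""
    w1, w2, b = params
    return 1 if 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + b))) > 0.5 else 0

# The AND function is learned from examples, not hand-coded as a rule.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
params = train_tiny_net(AND)
```

The same loop structure (forward pass, loss gradient, parameter update) scales up to deep networks, where GPUs accelerate the matrix arithmetic that dominates each step.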

As such, Atipa Technologies is proud to provide a solution that meets the demanding requirements of a deep-learning-focused system. With support for up to 8 full-size GPUs, dual Intel® Xeon® Scalable Processor sockets, 14 hot-swappable 2.5" drive bays, and up to 3TB of DDR4 RAM in 24 slots, the Atipa Technologies Altezza SX425-24-DL is ready to handle all your deep learning needs.


Altezza SX1025-24G16

• Dual-socket Intel® Xeon® Scalable Processor Family

• 24 DIMM slots supporting up to 3TB DDR4 RAM

• (16) Double-wide PCIe x16 slots for GPU card deployment

• (2) PCIe x16 slots for high-speed networking

• (2) 10GBase-T LAN ports

• (16) Hot-swappable 2.5" NVMe drive bays

• (6) Hot-swappable 2.5" SATA3 drive bays

• 10U Rackmount form factor

• 3000W Redundant Titanium certified power supplies

• AST2500 BMC with Redfish support
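Redfish support means the BMC can be managed out-of-band over a standard HTTPS REST API. As a rough sketch (the host address and token below are placeholders, not real credentials), a client might build a request for the Systems collection defined by the DMTF Redfish specification like this:

```python
import urllib.request

def redfish_system_request(bmc_host, session_token):
    """Build an HTTPS request for the Redfish Systems collection.

    /redfish/v1/Systems is the standard Redfish endpoint for compute
    systems; bmc_host and session_token stand in for your BMC's
    address and an authenticated session token.
    """
    url = "https://{}/redfish/v1/Systems".format(bmc_host)
    return urllib.request.Request(url, headers={"X-Auth-Token": session_token})

# Placeholder values for illustration only.
req = redfish_system_request("10.0.0.42", "example-token")
```

Sending the request (e.g. with `urllib.request.urlopen`) would return JSON describing the managed system; that step is omitted here since it requires a live BMC.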

Specifications:


NVIDIA® TESLA® V100 ACCELERATOR SPECIFICATION

 

  • 14 TeraFLOPS single-precision performance with NVIDIA GPU Boost

  • 112 TeraFLOPS half-precision performance with NVIDIA GPU Boost

  • 32 GB/s bidirectional interconnect bandwidth

  • 900 GB/s memory bandwidth with CoWoS HBM2 Stacked Memory

  • 32 GB of CoWoS HBM2 Stacked Memory

  • Enhanced Programmability with Page Migration Engine and Unified Memory

  • ECC protection for increased reliability

  • Server-optimized for best throughput in the data center


NVIDIA® Tesla® V100 Data Center GPU

The World's Most Powerful Data Center GPU

HPC and hyperscale data centers need to support the ever-growing demands of data scientists and researchers while staying within a tight budget. The old approach of deploying lots of commodity compute nodes requires vast interconnect overhead that substantially increases costs without proportionally increasing data center performance.

 

Tesla® V100 is engineered for the convergence of AI and HPC. It offers a platform for HPC systems to excel at both computational science for scientific simulation and data science for finding insights in data.


Contact sales@atipa.com for more information