Atipa Deep Learning Servers

NVIDIA Tesla P100

The NVIDIA® Tesla® P100 is purpose-built as an advanced data center accelerator. Based on the Pascal GPU architecture, a single Tesla P100-powered compute node can deliver higher performance than hundreds of slower commodity compute nodes, accelerating time-to-solution for the most compute-intensive challenges in HPC and deep learning.

NVIDIA TESLA P100 ACCELERATOR SPECIFICATION

 

  • 5.3 TeraFLOPS double-precision performance with NVIDIA GPU Boost™
  • 10.6 TeraFLOPS single-precision performance with NVIDIA GPU Boost
  • 21.2 TeraFLOPS half-precision performance with NVIDIA GPU Boost
  • 160 GB/s bidirectional interconnect bandwidth with NVIDIA NVLink
  • 720 GB/s memory bandwidth with CoWoS HBM2 Stacked Memory
  • 16 GB of CoWoS HBM2 Stacked Memory
  • Enhanced Programmability with Page Migration Engine and Unified Memory
  • ECC protection for increased reliability
  • Server-optimized for best throughput in the data center
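The 720 GB/s figure above follows directly from the HBM2 interface: memory clock × bus width × 2 transfers per clock. As a rough, hypothetical illustration (not part of the Atipa configuration), the sketch below uses the CUDA runtime call cudaGetDeviceProperties to read the installed GPU's memory size, ECC state, memory clock, and bus width, and derives the theoretical peak bandwidth from them.

    // bandwidth_query.cu -- minimal sketch: query GPU properties and derive
    // the theoretical peak memory bandwidth (illustrative only).
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
            fprintf(stderr, "No CUDA device found\n");
            return 1;
        }

        // memoryClockRate is reported in kHz, memoryBusWidth in bits.
        // Peak bandwidth (GB/s) = clock (Hz) * bus width (bytes) * 2 (DDR) / 1e9.
        double peakGBs = 2.0 * prop.memoryClockRate * 1e3 *
                         (prop.memoryBusWidth / 8.0) / 1e9;

        printf("Device         : %s\n", prop.name);
        printf("Global memory  : %.1f GB\n", prop.totalGlobalMem / 1e9);
        printf("ECC enabled    : %s\n", prop.ECCEnabled ? "yes" : "no");
        printf("Peak bandwidth : %.0f GB/s (theoretical)\n", peakGBs);
        return 0;
    }

Built with nvcc and run on a Tesla P100, it should report figures roughly in line with the memory specification above.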

NVIDIA Tesla P40

The NVIDIA Tesla P40 is purpose-built to deliver maximum throughput for deep learning deployment. With 47 TOPS (Tera-Operations Per Second) of INT8 inference performance per GPU, a single server with eight Tesla P40s delivers the throughput of over 140 CPU servers. As models grow in accuracy and complexity, CPUs can no longer deliver an interactive user experience. The Tesla P40 provides over 30X lower latency than a CPU for real-time responsiveness on even the most complex models.
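The INT8 throughput quoted above comes from Pascal's 8-bit integer dot-product instruction, exposed in CUDA as the __dp4a intrinsic on devices of compute capability 6.1 and later such as the Tesla P40. The following is a minimal, hypothetical sketch of that primitive, not a complete inference pipeline: each __dp4a call multiplies four packed 8-bit values from each operand and accumulates the results into a 32-bit integer.

    // dp4a_sketch.cu -- minimal illustration of the INT8 dot-product primitive
    // (__dp4a) accelerated by Pascal-class GPUs such as the Tesla P40.
    // Hypothetical example; not a complete inference pipeline.
    #include <cstdio>
    #include <cuda_runtime.h>

    // Each int packs four signed 8-bit values. __dp4a multiplies the four
    // byte pairs and adds the products to the 32-bit accumulator.
    __global__ void int8_dot(const int* a, const int* b, int n, int* out) {
        int acc = 0;
        for (int i = threadIdx.x; i < n; i += blockDim.x)
            acc = __dp4a(a[i], b[i], acc);
        atomicAdd(out, acc);
    }

    int main() {
        const int n = 256;                 // 256 ints = 1024 packed INT8 values
        int *a, *b, *out;
        cudaMallocManaged(&a, n * sizeof(int));
        cudaMallocManaged(&b, n * sizeof(int));
        cudaMallocManaged(&out, sizeof(int));
        for (int i = 0; i < n; ++i) { a[i] = 0x01010101; b[i] = 0x02020202; }
        *out = 0;

        int8_dot<<<1, 128>>>(a, b, n, out);  // requires sm_61 or newer
        cudaDeviceSynchronize();
        printf("dot product = %d\n", *out);  // 256 ints x 4 pairs x (1*2) = 2048
        cudaFree(a); cudaFree(b); cudaFree(out);
        return 0;
    }

Compiled with, for example, nvcc -arch=sm_61 dp4a_sketch.cu, the program prints 2048 (256 ints × 4 byte pairs × 1×2 per pair).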

NVIDIA TESLA P40 ACCELERATOR SPECIFICATION

 

  • 12 TeraFLOPS Single-Precision Performance
  • 47 TOPS (Tera-Operations per Second) Integer Operations (INT8)
  • 24 GB GPU Memory
  • 346 GB/s Memory Bandwidth
  • Enhanced Programmability with Page Migration Engine
  • ECC protection for increased reliability
  • Server-optimized for best throughput in the data center

Atipa Visione VI128GQ-TXR

Key Applications

 

  • Research & Scientific Computing
  • Machine Learning
  • Big Data Analytics

Key Features

 

  • Supports up to 4 Tesla P100 GPUs
  • No GPU preheat
  • Cost-optimized system

 

  1. Dual Intel Xeon E5-2600 v4 processors (Socket R3), QPI up to 9.6 GT/s
  2. Up to 1 TB ECC LRDIMM or 512 GB ECC RDIMM DDR4 at up to 2400 MHz; 16 DIMM slots
  3. Supports 4 Tesla P100 GPUs with 20 GB/s NVLink; 1x PCIe 3.0 x16 double-wide slot and 2x PCIe 3.0 x8 low-profile slots (in x16 slots)
  4. 1x VGA, 2x Gigabit or 2x 10GBase-T LAN, 2x USB 3.0, and 1x dedicated IPMI LAN port
  5. 2x hot-swap 2.5" SATA3 drive bays
  6. 9x counter-rotating fans with optimal fan speed control
  7. 2000W redundant power supplies with Titanium-level efficiency
