
Copyright© 2001-2018 Atipa Technologies All Rights Reserved.

Atipa Technologies, a division of Microtech Computers, Inc., is not responsible for typographical or photographic errors. Designated trademarks and brands are the property of their respective owners.

NVIDIA® Tesla® V100 GPU Accelerator

The Most Advanced Data Center GPU Ever Built

Welcome to the Era of AI

Every industry wants intelligence. Within their ever-growing lakes of data lie insights that could revolutionize entire industries: personalized cancer therapy, predicting the next big hurricane, and virtual personal assistants that converse naturally. These opportunities can become a reality when data scientists are given the tools they need to realize their life's work.

NVIDIA® Tesla® V100 is the most advanced data center GPU ever built, designed to accelerate AI, HPC, and graphics. Powered by NVIDIA® Volta, the latest GPU architecture, Tesla V100 offers the performance of 100 CPUs in a single GPU—enabling data scientists, researchers, and engineers to tackle challenges that were once impossible.

Groundbreaking Innovations

Volta Architecture

By pairing CUDA Cores and Tensor Cores within a unified architecture, a single server with Tesla V100 GPUs can replace hundreds of commodity CPU servers for traditional HPC and deep learning.

Tensor Core

Equipped with 640 Tensor Cores, Tesla V100 delivers 125 TeraFLOPS of deep learning performance. That's 12X Tensor FLOPS for DL training and 6X Tensor FLOPS for DL inference compared to NVIDIA® Pascal™ GPUs.

Next Generation NVLink

NVIDIA® NVLink in Tesla V100 delivers 2X higher throughput compared to the previous generation. Up to eight Tesla V100 accelerators can be interconnected at up to 300 GB/s to unleash the highest application performance possible on a single server.

Maximum Efficiency Mode

The new maximum efficiency mode allows data centers to achieve up to 40% higher compute capacity per rack within the existing power budget. In this mode, Tesla V100 runs at peak processing efficiency, providing up to 80% of the performance at half the power consumption.

HBM2

With a combination of improved raw bandwidth of 900 GB/s and higher DRAM utilization efficiency at 95%, Tesla V100 delivers 1.5X higher memory bandwidth over Pascal GPUs as measured on STREAM.

Programmability

Tesla V100 is architected from the ground up to simplify programmability. Its new independent thread scheduling enables finer-grain synchronization and improves GPU utilization by sharing resources among small jobs.
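As a sanity check, the headline 125 TeraFLOPS and sustained-bandwidth figures follow from simple arithmetic. This is a rough sketch: the 64-FMA-per-clock Tensor Core rate and the ~1530 MHz boost clock are assumptions drawn from Volta's published specifications, not stated on this page.

```python
# Back-of-envelope check of the headline Tesla V100 numbers.
# Assumptions (not stated on this page): each Volta Tensor Core
# performs 64 fused multiply-adds (128 FLOPs) per clock, and the
# GPU boost clock is roughly 1530 MHz.
tensor_cores = 640
flops_per_core_per_clock = 64 * 2  # 64 FMAs x 2 FLOPs each
boost_clock_hz = 1530e6
peak_dl_tflops = tensor_cores * flops_per_core_per_clock * boost_clock_hz / 1e12
print(f"Tensor Core peak: {peak_dl_tflops:.0f} TFLOPS")  # ~125 TFLOPS

# Sustained memory bandwidth: 900 GB/s raw x 95% DRAM utilization.
raw_bandwidth_gbs = 900
utilization = 0.95
print(f"Sustained HBM2 bandwidth: {raw_bandwidth_gbs * utilization:.0f} GB/s")  # ~855 GB/s
```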

Form Factors

NVIDIA® TESLA® V100 FOR NVLINK
Ultimate performance for deep learning.

NVIDIA® TESLA® V100 FOR PCIe
Highest versatility for all workloads.

Specifications