Amazon has recently announced Elastic Compute Cloud (EC2) instances that leverage NVIDIA graphics processing units (GPUs) to offer customers massive amounts of computing performance via the cloud. According to Amazon, conventional CPU architectures have bottlenecks and limitations that the new instances are designed to overcome. GPUs address those limitations by scaling out to many processors and parallel banks of memory, rather than relying on faster processors attached to a bottlenecked memory system. As a result, massively parallel workloads can be processed faster and more efficiently than on CPUs. The new instances are aimed at large-scale deep learning, machine learning, seismic analysis, genomics, computational fluid dynamics, molecular modeling, and computational finance workloads, all areas where GPUs have traditionally shined. The new P2 instances include up to eight NVIDIA Tesla K80 accelerators, each of which carries a pair of NVIDIA GK210 GPUs.
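For readers who want to try the new instance type, the following is a minimal sketch of how a P2 instance might be launched programmatically with the AWS SDK for Python (boto3). The AMI ID and key pair name are placeholders, not real values, and credentials are assumed to be configured separately.

    import boto3

    ec2 = boto3.client('ec2', region_name='us-east-1')

    # Request a single p2.8xlarge On-Demand instance
    # (8 GK210 GPUs, i.e. four Tesla K80 boards)
    response = ec2.run_instances(
        ImageId='ami-xxxxxxxx',    # placeholder: a Deep Learning AMI ID for this region
        InstanceType='p2.8xlarge',
        MinCount=1,
        MaxCount=1,
        KeyName='my-key-pair',     # placeholder key pair name
    )

    print(response['Instances'][0]['InstanceId'])

The same request could target a Spot Instance or a Reserved Instance; only the purchasing option changes, not the instance specification.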
NVIDIA's Tesla K80 cards provide 12 GB of memory and 2,496 processing cores per GK210 GPU, with the memory accessible at 240 GB/s of bandwidth. They also provide ECC memory protection to detect double-bit errors and recover from single-bit memory errors. All AWS P2 instances are backed by Intel Broadwell-based processors running at 2.7 GHz. Amazon Web Services also launched a Deep Learning AMI (Amazon Machine Image) to help customers make efficient use of the new P2 instances. The AMI contains the Caffe, MXNet, TensorFlow, Theano, and Torch frameworks, each installed, configured, and tested against the MNIST database. The GPU-accelerated P2 instances are available in the US East, US West, and Europe regions as On-Demand Instances, Spot Instances, Reserved Instances, and Dedicated Hosts. The existing GPU-backed G2 instance family remains the home for rendering, molecular modeling, transcoding, and game-streaming workloads that require large amounts of parallel processing power.
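Once logged in to a P2 instance running the Deep Learning AMI, a quick way to confirm that the frameworks can see the K80 GPUs is to pin a small computation to a GPU and ask the framework to report device placement. The sketch below assumes the TensorFlow 1.x release bundled with the 2016-era AMI; it is an illustration, not an official verification procedure.

    import tensorflow as tf

    # Pin a small matrix multiplication to the first K80 GPU
    with tf.device('/gpu:0'):
        a = tf.random_normal([1024, 1024])
        b = tf.random_normal([1024, 1024])
        c = tf.matmul(a, b)

    # log_device_placement prints which device executed each op,
    # confirming whether the GPU is visible to the framework
    with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
        print(sess.run(tf.reduce_sum(c)))

If the GPU is detected, the placement log lists the matmul op on device /gpu:0 rather than falling back to the CPU.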