Monday, 22 May 2017

Google’s machine-learning cloud pipeline explained

When Google first told the world about its Tensor Processing Unit, the strategy behind it seemed clear enough: Speed machine learning at scale by throwing custom hardware at the problem. Use commodity GPUs to train machine-learning models; use custom TPUs to deploy those trained models.

The new generation of Google’s TPUs is designed to handle both of those duties, training and deploying, on the same chip. That new generation is also faster, both on its own and when scaled out with others in what’s called a “TPU pod.”
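The split the article describes, a compute-heavy training phase followed by a lightweight inference phase that runs the frozen model, can be sketched in plain Python. This is a toy linear model, not Google's pipeline; all names here are illustrative:

```python
# Toy illustration of the two workloads the article contrasts
# (hypothetical example; not Google's actual TPU pipeline).

def train(data, lr=0.1, steps=200):
    """Training: many gradient-descent passes over the data to fit
    y = w * x + b -- the compute-heavy phase, run on GPUs (or now TPUs)."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        grad_w = grad_b = 0.0
        for x, y in data:
            err = (w * x + b) - y
            grad_w += 2 * err * x
            grad_b += 2 * err
        n = len(data)
        w -= lr * grad_w / n
        b -= lr * grad_b / n
    return w, b

def predict(params, x):
    """Inference: a single cheap forward pass with frozen parameters --
    the deployment phase the first-generation TPU was built to serve."""
    w, b = params
    return w * x + b

# Fit y = 2x + 1, then serve predictions from the trained model.
params = train([(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)])
print(round(predict(params, 3.0), 2))
```

The point of the sketch is the asymmetry: `train` loops over the data hundreds of times, while `predict` does one multiply-add per query, which is why the two phases historically ran on different hardware.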

