Published: Thu, May 18, 2017
By Earnest Bishop

Google launches Google.ai site, announces powerful new Google Cloud TPUs


The first TPU, shown off last year as a special-purpose chip designed specifically for machine learning, is used by the AlphaGo artificial intelligence system as the foundation of its predictive and decision-making skills.

Google's 2017 I/O keynote is happening now, where Google CEO Sundar Pichai is introducing new products and sharing more information about the company's "AI first" future.

Training machine-learning models is a computationally expensive process, but Google believes that by opening up this technology to developers, we could see hundreds of thousands of new applications begin making use of machine learning.

Version two of Google's custom computer chip, the TPU. Google also uses the computational power of TPUs every time someone enters a query into its search engine.


At Google I/O, Pichai announced that Google's Cloud Tensor Processing Unit (TPU) hardware will initially be available via Google Compute Engine, which lets customers create and run virtual machines on Google infrastructure that can tap Google's computing resources.

Machine learning software has become the lifeblood of Google and its hyperscale brethren. "We do it by intuition", says Quoc Le, a machine-learning researcher at Google working on the AutoML project. Google also announced an SDK that will enable third parties to develop appliances that can be controlled verbally using Google Assistant. Each Cloud TPU will offer 180 teraflops of floating-point performance and 64GB of memory. They are designed for data centers and can be stacked to build "machine learning supercomputers" called TPU pods.
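The per-device numbers above make the aggregate math for a pod easy to sketch. The article gives only per-TPU figures; the 64-devices-per-pod count below is an assumption drawn from Google's separately stated 11.5-petaflop pod figure, used here purely for illustration:

```python
# Back-of-the-envelope aggregate capacity of a TPU pod.
# Per-device figures come from the article; the pod size of
# 64 devices is an assumption for illustration.
TFLOPS_PER_TPU = 180      # teraflops per second-gen Cloud TPU
MEM_PER_TPU_GB = 64       # GB of memory per Cloud TPU
DEVICES_PER_POD = 64      # assumed pod size

pod_pflops = TFLOPS_PER_TPU * DEVICES_PER_POD / 1000   # petaflops
pod_mem_tb = MEM_PER_TPU_GB * DEVICES_PER_POD / 1024   # terabytes

print(f"{pod_pflops:.1f} petaflops, {pod_mem_tb:.0f} TB memory per pod")
# → 11.5 petaflops, 4 TB memory per pod
```

That 11.5-petaflop total is what makes the "machine learning supercomputer" label plausible for a single pod.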

In a blog post, Google's Jeff Dean and Urs Hölzle describe the improvement in training times that the new TPUs have delivered: "One of our new large-scale translation models used to take a full day to train on 32 of the best commercially available GPUs; now it trains to the same accuracy in an afternoon using just one eighth of a TPU pod", they write.
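The quote above can be turned into a rough device-hours comparison. Only the device counts come from the quote; the hour figures ("a full day" as 24 hours, "an afternoon" as 6 hours) and the 64-device pod size are assumptions for illustration:

```python
# Rough device-hours comparison for the training runs in the quote.
# Hour estimates and pod size are assumptions; device counts are
# taken from the quote itself.
gpu_device_hours = 32 * 24       # 32 GPUs for a full day
tpu_devices = 64 // 8            # one eighth of an assumed 64-device pod
tpu_device_hours = tpu_devices * 6  # ~an afternoon

speedup = gpu_device_hours / tpu_device_hours
print(f"GPU: {gpu_device_hours} device-hours, "
      f"TPU: {tpu_device_hours} device-hours (~{speedup:.0f}x fewer)")
# → GPU: 768 device-hours, TPU: 48 device-hours (~16x fewer)
```

Under these assumptions the TPU run uses roughly a sixteenth of the device-hours, which is the kind of gap the blog post is highlighting.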

The second-generation Cloud TPU can now be used to train computationally intensive AI algorithms.
We'll be updating this article with more details. In addition, the company is launching a new TensorFlow Research Cloud that will provide researchers with free access to that hardware if they pledge to publicly release the results of their research. This training, combined with the massive computing power of the new TPU pods, will lead to incredibly fast results on Google's services, along with the services from anyone using TensorFlow. By offering TPU resources to researchers who agree to publish their findings, and possibly even open source their code, Google is also reiterating its commitment to the open source model.
