In artificial intelligence, hardware and algorithms can each be said to account for half of the field's progress, and at the chip level the industry holds an almost unanimous view: for deep learning, the GPU matters far more than the CPU. This is why NVIDIA's prominence in artificial intelligence has come to overshadow even Intel's. GPUs are without question the most popular hardware for training deep neural networks, an approach favored by companies such as Google, Microsoft, IBM, Toyota and Baidu, and over the last two years GPU makers have accordingly become the darlings of the industry.

As the undisputed leader in GPUs, NVIDIA has been making frequent moves. Earlier this year the company introduced the Tesla P100, a GPU aimed at deep neural networks, and released the NVIDIA DGX-1, a single-chassis deep learning supercomputer built around that GPU. Now that the DGX-1 has shipped, NVIDIA CEO Jen-Hsun Huang has personally delivered one to OpenAI, the artificial intelligence project co-founded by Elon Musk. What OpenAI will use the DGX-1 for, and how, is not yet known, but it is worth looking at what this deep learning supercomputer has to offer.

What is a deep learning supercomputer? As the name implies, it combines deep learning with supercomputing. The well-known "Tianhe-1" and "Tianhe-2" are supercomputers, but the category is not limited to such machines: high-performance computing (HPC) systems, such as those built on NVIDIA's Tesla series, can also be counted as supercomputers.

Training deep neural networks, especially very deep, many-layered ones, demands enormous computational and throughput capability, and GPUs have a natural advantage in this kind of workload: they offer excellent floating-point performance while sustaining the performance and accuracy of classification and convolution operations. GPU-equipped supercomputers have therefore become the preferred choice for training deep neural networks. In the Google Brain project, for example, 12 GPUs spread across 3 machines reached the performance of a CPU cluster of roughly 1,000 nodes.
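To make that GPU advantage concrete, here is a minimal sketch, assuming PyTorch with CUDA support is installed, that times the same large matrix multiplication (the core operation behind both fully connected and convolutional layers) on the CPU and then on the GPU. The matrix size and any measured speedup are illustrative only and will vary with the hardware.

    # Illustrative sketch, assuming PyTorch with CUDA support is installed.
    # Times one large matrix multiplication on CPU and (if available) GPU.
    import time
    import torch

    def time_matmul(device: str, n: int = 4096) -> float:
        """Multiply two n x n random matrices on `device`, return seconds."""
        a = torch.randn(n, n, device=device)
        b = torch.randn(n, n, device=device)
        if device == "cuda":
            torch.cuda.synchronize()  # finish any pending GPU work first
        start = time.perf_counter()
        c = a @ b
        if device == "cuda":
            torch.cuda.synchronize()  # wait for the GPU kernel to complete
        return time.perf_counter() - start

    print(f"CPU: {time_matmul('cpu'):.3f} s")
    if torch.cuda.is_available():
        print(f"GPU: {time_matmul('cuda'):.3f} s")

On typical hardware the gap runs to an order of magnitude or more, which is the same effect, scaled up, behind the Google Brain comparison cited above.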