DEEP LEARNING
Short bytes: Deep Learning has high computational demands. Developing and commercializing Deep Learning applications therefore requires a suitable hardware architecture, and there is intense ongoing competition to build efficient hardware platforms for deploying them. Here, we shall discuss the specific hardware requirements and how the future looks for Deep Learning hardware.
Deep Learning is the hottest topic of this decade and may well remain so for the forthcoming ones. Although it may seem so on the surface, Deep Learning is not all about math, creating models, learning and optimization. The algorithms must run on optimized hardware, and training on tens of thousands of examples can take a long time, even weeks. Hence there is a growing need for faster and more efficient hardware for Deep Learning networks.
It is easy to observe that not all workloads run efficiently on a CPU. Gaming and video processing require dedicated hardware, the Graphics Processing Unit (GPU); signal processing requires its own architecture, the Digital Signal Processor (DSP); and so on. People have also been designing dedicated hardware for learning. For instance, the AlphaGo computer that played Go against Lee Sedol in March 2016 used a distributed computing module consisting of 1920 CPUs and 280 GPUs. With NVIDIA announcing its new wave of Pascal GPUs, the focus is now balanced on both the software and hardware sides of Deep Learning. So, let's focus on the hardware aspect of Deep Learning.
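To get a feel for why training can take weeks, here is a rough back-of-envelope sketch in Python. The network shape, example count, and epoch count below are all hypothetical numbers chosen for illustration, not figures from any real system; the point is only that the multiply-add cost of dense layers compounds quickly into the tens of quadrillions of operations.

```python
def dense_flops(m, n):
    """Approximate cost of one dense layer's forward pass for a single
    example: one multiply and one add per weight, i.e. 2*m*n FLOPs."""
    return 2 * m * n

# Hypothetical 3-layer fully connected network: 1000 -> 4096 -> 4096 -> 10
layers = [(1000, 4096), (4096, 4096), (4096, 10)]
flops_per_example = sum(dense_flops(m, n) for m, n in layers)

# Training roughly triples the forward cost (forward pass plus backward
# pass), repeated over every example in every epoch.
examples = 1_000_000   # hypothetical dataset size
epochs = 100           # hypothetical training length
total_flops = 3 * flops_per_example * examples * epochs

print(f"forward FLOPs per example: {flops_per_example:,}")
print(f"total training FLOPs:      {total_flops:.2e}")
```

Even this modest toy network lands on the order of 10^16 training FLOPs; at a few GFLOP/s of sustained general-purpose CPU throughput that is days of compute, which is why massively parallel hardware like GPUs matters so much here.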