---
title: Machine Learning and Hardware Requirements
author: James
type: post
date: 2017-08-11T17:22:12+00:00
draft: true
url: /?p=195
medium_post:
- 'O:11:"Medium_Post":11:{s:16:"author_image_url";s:69:"https://cdn-images-1.medium.com/fit/c/200/200/0*naYvMn9xdbL5qlkJ.jpeg";s:10:"author_url";s:30:"https://medium.com/@jamesravey";s:11:"byline_name";N;s:12:"byline_email";N;s:10:"cross_link";s:2:"no";s:2:"id";s:12:"6e9abb882f26";s:21:"follower_notification";s:3:"yes";s:7:"license";s:19:"all-rights-reserved";s:14:"publication_id";s:2:"-1";s:6:"status";s:6:"public";s:3:"url";s:86:"https://medium.com/@jamesravey/machine-learning-and-hardware-requirements-6e9abb882f26";}'
categories:
- Uncategorized
---
_**With recent advances in machine learning techniques, vendors like [Nvidia][1], [Intel][2], [AMD][3] and IBM are announcing hardware offerings specifically tailored around machine learning. In this post we examine the key differences between “traditional” software and machine learning software and why those differences necessitate a new type of hardware stack.**_
Most readers would certainly be forgiven for wondering why NVidia (NVDA on the stock market), a company that rose to prominence manufacturing and distributing graphics processing chips for video games enthusiasts, is suddenly being mentioned in tandem with machine learning and AI products. You would also be forgiven for wondering why machine learning needs its own hardware at all. Surely a program is a program, right? To understand how these things are connected, we need to talk a little bit about how software runs and the key differences between a procedural application that you’d run on your smartphone and a deep neural network.
## How Traditional (Procedural) Software Works
&nbsp;<figure style="width: 293px" class="wp-caption alignright">
<img loading="lazy" src="https://i1.wp.com/openclipart.org/image/2400px/svg_to_png/28411/freephile-Cake.png?resize=293%2C210&#038;ssl=1" alt="Cake by freephile" width="293" height="210" data-recalc-dims="1" /><figcaption class="wp-caption-text">An algorithm is a lot like a cake recipe</figcaption></figure>
You can think of software as a series of instructions. In fact, that’s all an algorithm is. A cooking recipe that tells you how to make a cake step-by-step is a real-world example of an algorithm that you carry out by hand.
Traditional software is very similar to a food recipe in principle.
1. First you define your variables (a recipe tells you what ingredients you need and how much you’ll need of each).
2. Then you follow a series of instructions. (Measure out the flour, add it to the mixing bowl, measure out the sugar, add that to the bowl).
3. Somewhere along the way you&#8217;re going to encounter conditions (mix in the butter until the mixture is smooth or whip the cream until it is stiff).
4. At the end you produce a result (i.e. you present the cake to the birthday girl or boy).
A traditional Central Processing Unit (CPU) that you’d find in your laptop, mobile phone or server is designed to process one instruction at a time. When you are baking a cake, that’s fine, because often the steps depend on each other. You wouldn’t want to beat the eggs, put them in the oven and start pouring the flour all at the same time, because that would make a huge mess. In the same way, it makes no sense to send each character in an email at the same time unless you want the recipient’s message to be garbled.
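To make the recipe analogy concrete, here is a minimal Python sketch (the steps and quantities are entirely invented for illustration) of a procedural program: each statement executes one at a time, in order, just as a single CPU core would run it.

```python
# A toy "cake recipe" as a procedural program: each statement runs
# one at a time, in order, like a single CPU core executing it.

def bake_cake():
    # 1. Define the variables (the ingredients).
    flour_g = 200
    sugar_g = 150
    eggs = 3

    # 2. Follow a series of instructions in sequence.
    bowl = []
    bowl.append(("flour", flour_g))
    bowl.append(("sugar", sugar_g))
    bowl.append(("eggs", eggs))

    # 3. A condition: keep mixing until the batter is "smooth".
    smoothness = 0
    while smoothness < 5:
        smoothness += 1  # one stir per loop iteration

    # 4. Produce the result.
    return f"Cake made from {len(bowl)} ingredients after {smoothness} stirs"

print(bake_cake())
```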
## Parallel Processing and “Dual Core”
<figure style="width: 273px" class="wp-caption alignleft">
<img loading="lazy" src="https://i0.wp.com/openclipart.org/image/2400px/svg_to_png/25734/markroth8-Conveyor-Belt.png?resize=273%2C114&#038;ssl=1" alt="Conveyor Belt by markroth8" width="273" height="114" data-recalc-dims="1" /><figcaption class="wp-caption-text">CPUs have been getting faster at processing like more and more efficient cake making production lines</figcaption></figure>
Over the last two decades, the processing speed of CPUs has got faster and faster, which effectively means they can execute more and more instructions, still one at a time. Imagine moving from one person making a cake to a machine that makes cakes on a conveyor belt. However, consumer computing has also become more and more demanding, and with many homes globally connected to high-speed internet, multitasking – running more than one application on your laptop at the same time or looking at multiple tabs in your browser – is becoming more and more common.
Before parallel processing (machines that advertise being “dual core”, and more recently “quad core” or even “octo-core”), computers appeared to run multiple applications at the same time by doing little bits of each application and switching between them. Continuing our cake analogy, this would be like putting a chocolate cake in the oven and then proceeding to mix the flour and eggs for a vanilla sponge, all the while periodically checking that the chocolate cake isn’t burning.
Multi-processing (dual/quad/octo core) allows your computer to really run multiple programs at the same time, rather than just appearing to. This is because each chip has 2 (dual), 4 (quad) or 8 (octo) CPU cores, all working on data at the same time. In the cake analogy, we now have two chefs or even two conveyor-belt factory machines.
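As a rough illustration of the difference (with an invented `bake` function and timings, purely for the analogy), Python’s standard `multiprocessing` module can spread independent jobs across several cores:

```python
import time
from multiprocessing import Pool

def bake(flavour):
    # Pretend each cake takes one second of "work".
    time.sleep(1)
    return f"{flavour} cake done"

if __name__ == "__main__":
    flavours = ["chocolate", "vanilla", "lemon", "carrot"]

    # One chef (a single process) bakes the cakes one after another.
    start = time.time()
    [bake(f) for f in flavours]
    print(f"sequential: {time.time() - start:.1f}s")  # roughly 4 seconds

    # Four chefs (four processes) bake them at the same time.
    start = time.time()
    with Pool(processes=4) as pool:
        pool.map(bake, flavours)
    print(f"parallel:   {time.time() - start:.1f}s")  # roughly 1 second
```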
## How [Deep] Neural Networks Work
Neural networks are modelled around how the human brain processes and understands information. Like a brain, they consist of neurons, which get excited under certain circumstances (such as observing a particular word or picture), and synapses, which pass messages between neurons. Training a neural network is about strengthening and weakening the synapses that connect the neurons, manipulating which neurons get excited by particular inputs. This is more or less how humans learn too!
The thing about human thinking is that we don’t tend to process the things we see and hear in small chunks, one at a time, like a traditional processor would. We process a whole image in one go, or at least it feels that way, right? Our brains do a huge amount of parallel processing. Each neuron in our retinas receives a small part of the light coming in through our eyes and, through communication via the synapses connecting our brain cells, we assemble a single coherent image.
<img loading="lazy" class="alignright size-medium wp-image-196" src="https://i0.wp.com/brainsteam.co.uk/wp-content/uploads/2017/08/IMG_20170811_173437.jpg?resize=169%2C300&#038;ssl=1" alt="" width="169" height="300" srcset="https://i0.wp.com/brainsteam.co.uk/wp-content/uploads/2017/08/IMG_20170811_173437.jpg?resize=169%2C300&ssl=1 169w, https://i0.wp.com/brainsteam.co.uk/wp-content/uploads/2017/08/IMG_20170811_173437.jpg?resize=768%2C1365&ssl=1 768w, https://i0.wp.com/brainsteam.co.uk/wp-content/uploads/2017/08/IMG_20170811_173437.jpg?resize=576%2C1024&ssl=1 576w, https://i0.wp.com/brainsteam.co.uk/wp-content/uploads/2017/08/IMG_20170811_173437.jpg?w=1320&ssl=1 1320w, https://i0.wp.com/brainsteam.co.uk/wp-content/uploads/2017/08/IMG_20170811_173437.jpg?w=1980&ssl=1 1980w" sizes="(max-width: 169px) 100vw, 169px" data-recalc-dims="1" />
Simulated neural networks work in the same way. In a model learning to recognise faces in an image, each neuron receives a small part of the picture – usually a single pixel – carries out some operation and passes the message along a synapse to the next neuron, which carries out another operation. The calculations that each neuron makes are largely independent, unless it is waiting for the output of a neuron in the previous layer. That means that while it is possible to simulate a neural network on a single CPU, it is very inefficient, because the CPU has to calculate each neuron’s verdict about its pixel one at a time. It’s a bit like the end of the Eurovision song contest, where each country is asked for its own vote over the course of about an hour. Or, if you’re unfamiliar with our wonderful but [obscure European talent contest][4], it’s a bit like a government vote where each representative has to say “Yea” or “Nay” one after another. Even with a dual, quad or octo core machine, you can still only simulate a small number of neurons at a time. If only there was a way to do that…
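To see why this parallelises so well, here is a minimal NumPy sketch (the layer sizes and weights are made up): every neuron in a layer computes its own weighted sum over the same inputs, and none of those sums depends on any of the others, so they can all be evaluated at once rather than one by one.

```python
import numpy as np

rng = np.random.default_rng(0)

pixels = rng.random(784)            # a flattened 28x28 input image
weights = rng.random((128, 784))    # synapse strengths for 128 neurons
biases = rng.random(128)

# Each of the 128 neurons computes its own dot product over the same
# 784 inputs. The rows are independent, so a GPU (or vectorised CPU
# code) can evaluate them all simultaneously instead of one by one.
activations = np.maximum(0, weights @ pixels + biases)  # ReLU

print(activations.shape)  # (128,)
```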
## Not Just for Gaming: Enter NVidia and GPUs
&nbsp;<figure style="width: 273px" class="wp-caption alignright">
<img loading="lazy" src="https://i1.wp.com/openclipart.org/image/2400px/svg_to_png/213387/Video-card.png?resize=273%2C198&#038;ssl=1" alt="Video card by jhnri4" width="273" height="198" data-recalc-dims="1" /><figcaption class="wp-caption-text">GPUs with sporty go-faster stripes are quite common in the video gaming market.</figcaption></figure>
GPUs or Graphical Processing Units are microprocessors that were historically designed for running graphics-based workloads such as rendering 3D models in video games or animated movies like Toy Story or Shrek. Graphics workloads are also massively parallel in nature.
An image on a computer is made up of a series of pixels. In order to generate a coherent image, a traditional single-core CPU has to calculate what colour each pixel should be, one by one. A modern (1280×1024) laptop screen is made up of 1,310,720 pixels – that’s 1.3 million pixels. If we’re watching a video, which usually runs at 30 frames per second, we’re looking at nearly 40 million pixels per second that have to be processed. That is a LOT of processing. If we’re playing a video game, then on top of this your CPU has to deal with all the maths that comes with moving around a virtual environment and the behaviours and actions of the in-game characters. You can see how things quickly add up and your machine grinds to a halt.
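The arithmetic behind those figures is easy to check:

```python
width, height = 1280, 1024
frames_per_second = 30

pixels_per_frame = width * height                      # 1,310,720 pixels
pixels_per_second = pixels_per_frame * frames_per_second

print(f"{pixels_per_frame:,} pixels per frame")
print(f"{pixels_per_second:,} pixels per second")      # 39,321,600 - nearly 40 million
```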
GPUs, unlike CPUs, are made up of thousands – that’s right, not dual or octo but thousands – of processing cores, so they can do a lot of that pixel rendering in parallel. The video below, which is also hosted on the [NVidia website][5], gives an amusing example of the difference.
<div class="jetpack-video-wrapper">
<span class="embed-youtube" style="text-align:center; display: block;"><iframe class='youtube-player' width='660' height='372' src='https://www.youtube.com/embed/-P28LKWTzrI?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent' allowfullscreen='true' style='border:0;' sandbox='allow-scripts allow-same-origin allow-popups allow-presentation'></iframe></span>
</div>
GPUs trade off speed at handling sequential tasks for their massively parallel nature. Back to the cake analogy: a GPU is more like having ten thousand human chefs, versus a CPU, which is like having 2 to 8 cake-factory conveyor machines. This is why traditional CPUs remain relevant for running traditional workloads today.
## GPUs and Neural Networks
In the same way that the thousands of cores in a GPU can be leveraged to render an image by processing all of the pixels at the same time, a GPU can also be used to simulate a very large number of neurons in a neural network at the same time. This is why NVidia et al., formerly famous for rendering the cars and tracks in your favourite racing simulation, are now steering real self-driving cars via simulated deep neural networks.
You don’t always need a GPU to run a neural network. When building a model, the training is the computationally expensive bit. This is where we expose the network to thousands of images and change the synapse weights according to whether the network provided the correct answer (e.g. is this a picture of a face? Yes or no?). Once the network has been trained, the weights are frozen, and typically the throughput of images is a lot lower. Therefore, it can sometimes be feasible to train your neural network on more expensive GPU hardware and then query or run it on cheaper commodity CPUs. Again, this all depends on the amount of usage your model is going to get.
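As a rough sketch of that split (using PyTorch purely as an example framework, which the post itself doesn’t mention, and a toy model), the same network can be trained on a GPU and then moved to a CPU for serving:

```python
import torch
import torch.nn as nn

# A tiny face / not-face classifier, purely for illustration.
model = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 2))

# Training: use the GPU if one is available, since this is the
# computationally expensive phase.
train_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(train_device)
# ... training loop runs here, updating the synapse weights ...

# Inference: freeze the weights and serve the model from a cheaper CPU.
model = model.to("cpu").eval()
with torch.no_grad():
    fake_image = torch.rand(1, 784)          # stand-in for a real input
    prediction = model(fake_image).argmax(dim=1)
    print(prediction)
```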
## Final Thoughts
In a world where machine learning and artificial intelligence software are transforming the way we use computers, the underlying hardware is also shifting. To stay relevant, organisations must understand the difference between CPU and GPU workloads and, as they integrate machine learning and AI into their businesses, make sure they have the right hardware available to run these tasks effectively.
[1]: http://www.nvidia.com/object/machine-learning.html
[2]: https://software.intel.com/en-us/ai-academy/training
[3]: https://medium.com/intuitionmachine/building-a-50-teraflops-amd-vega-deep-learning-box-for-under-3k-ebdd60d4a93c
[4]: https://www.youtube.com/watch?time_continue=45&v=hfjHJneVonE
[5]: http://www.nvidia.com/object/what-is-gpu-computing.html