Updated: Aug 4
I had the good fortune to attend the Nvidia GPU Technology Conference (GTC) in Munich and to give a small presentation there about a Proof of Concept I completed a while ago. The PoC was successful, involved Nvidia GRID technology, and was therefore deemed interesting enough to present at GTC. The word that best describes GTC for me is "mindblowing", the keynote especially. Nvidia's CEO, Jensen Huang, is a brilliant presenter and delivered a session of almost two hours that went by in a flash. The session showed us what Nvidia is capable of doing with their GPUs. Whether it is image recognition, autonomous vehicles, or virtual and augmented reality, they are doing it, and doing it well. Their new flagship is the Nvidia Volta GPU, which lets you hold about 21 billion transistors in the palm of your hand. Imagine eight of those in a 2U 19-inch rack form factor, and a datacenter filled with these, and you might start to understand what kind of processing power we are talking about.
One of the simple comparisons in the presentation involved image recognition. A program had been trained, just by being fed enough labelled examples, to correctly name the flower in a picture simply by looking at it, and we saw it working live in the demonstration. Running on a machine with a very fast Intel Xeon processor, the program was identifying the contents of flower pictures at a speed of about 5 pictures per second. Jensen had one or two pop out to check that it was working correctly; the sunflower was indeed recognized as a sunflower, and so was the tulip, and so on. They then showed the same program running on an Nvidia Volta. The speed changed quite a bit... it was now doing about 500 pictures per second, of course with the exact same accuracy. Companies such as Facebook and Google are actually already using technology like this, so in the future you will be able to go to your family photo album, hit a search button, type in "grandma", and it will find all images of your dear old grandma for you, automatically. As Jensen explained, GPUs are very well suited for parallel processing: essentially doing a lot of things simultaneously, instead of in sequential order.
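That sequential-versus-parallel difference can be sketched with a toy example. The classifier, labels, and numbers below are my own stand-ins, not anything from the actual demo: classifying one picture at a time is a series of small matrix-vector products, while handing the whole batch over at once becomes a single large matrix product, which is exactly the kind of work a GPU spreads across thousands of cores simultaneously.

```python
import numpy as np

# Toy stand-in for a trained classifier: score each image against
# per-class weight vectors and pick the best-scoring class.
LABELS = ["sunflower", "tulip", "rose"]
rng = np.random.default_rng(0)
weights = rng.standard_normal((3, 64))  # 3 classes, 64 features per image

def classify_one(image):
    # Sequential style: one matrix-vector product per picture.
    return LABELS[int(np.argmax(weights @ image))]

def classify_batch(images):
    # Parallel style: one matrix-matrix product for the whole batch.
    # On a GPU the multiply-adds inside this product run simultaneously,
    # which is where the throughput jump comes from.
    return [LABELS[i] for i in np.argmax(images @ weights.T, axis=1)]

images = rng.standard_normal((500, 64))  # one "second" worth of pictures
one_by_one = [classify_one(img) for img in images]
all_at_once = classify_batch(images)
assert one_by_one == all_at_once  # same answers, very different throughput
```

The results are identical either way; only the amount of work done per step changes, which is why the demo could keep the exact same accuracy while running a hundred times faster.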
This GPU power is also required to let our cars drive themselves. Everything the car needs to decide, whether to apply the brakes, steer, or do anything else a car should do, is based on a lot of input. There are lidars, radars, distance sensors, cameras and more, all spewing out data to software that is supposed to act faster than you. All of that information needs to be processed at that same moment, which means you need a lot of processing power, AND the ability to make decisions based on that information.
Another application for GPUs is something that often leads people to call Nvidia the real Skynet, a reference to the apocalyptic movie The Terminator. Artificial Intelligence is not science fiction anymore; it has become science fact at a rapid pace. This kind of self-programming is based on programs running sequences and feeding the output back in as input, thus learning more or getting better at a given goal with each step. This can of course be done in numerous simultaneous iterations, speeding up learning greatly. The software is then essentially improving and rewriting itself, and this has already proven to give results that equal a human, or even surpass anything a human could do. The real question is: where does this end? No one really knows, as this field of research is still very new. One thing is for sure: Nvidia is at the front of it. Let's just hope that those apocalyptic movies stay fiction.
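The output-becomes-input loop described above can be illustrated with a deliberately tiny example of my own invention. Real learning systems are vastly more complex, but the pattern is the same: try something, score the result, and feed the best attempt so far back in as the starting point for the next one.

```python
import random

random.seed(42)

# Goal: home in on a hidden target number. Each step, the "output"
# (our best guess so far) becomes the "input" to the next attempt.
TARGET = 73

def score(guess):
    return -abs(guess - TARGET)  # higher is better

best = 0
for step in range(1000):
    # Try a small random tweak of the current best guess...
    candidate = best + random.choice([-5, -1, 1, 5])
    # ...and keep it only if it scores better: learning from feedback.
    if score(candidate) > score(best):
        best = candidate

print(best)  # prints 73: the loop has converged on the target
```

Running many of these trial-and-improve loops in parallel, as a GPU allows, is what makes the learning speed up so dramatically.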
The presentation I gave was a short impression of what the Proof of Concept was about, how we built and tested it, and what the benefits are for the radiology department of the hospital. Unlike the bigger rooms, where large professional video cameras were present, the smaller ones were recorded in a different way: the presentation slides were captured as a video, and the presenter's audio was dubbed over it to give you an easy way to follow along. Lucky you for not having to watch me talk; you can just listen while the PowerPoint presentation goes by: