How I gave my deep learning workload fast GPUs and inexpensive storage and learned to love the cloud
Full Featured (30 min.)
In recent years, machine learning techniques such as deep neural networks have found uses in diverse fields, producing results comparable to, and in some cases surpassing, those of human experts. Machine learning requires large amounts of training data as well as high-end GPUs and accelerators that enable parallel, efficient execution. But high-capacity storage is not designed to deliver the throughput those GPUs demand. This talk describes how we solved this mismatch and enabled IBM Deep Learning as a Service to take advantage of inexpensive, scalable object storage while keeping state-of-the-art GPUs fully fed.