Accelerating machine learning

Artificial Intelligence has been growing substantially for some years now, driven largely by Big Data and new advances in technology. Among those advances, Graphics Processing Units (GPUs) deserve much of the credit: they have enabled Machine Learning algorithms to "learn" faster (which is not to say better, just faster).


CPUs vs GPUs

A CPU is a general-purpose processor with a handful of powerful cores. For Machine Learning tasks, however, it takes approximately 70 times longer to train algorithms than a GPU. This is because GPUs are special-purpose processors that come with thousands of cores, which allow them to run large mathematical operations massively in parallel.
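
You can see this gap on your own machine. Below is a minimal sketch (assuming the PyTorch library is installed and, for the GPU path, a CUDA-capable card is present; the matrix size and repeat count are arbitrary illustrative values) that times the same matrix multiplication, the core operation of neural-network training, on both processors:

    import time
    import torch

    def time_matmul(device, size=4096, repeats=10):
        # Two large square matrices on the chosen processor
        a = torch.rand(size, size, device=device)
        b = torch.rand(size, size, device=device)
        # Warm-up run so one-time setup costs don't skew the timing
        torch.matmul(a, b)
        if device == "cuda":
            torch.cuda.synchronize()  # GPU ops run asynchronously; wait before timing
        start = time.perf_counter()
        for _ in range(repeats):
            torch.matmul(a, b)
        if device == "cuda":
            torch.cuda.synchronize()
        return (time.perf_counter() - start) / repeats

    cpu_time = time_matmul("cpu")
    print(f"CPU: {cpu_time:.4f} s per multiply")
    if torch.cuda.is_available():
        gpu_time = time_matmul("cuda")
        print(f"GPU: {gpu_time:.4f} s per multiply "
              f"({cpu_time / gpu_time:.0f}x faster)")

The exact speedup depends on the hardware, but the GPU's thousands of cores typically finish the same multiplication one to two orders of magnitude sooner.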


CPU vs GPU performance demo. Source: YouTube https://youtu.be/-P28LKWTzrI

GPUs were initially designed to accelerate graphics processing in video games. Once it became clear that the same hardware could perform other complex mathematical operations, they found their way into industries such as computational finance, crypto mining and, of course, Machine Learning.

Nvidia Corporation, one of the world's leading graphics-card developers and a major contributor to deep learning research, has made it possible to train algorithms that take full advantage of the computational power of GPUs.
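
In practice, this takes very little code. Here is a minimal sketch (assuming PyTorch, which builds on Nvidia's CUDA platform; the tiny model and random data are hypothetical, purely for illustration) where moving training onto the GPU is a single device change:

    import torch
    import torch.nn as nn

    # Run on the GPU when one is available, otherwise fall back to the CPU
    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Hypothetical toy network and random data, just to show the device change
    model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1)).to(device)
    inputs = torch.rand(256, 64, device=device)
    targets = torch.rand(256, 1, device=device)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    for epoch in range(100):
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()   # gradients are computed on the GPU when available
        optimizer.step()

The training loop itself is identical on both processors; only the device assignment changes, which is precisely what has made GPU acceleration so easy to adopt.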

This acceleration in the training of Machine Learning and Deep Learning models continues to fuel the rise of Artificial Intelligence. Processing millions of data points is now not only affordable; it also lets models approach human-level skills ever more quickly.


Let's talk. Contact us for further information. 

Freddy Linares

Director