The microprocessor industry has undergone a clear evolution over the last decade, shifting from the race for ever-higher clock frequencies to a push for more cores, a trend that today seems to have stalled, at least in the consumer segment. That is understandable, since the multicore performance of most conventional applications still leaves much to be desired.
This is plainly visible across many environments and applications. Having a processor with more than four cores does not usually bring real benefits to the average user, because most of the software we use is simply not written to take advantage of them.
In specific professional environments, however, many-core processors really shine, to the point that mixed CPU and GPU configurations are common, since the latter take parallelized workloads to an even higher level.
But why does this happen? The answer is simple: to take advantage of a multi-core processor, developers have to do their part and adapt their applications. That means extra work, and it can be genuinely tricky, since it is not just a matter of “decomposing” the software into small pieces so that each core handles one part; there are also serious problems that can limit performance, such as when one core needs data that another core is still working on before it can finish its own task, as the sketch below illustrates.
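For illustration only, here is a minimal sketch in C++ (the language is our assumption; this is unrelated to Swarm’s actual programming model) showing that splitting work across threads is the easy part, while merging the results through shared state forces the cores to wait on one another:

```cpp
#include <iostream>
#include <mutex>
#include <numeric>
#include <thread>
#include <vector>

// Illustrative sketch only: each thread sums its own slice of the data,
// but the shared total is protected by a mutex, so threads serialize
// when they merge their results -- the cross-core contention described above.
int main() {
    const std::size_t num_threads = 4;
    std::vector<int> data(1'000'000, 1);

    long long total = 0;
    std::mutex total_mutex;

    std::vector<std::thread> workers;
    const std::size_t chunk = data.size() / num_threads;

    for (std::size_t t = 0; t < num_threads; ++t) {
        workers.emplace_back([&, t] {
            const std::size_t begin = t * chunk;
            const std::size_t end =
                (t == num_threads - 1) ? data.size() : begin + chunk;

            // Summing a private slice scales nicely across cores...
            long long local = std::accumulate(data.begin() + begin,
                                              data.begin() + end, 0LL);

            // ...but updating the shared total requires the lock, so the
            // speed-up is no longer proportional to the number of cores.
            std::lock_guard<std::mutex> lock(total_mutex);
            total += local;
        });
    }

    for (auto& w : workers) w.join();
    std::cout << "total = " << total << '\n';
}
```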
In this regard, MIT has taken a major step forward with Swarm, a highly customizable 64-core solution equipped with algorithms that overcome these limitations and deliver considerably better performance, reaching in the best cases up to 75 times the performance of conventional processors.
It is an impressive advance for a technology that, in theory, should also be much more efficient than current solutions, although it will probably remain reserved for the professional sector for a few years.