Parallel Programming

Parallel programming is the ability to process a given activity on different computing resources at the same time, thus reducing its execution time.

You have probably wondered why laptop and cell phone processors now come with two, four, or even eight cores, right? After all, how does this number of cores benefit you? The simplest answer is: performance. However, that depends.

We are getting closer and closer to the limit of Moore’s Law, which describes the growth rate of the number of transistors(1) on a single processing chip. Each leap in chip architecture brought the end consumer (developer or not) a huge performance gain in the processes performed by the computer. However, as mentioned, we are reaching the limit of how much a single processing unit can be optimized, and this has opened other possibilities. It is in this context that parallel programming stands out.

Already well studied in the 1960s, many years before we approached this architectural limit, this programming paradigm has gained space and prominence in recent years. But what is parallel programming, anyway? Two programmers coding the same program together? Or even two programmers sharing the same keyboard? I don’t think so… 🙂

Parallel programming is the ability to process the same activity on different computing resources at the same time, thus reducing its execution time. Think, for example, of a traditional cake recipe. This recipe requires no exact sequence for adding the ingredients, which lets us add them to the mixing bowl in any order. To make the cake we need 2 eggs, 2 cups of wheat flour, 1 cup of milk, 1 cup of sugar, and two spoons of chocolate powder. Three more people have come to help you prepare the recipe. Right away, you realize you don’t have to collect all the ingredients alone: the four of you split the ingredients and fetch them at the same time, that is, in parallel. For the mixing, you don’t need your helpers and perform that step yourself. But remember that you still need to grease the pan and preheat the oven. What do you do? You ask for help again, so that while you mix the ingredients, two of your helpers each take one of these new activities. In the end, you realize it took much less time to make this cake with help than it would have alone. This thinking is the very basis of parallel programming!
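To make the analogy concrete, here is a minimal Python sketch, assuming an invented fetch_ingredient task whose one-second pause stands in for walking to the pantry. Four workers, playing the role of you and your three helpers, gather the five ingredients at the same time:

    import time
    from concurrent.futures import ThreadPoolExecutor

    INGREDIENTS = ["eggs", "flour", "milk", "sugar", "chocolate powder"]

    def fetch_ingredient(name):
        time.sleep(1)  # stands in for the walk to the pantry
        return name    # hand the ingredient back to the "mixer"

    start = time.perf_counter()
    # Four workers fetch the five ingredients concurrently.
    with ThreadPoolExecutor(max_workers=4) as helpers:
        gathered = list(helpers.map(fetch_ingredient, INGREDIENTS))
    elapsed = time.perf_counter() - start
    print(f"Gathered {gathered} in {elapsed:.1f}s")  # about 2s instead of about 5s

With four workers and five one-second tasks, the run takes roughly two seconds instead of the roughly five a single person would need, which is exactly the speedup the recipe story describes.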

Remember the many cores on our devices that we mentioned at the beginning of the text? They can be our cake-recipe helpers when running a computer program! Thus, when programming software in parallel, the developer needs to identify regions of the code that can be executed in parallel (such as collecting the ingredients for the cake) and delegate those activities so that each core does part of the work in parallel and delivers the result back to the main program, which manages the rest. But the programmer needs to tell the software to make use of all this available parallelism. Just as your helpers would have stood still had you not delegated activities to them in the cake recipe, so do the multiple cores on your device: the developer must delegate work to these parallel resources for the many cores to deliver any performance improvement.
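As a hedged illustration of such delegation, the sketch below splits a made-up workload, summing the squares of the first ten million integers, across one worker process per core using Python’s standard multiprocessing module; the chunking scheme and the numbers are assumptions chosen only for the example:

    from multiprocessing import Pool, cpu_count

    def partial_sum(bounds):
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))  # one core's share of the work

    if __name__ == "__main__":
        n = 10_000_000
        workers = cpu_count()          # one "helper" per available core
        step = n // workers
        # Split the range [0, n) into one chunk per worker.
        chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
                  for i in range(workers)]
        with Pool(workers) as pool:
            total = sum(pool.map(partial_sum, chunks))  # delegate, then combine
        print(total)

The pattern is the same as the recipe: identify independent pieces, hand them to the workers, and combine the results at the end.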

It is worth mentioning that this parallelization is not restricted to the cores of a single processor. We can perform parallel computing across different complete computers, aggregated by a high-speed connection, in what we call computer clusters. In these clusters, we can make use not only of the parallel cores but also of the entire structure of each computing node, such as its memory, disk, and so on. And as far as this may seem from your reality, it is closer than you think! Weather forecasting software, for example, would practically not exist without parallel computing (or rather, it would exist, but we would not know today’s forecast until next month). More and more, we need high processing power to compute complex calculations so that simulation results that impact our lives, such as the weather forecast, arrive in time to be useful. In these cases, our parallel cores alone won’t do, and we need a large conglomeration of computers performing these calculations in parallel, which is what we call supercomputers. But that topic is for the next text.
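For the cluster case, a rough sketch of the same sum-of-squares example follows, written against the third-party mpi4py package and assuming an MPI installation; it would be launched with something like mpirun -n 4 python sum_squares.py, where the script name and process count are illustrative:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's id within the cluster job
    size = comm.Get_size()   # total number of processes across the nodes

    n = 10_000_000
    step = n // size
    lo = rank * step
    hi = n if rank == size - 1 else lo + step
    local = sum(i * i for i in range(lo, hi))  # this process's share

    # Combine every process's partial result on process 0.
    total = comm.reduce(local, op=MPI.SUM, root=0)
    if rank == 0:
        print(total)

Each process computes its own slice on its own node, and a single collective operation gathers the partial sums, so the program scales beyond the cores of any one machine.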

Do you like our content? Then follow us on social media to stay on top of innovation, and read our blog.

(1) Moore’s Law states that the number of components on a processing chip doubles roughly every 18 months. Most modern processors pack over a billion transistors onto a single silicon chip, and the more transistors fit on a single chip, the more processing power it offers.

Author: Jessica Dagostini is a Principal System Architect at beecrowd. She holds a Master’s in Computer Science from the Federal University of Rio Grande do Sul and has had the opportunity to participate in Programming Marathons around Latin America.
