Concurrency vs Parallelism in JavaScript: Which Is Best?


A lot of developers confuse concurrency and parallelism because the two terms sound almost identical. But are they the same?

NO!!!

In technical terms, they are very different. Concurrency refers to the ability of a system to handle multiple tasks by interleaving them and switching between them, whereas parallelism refers to the ability to execute tasks independently at the same time.

If you’re a developer struggling to understand these concepts, this article breaks down the differences between the two.

So, let’s start by understanding each in detail.

Concurrency vs Parallelism

What is Concurrency?

Concurrency is a common concept in web technologies. It is the ability of a program to make progress on multiple tasks during overlapping time periods. At first, it may appear complicated, but it is a great tool for improving user experience and making websites more interactive and dynamic.

Many programming languages provide flexibility and performance with concurrency. Despite being one of the most popular programming languages in the world, JavaScript was never designed for it. But over time, as the event loop laid the groundwork, Node.js turned JavaScript into a server-side concurrency solution.

The event loop, callbacks, promises, and async/await support in JavaScript make it possible. A single thread juggles multiple tasks, giving the illusion that they are happening at the same time.
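As a quick illustration of this single-threaded juggling, consider the order in which the following lines log (the `order` array is only there to make the sequence explicit):

```javascript
const order = [];

console.log('task 1 (synchronous)');
order.push('task 1');

// Hand this callback to the event loop; it runs only after the
// current synchronous code has finished, even with a 0 ms delay.
setTimeout(() => {
  console.log('task 3 (from the event loop)');
  order.push('task 3');
}, 0);

console.log('task 2 (synchronous)');
order.push('task 2');
```

Both synchronous logs appear before the timer callback, even though the delay is 0 ms: the single thread finishes its current work before the event loop hands it the next task.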


Here are some techniques in JavaScript for achieving concurrency:

Example 1: Callbacks for handling asynchronous operations

Callbacks are fundamental in event-driven programming. When you initiate an asynchronous task, such as reading a file or making an HTTP request, you pass Node.js a callback function as an argument. It is executed after the operation completes.

Example code:

// Function to simulate an asynchronous operation
function fetchData(callback) {
  setTimeout(() => {
    const data = 'Some data from the server';
    callback(data);
  }, 2000); // Simulating a delay of 2 seconds
}

console.log('Start fetching data...');

// Initiate fetching data
fetchData((data) => {
  console.log('Data received:', data);
});

console.log('Fetching data request sent...');

Explanation of code: 

The code above contains a function called fetchData, which takes one argument: a function itself. At first, it might seem strange to pass a function as an argument, but this is known as a “callback”. 

Let’s break down what’s happening in this code:

1. setTimeout: This built-in function triggers another function after a specified delay. Here, it’s set to 2000 milliseconds (2 seconds).

2. fetchData: This is the main function. It accepts a function called callback as an argument and simulates retrieving data from a server by using setTimeout to delay for 2 seconds before calling the callback function with some “data.”

3. console.log Statements:

“Start fetching data…”: Tells us that the data fetching process is beginning.

“Fetching data request sent…”: Confirms that the request has been sent off.

4. Executing the fetchData Function: When calling fetchData, we pass an arrow function as its callback argument. This function logs the received data to the console once the delay (simulating data retrieval) is over.

Output Explanation:

“Start fetching data…” and “Fetching data request sent…” print first because the code outside of the setTimeout executes immediately.

“Data received: Some data from the server” prints last because the callback within fetchData only runs after the 2-second delay.

So, the output will look something like this.

Output:

Start fetching data...

Fetching data request sent...

Data received: Some data from the server


Example 2: Promises and async/await

Although callbacks were enough for asynchronous programming, they can lead to deeply nested code (often known as “callback hell”). Promises and async/await were introduced to solve this problem.
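To see why nesting becomes a problem, here is a sketch of callback hell. The helpers getUser, getOrders, and getDetails are made up for illustration, each simulating an async operation with a short delay:

```javascript
// Hypothetical async helpers, each simulated with a short delay
function getUser(id, callback) {
  setTimeout(() => callback({ id, name: 'Ada' }), 10);
}
function getOrders(user, callback) {
  setTimeout(() => callback(['order-1', 'order-2']), 10);
}
function getDetails(order, callback) {
  setTimeout(() => callback(`details of ${order}`), 10);
}

let finalResult; // captured so we can inspect it later

// Each step nests inside the previous callback, drifting rightwards
getUser(1, (user) => {
  getOrders(user, (orders) => {
    getDetails(orders[0], (details) => {
      finalResult = details;
      console.log(details);
    });
  });
});
```

Each additional step adds another level of indentation; promises flatten this into a chain instead.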

A promise is an object that represents the eventual completion or failure of an asynchronous operation. A promise can be in one of three states: pending, fulfilled, or rejected. When the asynchronous operation completes, the promise is either fulfilled with a value or rejected with an error.

Example:

const checkCondition = new Promise((resolve, reject) => {
  let condition = true;

  if (condition) {
    resolve('Condition is true'); // Fulfilled
  } else {
    reject('Condition is false'); // Rejected
  }
});

checkCondition
  .then((message) => {
    console.log(message); // This runs if the promise is resolved
  })
  .catch((error) => {
    console.log(error); // This runs if the promise is rejected
  });

Code Explanation:

Creating a Promise: A new Promise is created with a function that takes two arguments, resolve and reject.

Pending State: Initially, the promise is in the pending state.

Resolution and Rejection: If the condition is true, the promise is resolved with a message Condition is true. If false, it is rejected with an error Condition is false.

Handling Outcomes: The .then() method is used to handle the resolved state, and .catch() is used to handle the rejected state.

Async/await is syntactic sugar on top of promises. It provides a more concise way to write promise-based code that reads like synchronous code, which makes it easier to read and write, while still using promises under the hood.

In async/await, the async keyword is used to declare an asynchronous function. The await keyword is used to wait for a promise to be resolved before continuing with the execution of the function. The await keyword can only be used inside an async function.

Example:

async function fetchData() {
  try {
    const response = await fetch('https://api.example.com/data');
    const data = await response.json();
    console.log(data);
  } catch (error) {
    console.error('Error fetching data:', error);
  }
}

fetchData();

Explanation:

async Function: The async keyword before a function makes it return a promise and allows you to use await within it.

Awaiting Promises: Inside the async function, await pauses the function’s execution until the promise resolves or rejects. Here, it waits for fetch() to get the data and then for response.json() to parse it.

Error Handling: The try…catch block is used to catch any errors that occur during the fetching and parsing process.
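For comparison, here is the same flow written with plain .then() chaining. To keep the sketch runnable anywhere, fakeFetch stands in for the real global fetch (both it and the URL are made up):

```javascript
// Stand-in for the real fetch(), so this sketch runs without a network
function fakeFetch(url) {
  return Promise.resolve({
    json: () => Promise.resolve({ url, ok: true })
  });
}

function fetchDataWithThen() {
  return fakeFetch('https://api.example.com/data')
    .then((response) => response.json()) // parse the body
    .then((data) => {
      console.log(data);
      return data;
    })
    .catch((error) => console.error('Error fetching data:', error));
}
```

The async/await version above does exactly this, just without the explicit chain.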

Pros of Concurrency:

  • Optimal Resource Allocation: Concurrency allows a program to make better use of system resources by performing multiple operations at the same time, rather than waiting for one task to complete before starting another.
  • Improved User Experience: By not blocking the main thread (the main path of execution in a program), concurrency allows applications to remain responsive. For example, a user interface can still respond to user inputs while performing background tasks like data loading.
  • Scalability: Concurrency enables systems to scale up efficiently to handle a high number of operations. This is especially beneficial in server environments where multiple client requests need to be handled simultaneously.

Cons of Concurrency:

  • Complexity in Management: Managing concurrent operations can be complex. Coordinating multiple tasks that run at the same time requires careful planning to ensure they don’t interfere with each other.
  • Difficulty in Tracing and Debugging: Debugging concurrent programs is often more challenging than debugging sequential programs. This is because the issues may only arise under specific timing conditions, making them hard to reproduce and fix.
  • Potential for Race Conditions and Deadlocks: Race Conditions Occur when two or more operations need to read or write on shared data and the final outcome depends on the order of execution. This can lead to unpredictable results. Deadlocks happen when two or more processes get stuck, each waiting for the other to release resources or complete tasks, resulting in a standstill.
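Even in single-threaded JavaScript, a race condition can appear whenever a read-modify-write is split across an await. This minimal sketch shows a lost update:

```javascript
let counter = 0;

async function incrementSlowly() {
  const current = counter;                     // read the shared value
  await new Promise((r) => setTimeout(r, 10)); // yield to the event loop
  counter = current + 1;                       // write back a stale value
}

// Both tasks read counter while it is still 0, so one increment is lost
Promise.all([incrementSlowly(), incrementSlowly()]).then(() => {
  console.log('counter =', counter); // 1, not the expected 2
});
```

The fix is to make the read and write atomic with respect to the await, for example by chaining the two increments so the second starts only after the first finishes.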

What is Parallelism?

Unlike concurrency, parallelism executes multiple tasks simultaneously on multiple threads. The goal of parallelism is to speed up the execution of programs by dividing work among multiple processing units, which allows many operations to be carried out at the same time.

Two tasks being executed simultaneously

Multiple tasks running independently of each other (not interleaved) are usually achieved through multiple threads and cores. Languages such as Java have this feature built in.

In JavaScript, parallelism isn’t inherently supported in the traditional sense due to its single-threaded nature. 

However, JavaScript can achieve similar effects to parallelism through various techniques and tools that allow code execution to occur outside of the single main JavaScript thread.

Here are some techniques in JavaScript for achieving parallelism:

Example 1: Web Workers

Web Workers allow JavaScript code to execute tasks on separate threads, ideally on multiple processing units, improving performance by dividing the workload across workers. Each worker runs independently and can perform heavy computation or I/O without affecting the responsiveness of the UI.

Example:

  1. Create a JavaScript file, ‘worker.js’ (you can name it whatever you like):

self.onmessage = function (event) {
    // A simple long-running task
    let sum = 0;
    for (let i = 0; i < 1000000000; i++) sum += i;
    self.postMessage(sum);
};
  2. Create a main JavaScript file, ‘main.js’:

const worker = new Worker('worker.js'); // Load worker.js into a worker thread

worker.onmessage = function (event) {
    console.log(`Result from worker: ${event.data}`);
}; // Receives the result from the worker and logs it

worker.postMessage('Start computation'); // Message to trigger the worker

console.log('Main thread is still responsive!'); // Proof that the main thread is not blocked

  3. To check whether main.js works, load it in a simple HTML file using the script tag:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Document</title>
    <script src="./main.js"></script>
</head>
<body>
</body>
</html>

After running this HTML file, you will see output like this in the browser console:

Main thread is still responsive!
Result from worker: 499999999067109000

The message “Main thread is still responsive!” appears first because it’s logged synchronously by the main thread, which remains unaffected by the time-consuming task. After a brief delay, the “Result from worker: 499999999067109000” message is logged, indicating the computation was performed by the web worker.

This delay is due to the worker executing the operation independently without blocking the main thread. This example demonstrates how web workers enable parallelism by offloading intensive tasks to separate threads, thus maintaining the responsiveness of the main thread and ensuring seamless user interactions.

Example 2: Parallel.js

Parallel.js is a library that aims to bring parallel processing capabilities to the browser and Node.js environments. It allows you to easily parallelize CPU-intensive tasks by distributing the workload across multiple CPU cores. 

Steps to use Parallel.js in the browser or Node.js:

  1. Installation: You can install Parallel.js via npm or include it directly in your HTML file using a script tag.

<script src="https://unpkg.com/[email protected]/lib/parallel.js"></script>

OR

$ npm install paralleljs

  2. Usage: Once included, you can create a new instance of Parallel.js and define the task you want to parallelize using methods like `spawn` or `map`. The task can be a function that performs a CPU-intensive computation.


Example:

console.log('Main thread Response');

// Define the initial data
const data = [];
for (let i = 0; i <= 10000; i++) {
  data.push(i);
}

// Create a new Parallel.js instance with the initial data
const parallel = new Parallel(data);

// Define the task to be parallelized
function heavyComputation(x) {
  // CPU-intensive computation (a trivial stand-in)
  return x + x;
}

// Map the data through heavyComputation in parallel
parallel.map(heavyComputation).then(() => {
  console.log('Result:', parallel.data); // Expected output: [0, 2, 4, ..., 20000]
});
  3. Handling Results: Parallel.js provides a Promise-based interface for handling the results of parallel computations. You can use the `then` method to specify a callback function that will be executed when the parallel task is completed.
  4. Configuration: Parallel.js allows you to configure the number of workers (threads) to be used for parallel processing. By default, it uses the number of CPU cores available on the system.
// Specify the number of workers alongside the data
const parallel = new Parallel(data, { maxWorkers: 4 });
  5. Browser Compatibility: Parallel.js works in most modern browsers that support Web Workers, which are used behind the scenes to achieve parallelism. However, it’s worth noting that Web Workers have limitations, such as the inability to access the DOM directly.

Similarly, with async/await, which we discussed above under concurrency, we can simulate parallelism using Promise.all().

Example:

When tasks are inherently asynchronous, like fetching data from different APIs, you can use `Promise.all` to handle them concurrently:


async function fetchData() {
  const urls = [
    'https://api.example1.com/data',
    'https://api.example2.com/data',
    'https://api.example3.com/data'
  ];

  const fetchPromises = urls.map(url => fetch(url).then(response => response.json()));

  try {
    const results = await Promise.all(fetchPromises);
    console.log('All data fetched:', results);
  } catch (error) {
    console.error('Error in fetching data:', error);
  }
}

fetchData();

In this case, Promise.all() runs all fetch requests concurrently. If any of the requests fail, the entire Promise.all will fail, so you can handle errors accordingly.
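If one failed request should not discard the others, modern runtimes also offer Promise.allSettled, which reports every outcome instead of rejecting on the first failure. A small sketch, where the promises are stand-ins for real fetch calls:

```javascript
// Stand-ins for real network requests
const tasks = [
  Promise.resolve('data from api 1'),
  Promise.reject(new Error('api 2 is down')),
  Promise.resolve('data from api 3')
];

Promise.allSettled(tasks).then((results) => {
  for (const result of results) {
    if (result.status === 'fulfilled') {
      console.log('ok:', result.value);
    } else {
      console.log('failed:', result.reason.message);
    }
  }
});
```

Each entry in the result array has a status of either 'fulfilled' (with a value) or 'rejected' (with a reason), so partial failures can be handled per request.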

Pros of Parallelism:

  • Increased Performance: Parallelism significantly enhances performance by dividing tasks across multiple processors or cores. This allows multiple tasks to be executed simultaneously, reducing overall execution time for compute-intensive operations.
  • Optimal Resource Utilization: Parallelism makes efficient use of available hardware resources, such as multicore processors or GPUs, by distributing tasks across them. This can lead to better overall system performance.
  • Scalability: Parallel systems can be easily scaled by adding more processors or nodes, which is beneficial for handling large-scale computations and workloads. This scalability is essential in high-performance computing and big data applications.
  • Improved Throughput: By processing multiple tasks concurrently, parallelism increases the system’s throughput, allowing more tasks to be completed in a given time frame. This is particularly useful in environments requiring high processing power.
  • Enhanced Problem Solving Capabilities: Some complex problems can be solved more efficiently using parallel algorithms. Problems that can be divided into independent subtasks benefit greatly from parallel execution.

Cons of Parallelism:

  • Complexity in Implementation: Writing parallel programs is generally more complex than writing sequential ones. It requires careful design to manage task distribution, data partitioning, and synchronization between tasks.
  • Synchronization Overhead: Managing synchronization between parallel tasks introduces overhead, potentially offsetting some performance gains. Issues such as data dependencies, race conditions, and resource contention must be handled carefully.
  • Hardware Dependency: The benefits of parallelism are highly dependent on the underlying hardware. Systems with limited processing units may not see significant performance improvements, and specialized hardware may be required for optimal performance.
  • Debugging Challenges: Debugging parallel programs can be more difficult than debugging sequential programs due to the nondeterministic nature of task execution order. Identifying and resolving issues like race conditions and deadlocks can be challenging.
  • Resource Management: Efficiently managing and allocating resources to parallel tasks can be complex, especially in systems with limited resources. Poor resource management can lead to suboptimal performance and inefficiencies.

Difference Between Concurrency and Parallelism

1. Execution Method:

Concurrency: Involves managing multiple tasks in an overlapping manner. It allows multiple tasks to make progress within the same application by switching between them, often giving the illusion of simultaneous execution. This switching can occur on a single processing core.

Parallelism: Refers to the literal simultaneous execution of multiple tasks or pieces of a single task across multiple processors or cores, thus reducing overall execution time.

2. Resource Utilization:

Concurrency: Focuses on effectively utilizing computing resources by juggling multiple tasks. It deals with the logical structuring of tasks to ensure minimal downtime for computational resources and keeps single processors as busy as possible.

Parallelism: Aims to maximize physical hardware resources to increase throughput and computational speed. It often spreads out tasks across multiple cores or machines to handle large-scale computations faster.

3. Implementation Complexity:

Concurrency: Introduces complexity primarily in managing states and coordinating tasks to ensure data consistency and avoid issues like deadlocks and race conditions. Debugging concurrent systems can be difficult due to the nondeterministic nature of task execution order.

Parallelism: Involves complexity related to system resource management and coordination across multiple processing units. There is often overhead associated with dividing tasks and merging results, and special attention is needed to handle data splitting and task distribution efficiently.

4. Use Cases:

Concurrency: Beneficial in applications where handling multiple asynchronous activities is crucial. This includes web servers handling multiple web requests or interactive applications managing user input, file I/O, and network communication concurrently.

Parallelism: Suitable for speeding up CPU-intensive computational tasks that can be divided into independent subtasks. This is common in scientific computing, large-scale data processing, and applications like image and video processing, where tasks can be efficiently distributed across multiple cores.

Conclusion

In simple terms, concurrency is like multitasking on a single computer. While one task waits (like downloading data), another continues working, creating an impression of simultaneous processing. Parallelism, on the other hand, is like having multiple computers working together on different tasks at the same time.

Concurrency improves user experience by keeping apps responsive and efficiently juggling multiple tasks. However, it can get complex when tasks share data, risking errors like deadlocks. Parallelism speeds up processing by using multiple cores but requires effective task splitting.

Both techniques are valuable: concurrency helps handle many tasks efficiently, while parallelism enhances performance in data-heavy computations.

Dev Sathwara

I’m Dev Sathwara, a Backend API Developer Intern working with Node.js. I love coding and problem solving, especially when it comes to building powerful APIs.
Connect with me on Twitter and LinkedIn for updates on my journey and stay connected!