In the first three articles of the sorting algorithms series, we discussed *selection sort*, *insertion sort*, and *bubble sort*. We also analyzed their theoretical performance using asymptotic time complexities. The following table summarizes their time complexities.

| Algorithm | Best Case | Average Case | Worst Case |
| --- | --- | --- | --- |
| Selection sort | O(n²) | O(n²) | O(n²) |
| Insertion sort | O(n) | O(n²) | O(n²) |
| Bubble sort | O(n) | O(n²) | O(n²) |

In the first two articles of the sorting algorithms series, we looked at *selection sort* and *insertion sort*. Bubble sort is another simple sorting algorithm that is easy to understand and implement. The idea is to compare adjacent elements and swap them if they are in the wrong order. Let's use an example to understand how bubble sort works.

Suppose the array to be sorted is (8, 4, 3, 9, 1). The following diagram illustrates how bubble sort performs several passes over the array until it is completely sorted.
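The procedure just described can be sketched in C as follows, a minimal version without the common early-exit optimization (the function name is my own):

```c
// Bubble sort (ascending order): repeatedly compare adjacent elements
// and swap them when they are out of order. After pass i, the largest
// i + 1 elements have "bubbled up" to the end of the array.
void bubble_sort(int a[], int n) {
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - 1 - i; j++) {
            if (a[j] > a[j + 1]) {
                int tmp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = tmp;
            }
        }
    }
}
```

Calling `bubble_sort` on the example array {8, 4, 3, 9, 1} rearranges it into {1, 3, 4, 8, 9}.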

In the previous article, we looked at *selection sort*. Insertion sort is a simple sorting algorithm that is very similar to selection sort. Given an array of elements to be sorted, insertion sort works as follows:

- Divide the array into two subarrays: a *sorted* and an *unsorted* subarray (similar to selection sort).
- In each iteration, remove the first element from the unsorted subarray and insert it into its proper position in the sorted subarray (depending on whether we are sorting in ascending or descending order).
- Repeat until the last element of the unsorted subarray has been removed.

Let us look at…
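The steps above can be sketched in C as follows (ascending order; the function name is my own):

```c
// Insertion sort (ascending order): the prefix a[0..i-1] is the sorted
// subarray. Each iteration removes a[i] from the unsorted subarray and
// inserts it into its proper position by shifting larger elements right.
void insertion_sort(int a[], int n) {
    for (int i = 1; i < n; i++) {
        int key = a[i];
        int j = i - 1;
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
}
```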

Selection sort is one of the most basic sorting algorithms; it is easy to understand and implement. The idea is simple: given an array of elements to be sorted,

- Divide the array into two subarrays: a *sorted* and an *unsorted* subarray.
- In each iteration, find the minimum/maximum element in the unsorted subarray and swap it with the first element of the unsorted subarray. After swapping, that element becomes part of the sorted subarray. Swapping in the minimum value sorts the elements in ascending order, while swapping in the maximum value sorts them in descending order.
- Continue…
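These steps can be sketched in C as follows (ascending order, so we select the minimum; the function name is my own):

```c
// Selection sort (ascending order): the prefix a[0..i-1] is the sorted
// subarray. Each iteration finds the minimum of the unsorted suffix and
// swaps it with the first element of that suffix.
void selection_sort(int a[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int min = i;
        for (int j = i + 1; j < n; j++) {
            if (a[j] < a[min]) {
                min = j;
            }
        }
        int tmp = a[i];
        a[i] = a[min];
        a[min] = tmp;
    }
}
```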

All of us learned how to multiply two numbers when we were kids. In case we have forgotten (😄), let’s look at how to multiply two numbers 19 and 95.
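As a refresher, the grade-school (long) multiplication of 19 and 95 multiplies by one digit at a time and adds the partial products:

```
   19
×  95
-----
   95    (19 × 5)
 1710    (19 × 90)
-----
 1805
```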

For a new user, the Linux environment is not as user-friendly as Windows, and it takes some time to become familiar with the terminal and its commands. I remember struggling even to move between directories and searching Google for everything.

Here is a list of some basic Linux commands that would be useful for beginners.

*A. To Create a File*

`touch file_name`

For example, if we want to create a text file named mytext, then: `touch mytext.txt`

*B. To Edit a File*

`nano file_name`

After editing the file, press `Ctrl + X` to exit; nano will then prompt you to save the changes…

**Introduction to Cache Memory**

The majority of modern computers have three levels of cache memory, namely the L1, L2, and L3 caches. Often, the L1 and L2 caches reside on-chip while the L3 cache resides off-chip. These caches play an important role in creating the illusion of a fast main memory. But why do they have to create such an illusion?

There has always been a significant gap between the performance of the CPU and that of the main memory, and as a result, the main memory has become a performance bottleneck. The rate at which the CPU processes data is much higher than…

In the first two parts of the Bentley optimization series, we discussed the optimization rules related to Data Structures and Logic. This is the last part of the series, where we will explore the remaining two categories: optimization rules related to Loops and Functions.

The idea of hoisting is to avoid recomputing loop-invariant code inside the loop on every iteration. For example, let's have a look at the following code.

```c
#include <math.h>

void scale(double *x, double *y, int n) {
    for (int i = 0; i < n; i++) {
        y[i] = x[i] * exp(sqrt(M_PI/2));
    }
}
```

The value of the expression *exp(sqrt(M_PI/2))* is a…

In the first article of the Bentley Optimization Rules series, we listed four categories of optimization rules, namely rules related to (1) Data Structures, (2) Logic, (3) Loops, and (4) Functions, and discussed the rules related to data structures in detail. This article is the second part of the series, where we will be discussing the optimization rules related to Logic.

The idea of constant folding and propagation is to perform computations that depend only on constants at compile time rather than at runtime. For example, look at the following code sample.
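The original code sample did not survive in this excerpt; a minimal C illustration of the kind of code the rule targets might look like this (the names are my own, not the article's):

```c
// Every operand below is known at compile time, so the compiler can
// fold 60 * 60 into the constant 3600 and propagate that value onward,
// performing no multiplication at runtime.
enum { SECONDS_PER_MINUTE = 60, MINUTES_PER_HOUR = 60 };

int seconds_per_hour(void) {
    return SECONDS_PER_MINUTE * MINUTES_PER_HOUR;  /* folds to: return 3600; */
}
```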

In the above example, the values of all variables are constants…

In his book *Writing Efficient Programs*, published in 1982, Jon Louis Bentley outlined several rules for optimizing code. In this article, we will discuss a new set of Bentley rules, adapted from his original work, that focus on reducing work (the total number of operations that need to be performed).

We can divide these rules into four categories: rules related to data structures, loops, logic, and functions.

The idea of packing is to store more than one data value in a machine word. For example, consider the date “18 September 2020”. If we store this date…
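One way to pack such a date is to give each field just the bits it needs: 5 bits for the day (up to 31), 4 bits for the month (up to 12), and 12 bits for the year (up to 4095), so the whole date fits in a single 32-bit word. These field widths, and the helper names below, are my own illustration; the article's exact encoding may differ.

```c
#include <stdint.h>

// Pack day (5 bits), month (4 bits), and year (12 bits) into one word.
uint32_t pack_date(uint32_t day, uint32_t month, uint32_t year) {
    return (year << 9) | (month << 5) | day;
}

// Unpack the individual fields with shifts and masks.
uint32_t day_of(uint32_t d)   { return d & 0x1F; }
uint32_t month_of(uint32_t d) { return (d >> 5) & 0xF; }
uint32_t year_of(uint32_t d)  { return (d >> 9) & 0xFFF; }
```

For "18 September 2020", `pack_date(18, 9, 2020)` stores all three fields in 21 bits, and the accessors recover day 18, month 9, and year 2020.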

A computer science research student who loves to research, write, and travel.