Recently, I started getting into Computer Science proper, as I didn't get that opportunity during my college degree.

(P.S. - I did Electrical Engineering)

In the next few posts, I'll be sharing what I am learning these days. This is going to be more of a notebook summarizing the learning than a detailed write-up. I will try to write in detail on some important topics and findings, but that won't be the norm.

We are starting with algorithms.

An algorithm is a predetermined set of rules or steps that processes an input to produce an output.
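To make this concrete, here is a toy example of my own (not from any textbook): a small set of steps for finding the largest number in a list.

```python
def find_max(numbers):
    """Return the largest element by scanning the list once."""
    largest = numbers[0]      # step 1: assume the first element is the largest
    for n in numbers[1:]:     # step 2: walk through the remaining elements
        if n > largest:       # step 3: whenever we see a bigger value, keep it
            largest = n
    return largest            # step 4: what remains is the maximum

print(find_max([3, 1, 4, 1, 5, 9, 2, 6]))  # prints 9
```

The rules are fixed in advance and the same steps work for any input list, which is exactly what makes this an algorithm.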

Algorithms are present all around us. They are used in all kinds of electronic devices, in communication, in transportation, by mother nature, and even by your mind to do trivial tasks without you noticing. There are simple algorithms, and then there are complex algorithms.

We give a great deal of attention to the efficiency of algorithms, as it is often possible to do the same thing more efficiently, which can mean the difference between a task being accomplished in a matter of seconds and the same task taking months. Efficiency becomes a major concern particularly when the input size is very large. Sometimes efficiency draws the line between what is possible and what is not.
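To put rough numbers on the seconds-versus-months claim, here is a back-of-the-envelope sketch (the throughput figure of a billion operations per second is my own assumption, picked purely for illustration):

```python
# Assumption: the machine performs about 10**9 simple operations per second.
OPS_PER_SEC = 10**9
n = 10**8  # a hundred million items

linear_seconds = n / OPS_PER_SEC          # an O(N) algorithm: N operations
quadratic_seconds = n**2 / OPS_PER_SEC    # an O(N^2) algorithm: N*N operations

print(f"O(N):   {linear_seconds:.1f} seconds")            # a fraction of a second
print(f"O(N^2): {quadratic_seconds / 86400:.0f} days")    # months of compute
```

Same task, same machine; only the algorithm's rate of growth differs, and the wall-clock time goes from a fraction of a second to months.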

When we talk of efficiency, we could mean it in terms of memory, communication bandwidth, computational hardware, or computational time. Mostly, we concern ourselves with computational time, as with advancing technology the others are not very costly to improve upon.

Asymptotic efficiency: this is when we look at input sizes large enough that only the order of growth of the running time is relevant.

We are concerned with how the running time of an algorithm increases with the size of the input in the limit, as the size of the input increases without bound.

As the input grows bigger and bigger, the contributions of the smaller terms shrink and eventually become noise; the biggest term dominates and decides what the asymptotic complexity will look like.
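A quick numeric sketch of the dominant-term idea, using a polynomial I made up for the purpose:

```python
def f(n):
    # A made-up running time: f(n) = 3n^2 + 100n + 5.
    # The 3n^2 term should dominate as n grows.
    return 3 * n**2 + 100 * n + 5

for n in (10, 1_000, 100_000):
    quadratic_part = 3 * n**2
    # Fraction of f(n) contributed by the dominant term; approaches 1.
    print(n, quadratic_part / f(n))
```

At n = 10 the lower-order terms still matter, but by n = 100,000 the n^2 term accounts for essentially all of f(n), which is why we summarize f as Θ(n^2).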

We use three notations for complexity: theta (Θ), big-O (the letter O), and omega (Ω).

When we say g(x) = Θ( f(x) ), we mean two things: we have both an upper bound and a lower bound on g(x), each of which looks like f(x) up to a constant factor. If f(x) = x^2, this means g(x) is sandwiched between two constant multiples of x^2 for all sufficiently large x. The two bounding functions could be 1.5(x^2) and 2(x^2), or any similar variation.
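Here is a small numeric check of that sandwiching, with a g and the constants 1.5 and 2 chosen by me for illustration:

```python
def g(x):
    # A made-up function that is Theta(x^2): the 40x term is eventually
    # absorbed by the constant factors on x^2.
    return 1.8 * x**2 + 40 * x

# For all x >= 200, g(x) lies between the lower bound 1.5*x^2
# and the upper bound 2*x^2.
for x in range(200, 2000, 200):
    assert 1.5 * x**2 <= g(x) <= 2 * x**2

print("1.5*x^2 <= g(x) <= 2*x^2 holds for the sampled range")
```

Note that the bounds only need to hold beyond some threshold (here x = 200); for small x the 40x term can push g above 2x^2, and that is fine asymptotically.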

Generally Θ and O are used interchangeably, but to be precise, in the case of O we only care about the upper bound.

In the case of Ω, we only care about the lower bound.

Also, when it is not possible to bound a function both above and below with constant multiples of the same function, we specify O and Ω individually, as Θ makes no sense in that case.

Most running times fall into the following cases:

- O(1): constant, independent of the input size
- O(log N): rate of growth lower than that of N
- O(N): rate of growth directly proportional to the input size
- O(N log N): rate of growth less than N^2 but higher than that of the input size
- O(N^2): rate of growth proportional to the square of the input size

Generally, almost all running times can be drilled down to the above five cases.
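For reference, here is one textbook example per growth rate (these are standard examples I've picked, not drawn from the discussion above):

```python
def constant(arr):
    # O(1): indexing does the same work regardless of len(arr).
    return arr[0]

def logarithmic(arr, target):
    # O(log N): binary search halves the search range on every step.
    # Requires arr to be sorted.
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

def linear(arr):
    # O(N): one pass over every element.
    return sum(arr)

def linearithmic(arr):
    # O(N log N): comparison-based sorting (Python's sorted uses Timsort).
    return sorted(arr)

def quadratic(arr):
    # O(N^2): examining every pair of elements.
    return [(a, b) for a in arr for b in arr]
```

Binary search is a nice reminder of why the classes matter: on a sorted list of a million items it needs at most about 20 comparisons, while a linear scan may need a million.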

One important thing to note: it's not sufficient for an algorithm's rate of growth to be small (or even constant, i.e. O(1)). As an extreme example, suppose our algorithm takes an absurdly long time to complete and return its results; then no matter that its rate of growth is constant, it is not going to finish processing the data in any useful time frame.

It is of prime importance to understand that the time complexity I have been discussing is a measure of how the time taken by an algorithm grows as the input size grows, not of the absolute time taken.