Big O helps us understand how fast algorithms run.
It's like a filter that keeps only what's important about how long a function will take with n items.
What matters is how the run time changes as the data grows, say from a quick 300 milliseconds to a slow 30 seconds.
Imagine an equation 3x² + x + 1. If x is 5, the first part is the biggest (75), and the rest (5 and 1) are small. When x is huge, like 5 million, the first part is super big (75 trillion!) and the others barely count.
So in Big O, we just focus on the big part, which for this would be O(n²). This means that for n items, the time it takes grows roughly like n². This is how we simplify algorithms to see how they scale up.
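To make that concrete, here is a tiny sketch (the function name g is just for illustration) that evaluates each term of 3x² + x + 1 separately:

const g = (x: number) => {
  const quadratic = 3 * x * x; // the dominant term
  const linear = x;
  const constant = 1;
  return { quadratic, linear, constant, total: quadratic + linear + constant };
};

console.log(g(5));         // { quadratic: 75, linear: 5, constant: 1, total: 81 }
console.log(g(5_000_000)); // the quadratic term (75 trillion) dwarfs the other two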
Solving problems often presents multiple approaches, and evaluating their efficiency is crucial. The question "Can you do better?" in algorithm interviews refers to improving Time Complexity (TC) and Space Complexity (SC).
Time Complexity (TC): measures how the running time of a function grows as the input gets larger.
Space Complexity (SC): measures how much memory/space is used during execution.
In essence, asymptotic analysis helps compare and choose algorithms based on their time and space efficiencies.
Understanding the fundamentals of memory is crucial: knowing how data is stored under the hood helps you choose the right data structure for a given situation. The subject of memory is vast, so this article touches only on key high-level concepts, which are particularly relevant in coding interviews.
Understanding Memory Allocation
In the world of programming, every variable needs a place to reside. Envision memory as a bounded canvas divided into slots: a finite number of them, simulating real memory.
A program stores its variables in free memory slots, and when a value needs more than one slot, the slots are used consecutively. Each memory slot holds 8 bits, i.e. 1 byte, so a 32-bit integer occupies 4 back-to-back slots and a 64-bit integer occupies 8.
Accessing a memory slot is a very fast operation, denoted O(1). In summary, think of memory as a bounded canvas of slots inside the computer, capable of storing data. It is also worth remembering that running out of memory is a real possibility.
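As a rough illustration of consecutive byte slots (this example is mine, not the article's), an ArrayBuffer in TypeScript behaves exactly like such a bounded canvas of byte-sized slots:

// 8 slots, 1 byte each
const buffer = new ArrayBuffer(8);
const view = new DataView(buffer);

view.setInt32(0, 1_000_000); // a 32-bit integer fills slots 0..3
view.setInt32(4, 42);        // the next one fills slots 4..7

console.log(view.getInt32(0)); // 1000000, and reading any slot is O(1)
console.log(view.getUint8(0), view.getUint8(1), view.getUint8(2), view.getUint8(3)); // the 4 bytes of the first integer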
Academics use big O, big Θ (theta), and big Ω (omega) to describe runtimes.
For interviews we usually use O (big O) only, but we can actually describe the runtime of a function in three different ways. Consider the quick sort algorithm:
Best case: O(N) (for example, when all elements are equal, some implementations just traverse the array once).
Worst case: O(N^2) (for example, when the pivot repeatedly ends up being the smallest or largest remaining element).
Expected case: O(N log N), which is what typical inputs give.
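For reference, here is a minimal quicksort sketch (a simplified, non-in-place version using the last element as the pivot, just to ground the discussion; production implementations partition in place and choose pivots more carefully):

const quickSort = (arr: number[]): number[] => {
  if (arr.length <= 1) return arr;
  const pivot = arr[arr.length - 1];            // last element as pivot
  const rest = arr.slice(0, -1);
  const left = rest.filter((x) => x <= pivot);  // everything <= pivot
  const right = rest.filter((x) => x > pivot);  // everything > pivot
  return [...quickSort(left), pivot, ...quickSort(right)];
};

console.log(quickSort([9, 3, 7, 1, 8])); // [1, 3, 7, 8, 9]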
An O(N) algorithm can run faster than an O(1) algorithm for certain inputs, because Big O only describes how the runtime grows with input size, not how fast it is in absolute terms. That is also why we ignore constants in Big O notation.
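To see why, here is a contrived sketch (the 10,000 figure and both function names are made up for illustration): the "O(1)" routine always does 10,000 operations, so for arrays shorter than that the O(N) routine does less total work.

// O(1): a fixed amount of work, no matter how big the input is.
const constantButSlow = (_arr: number[]): number => {
  let x = 0;
  for (let i = 0; i < 10_000; i++) x += i; // always 10,000 iterations
  return x;
};

// O(N): work grows with the input, but beats the function above while arr.length < 10,000.
const linearButFast = (arr: number[]): number => {
  let sum = 0;
  for (const value of arr) sum += value;
  return sum;
};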
An algorithm that might look like O(2N) is actually O(N). Some might think that labeling an algorithm with two separate for loops as O(2N) is more accurate, but it's not: we're really just looking at the trend of runtime growth, not at precision in constants.
When dealing with terms like O(N^2 + N), we drop the less significant terms. Since even O(N^2 + N^2) simplifies to O(N^2), the additional N, which grows far more slowly, matters even less, so we ignore it. So O(N^2 + N) becomes O(N^2), O(N + log N) simplifies to O(N), and O(5 * 2^N + 1000 * N^100) simplifies to O(2^N).
The key is to focus on the term with the greatest impact on growth.
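As a quick illustration (a hypothetical helper, not one of the examples later in the article), a function with a nested loop followed by a single loop does N^2 + N units of work, and we still call it O(N^2):

function countWork(arr: number[]): number {
  let operations = 0;
  // Nested loops: N * N = N^2 operations, the dominant term.
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      operations++;
    }
  }
  // A single loop: N more operations, dropped in Big O.
  for (let i = 0; i < arr.length; i++) {
    operations++;
  }
  return operations; // N^2 + N, which we report as O(N^2)
}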
An ArrayList is like a flexible array that grows as you add more items. It resizes itself by creating a bigger array and moving items over when it gets full.
Here's the cool part about its speed: usually, adding an item is super quick, just O(1) time. But occasionally, when it's full and needs to resize, it takes longer, O(N) time, because it has to copy all items over to a new, bigger space. This slow step doesn't happen often.
So if you average out the quick and slow insertions, each insertion still effectively costs O(1) time. This average is what we call "amortized time." It's like the rare slow steps are spread out over the many fast ones, making the overall process pretty speedy!
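A toy sketch of that resizing behaviour (a simplified, hypothetical DynamicArray; real ArrayList implementations differ in details such as the growth factor):

class DynamicArray {
  private data: number[];
  private capacity = 2;
  private size = 0;

  constructor() {
    this.data = new Array(this.capacity);
  }

  push(value: number): void {
    if (this.size === this.capacity) {
      // Rare O(N) step: double the capacity and copy every item over.
      this.capacity *= 2;
      const bigger = new Array(this.capacity);
      for (let i = 0; i < this.size; i++) bigger[i] = this.data[i];
      this.data = bigger;
    }
    // Usual O(1) step: drop the value into the next free slot.
    this.data[this.size] = value;
    this.size++;
  }
}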
O(log N) runtimes often come from processes where you keep splitting things in half. Take binary search as an example: you're searching for a number in a sorted list by repeatedly dividing the list into halves.
Imagine you're looking for 9 in a list. First, you compare it to the middle number. If 9 is smaller, you look in the left half; if bigger, in the right half. Each step cuts your search area in half until you find 9 or have just one number left.
The key is how many times you can divide your list in half before you're down to one. For a list with 16 items, it goes like this: 16, 8, 4, 2, 1. Each step is a division by two.
Flipping this around: if you start with 1 and keep doubling, how many times until you reach your original number? That's what the log in O(log N) is all about. If 2^k equals your number (like 2^4 = 16), then k is log base 2 of that number (log2(16) = 4).
This is why binary search, or finding an item in a balanced binary tree, is O(log N). Each comparison halves the number of possibilities, quickly narrowing down your search.
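Here is what that looks like in code, an iterative binary search over a sorted number array:

const binarySearch = (sorted: number[], target: number): number => {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid;  // found it
    if (sorted[mid] < target) low = mid + 1; // keep only the right half
    else high = mid - 1;                     // keep only the left half
  }
  return -1; // not in the list
};

console.log(binarySearch([1, 3, 5, 9, 12, 20], 9)); // 3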
const f = (n: number) => {
  if (n <= 1) {
    return 1;
  }
  return f(n - 1) + f(n - 1);
};
This code's runtime is O(2^N), not the O(N) some might guess. It's a recursive function where each call generates two more calls until it reaches the base case (n <= 1).
To understand the runtime, visualize the call tree for f(4): f(4) makes two calls to f(3), each f(3) makes two calls to f(2), and so on. Each level has twice as many calls as the previous one, giving 2^0, 2^1, 2^2, ... up to 2^(N - 1) calls at the deepest level.
So the total number of calls is the sum of these, which is 2^N - 1, making the runtime O(2^N).
For space complexity, it's O(N), because at any point the maximum depth of the call stack is N, representing the chain of calls waiting to complete.
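If you want to convince yourself of the 2^N - 1 call count, here is a quick sketch that wraps the same function with a counter (the counter and the name fCounted are just for this experiment):

let calls = 0;
const fCounted = (n: number): number => {
  calls++; // count every invocation
  if (n <= 1) return 1;
  return fCounted(n - 1) + fCounted(n - 1);
};

fCounted(4);
console.log(calls); // 15, i.e. 2^4 - 1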
Space complexity is just like time complexity, but instead of measuring how long an algorithm takes, it measures how much memory it uses. Think of it like this:
Creating an array of size n? That's O(n) space, because you're using a slot for each item.
Creating a two-dimensional array of n by n? That's O(n^2) space, as the space grows with both the rows and the columns.
Also, don't forget about the space used when functions call themselves (recursion).
Each time a function calls itself, it takes up a bit of memory. So if a function calls itself n levels deep, that's O(n) space being used, because each pending call needs its own little memory spot to remember where it is and what it's doing.
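For instance, a recursive sum (a hypothetical sumTo helper) builds a call stack n frames deep, so it uses O(n) space even though it returns a single number:

const sumTo = (n: number): number => {
  if (n <= 0) return 0;
  // sumTo(n) waits for sumTo(n - 1), which waits for sumTo(n - 2), and so on:
  // up to n stack frames are alive at once, hence O(n) space.
  return n + sumTo(n - 1);
};

console.log(sumTo(5)); // 15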
Why Big O? It's not about calculating the Big O for every function in your code; it's about the mindset.
For example, realizing that nested loops might be costly is part of this mindset.
Key Points:
Big O is about understanding potential efficiency issues, but it's just one aspect of writing effective code. Always consider the broader context and aim for clarity and maintainability.
function foo(arr: number[]) {
  let sum: number = 0;
  let product = 1;
  for (let i = 0; i < arr.length; i++) {
    sum += arr[i];
  }
  for (let i = 0; i < arr.length; i++) {
    product *= arr[i];
  }
  return sum + product;
}
It will take TC: O(N). Iterating over the array twice doesn't matter: O(2N) is still O(N).
function printPairs(arr: number[]) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      console.log(`${i}-${j}`);
    }
  }
}
It will take TC: O(n^2).
function printUnorderedPairs(arr: number[]) {
  for (let i = 0; i < arr.length; i++) {
    // Start at i + 1 so each unordered pair is printed only once.
    for (let j = i + 1; j < arr.length; j++) {
      console.log(`${i}-${j}`);
    }
  }
}
It will take TC: O(N^2). The inner loop runs about N/2 times on average, roughly N^2/2 iterations in total, and dropping the constant 1/2 still leaves O(N^2).
function foo(N: number, M: number) {
  let a = 0, b = 0;
  for (let i = 0; i < N; i++) {
    a = a + Math.random();
  }
  for (let j = 0; j < M; j++) {
    b = b + Math.random();
  }
  return a + b;
}
TC: O(N + M). N and M are different, independent variables, so we have two traversals of different lengths and neither term can be dropped.
function foo(N: number) {
  let a = 0;
  for (let i = 0; i < N; i++) {
    for (let j = N; j > i; j--) {
      a = a + i + j;
    }
  }
  return a;
}
TC: O(N^2).
For each i, the inner loop runs N - i times, so the total number of iterations is
= N + (N - 1) + (N - 2) + ... + 1
= N * (N + 1) / 2
= 1/2 * N^2 + 1/2 * N
which is O(N^2).
function foo(N: number) {
  let k = 0;
  for (let i = Math.floor(N / 2); i <= N; i++) {
    for (let j = 2; j <= N; j = j * 2) {
      k = k + Math.floor(N / 2);
    }
  }
  return k;
}
TC: O(N log N). The outer loop over i runs about N/2 times; the inner loop doubles j each iteration, so it runs O(log N) times => O(N/2 * log N) => O(N log N).