Data Structures and Algorithms: 20 Problem-Solving Techniques by @yourdevopsguy

November 15th 2020

Software engineer: previously at Amazon and now at eBay. Certified Professional Cloud Architect.

This is the article I wish I had read when I started coding. I will dive deep into 20 problem-solving techniques that you must know to excel at your next interview.

They have helped me at work too and even given me ideas for a side project I am working on. Also, the last section includes a **step-by-step guide** explaining how to **learn data structures and algorithms**, with examples.

Furthermore, I recommend you read this post, where I outlined a high-level strategy to prepare for your next coding interview as well as the top mistakes to avoid.

I have grouped these techniques into the following categories:

- Pointer based
- Recursion based
- Sorting and searching
- Extending basic data structures
- Miscellanea

I will explain each of them, show how to apply them to coding problems, and leave you some exercises so that you can practice on your own. For your convenience, I have copied here the problem statements, but I have left links to all of the exercises.

You can copy-paste my solution and play around with it. I strongly recommend you code your solution and see if it passes the tests.

Some of the questions are better explained through an image or diagram. For these, I have left a comment asking you to open the link to get a graphical description of the problem.

This list is part of the study notes that I took before I applied to Amazon. I hope they will be as useful to you as they have been to me.

**1. Two Pointers**

This technique is very useful on *sorted arrays* and arrays whose elements we want to *group*.

The idea is to use two (or more) pointers to split the array into different areas or groups based on some condition:

- Elements smaller than, equal to and greater than a certain value
- Elements whose sum is too small or too large
- Etc.

The following examples will help you understand this principle.

Given an array of integers that is already sorted in ascending order, find two numbers such that they add up to a specific target number. The function twoSum should return indices of the two numbers such that they add up to the target, where index1 must be less than index2.

Notes:

- Your returned answers (both index1 and index2) are not zero-based.
- You may assume that each input would have exactly one solution and you may not use the same element twice.

Example:

- Input: numbers = [2,7,11,15], target = 9
- Output: [1,2]
- Explanation: The sum of 2 and 7 is 9. Therefore index1 = 1, index2 = 2.

**Solution**

Since the array *a* is sorted, we know that:

- The largest sum is equal to the sum of the last 2 elements
- The smallest sum is equal to the sum of the first 2 elements
- For any index *i* in [0, a.size() – 1), a[i + 1] >= a[i]

With this, we can design the following algorithm:

- We keep 2 pointers: *l*, starting at the first element of the array, and *r*, starting at the last.
- If the sum a[l] + a[r] is smaller than our target, we increment l by one (to replace the smallest operand in the addition with another one equal to or larger than it, at *l+1*); if it is larger than the target, we decrease r by one (to replace our largest operand with another one equal to or smaller than it, at *r-1*).
- We do this until a[l] + a[r] equals our target, or l and r point to the same element (since we cannot use the same element twice) or have crossed, indicating there is no solution.

Here is a simple C++ implementation:

```
vector<int> twoSum(const vector<int>& a, int target) {
    int l = 0, r = a.size() - 1;
    vector<int> sol;
    while (l < r) {
        const int sum = a[l] + a[r];
        if (target == sum) {
            sol.push_back(l + 1);
            sol.push_back(r + 1);
            break;
        } else if (target > sum) {
            ++l;
        } else {
            --r;
        }
    }
    return sol;
}
```

The time complexity is O(N), since we may need to traverse the N elements of the array to find the solution.

The space complexity is O(1), since we only need two pointers, regardless of how many elements the array contains.

There are other ways of solving this problem (using a hash table, for example), but I have used it just as an illustration of the two pointer technique.
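For reference, here is a sketch of that hash-table alternative. It runs in O(N) time with O(N) extra space and does not need the input to be sorted. The function name `twoSumHash` is mine, not part of the exercise; it keeps the 1-based indices the problem asks for.

```cpp
#include <unordered_map>
#include <vector>

// Hash-table sketch: map each value to its 1-based index, so we can look up
// "target - a[i]" in O(1) per element. Trades O(N) space for one pass.
std::vector<int> twoSumHash(const std::vector<int>& a, int target) {
    std::unordered_map<int, int> seen; // value -> 1-based index
    for (int i = 0; i < static_cast<int>(a.size()); ++i) {
        const auto it = seen.find(target - a[i]);
        if (it != seen.end())
            return {it->second, i + 1};
        seen[a[i]] = i + 1;
    }
    return {}; // no pair adds up to the target
}
```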

**Challenges**

Here are two variations of this exercise: three sum and four sum. They can be solved similarly by **reducing** them to this very same problem.

This is a very common technique: **transform a problem whose solution you don’t know to a problem that you can solve**.
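For instance, *three sum* reduces to this very problem: sort the array, fix the smallest element of the triplet, and run the two-pointer scan on the remaining suffix. A minimal sketch of the reduction – to keep it short, this version only reports whether some triplet adds up to the target, while the actual exercises ask for the triplets themselves:

```cpp
#include <algorithm>
#include <vector>

// Reduction sketch: for each fixed a[i], search a[i+1..n-1] for a pair
// summing to target - a[i] with the same two-pointer scan as twoSum.
bool threeSum(std::vector<int> a, int target) {
    std::sort(a.begin(), a.end());
    for (int i = 0; i + 2 < static_cast<int>(a.size()); ++i) {
        const int rest = target - a[i];
        int l = i + 1, r = a.size() - 1;
        while (l < r) {
            const int sum = a[l] + a[r];
            if (sum == rest)
                return true;
            if (sum < rest)
                ++l; // need a larger sum
            else
                --r; // need a smaller sum
        }
    }
    return false;
}
```

The overall cost is O(n^2): one two-pointer scan per fixed element, after an O(n log n) sort.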

Given a sorted array, nums, remove the duplicates in-place such that each element appears only once and return the new length.

Do not allocate extra space for another array, you must do this by modifying the input array in-place with O(1) extra memory.

Example 1:

- Given nums = [1,1,2],
- Output = 2

Example 2:

- Given nums = [0,0,1,1,1,2,2,3,3,4],
- Output = 5

It doesn’t matter what values are set beyond the returned length.

**Solution**

The array is **sorted** and we want to move duplicates to the end of the array, which sounds a lot like **grouping based on some condition**. How would you solve this problem using two pointers?

- You will need one pointer, *i*, to iterate through the array.
- And a second pointer, *n*, to delimit the area that contains no duplicates: [0, n).

The logic is as follows. If the values of the elements at index *i* (except *i* = 0) and *i-1* are:

- The same: we don’t do anything – this duplicate will be overwritten by the next unique element in *a*.
- Different: we add *a[i]* to the section of the array that contains no duplicates – delimited by *n* – and increment n by one.

```
int removeDuplicates(vector<int>& nums) {
    if (nums.empty())
        return 0;
    int n = 0;
    for (int i = 0; i < nums.size(); ++i) {
        if (i == 0 || nums[i] != nums[i - 1]) {
            nums[n++] = nums[i];
        }
    }
    return n;
}
```

This problem has linear time complexity and constant space complexity (it is usually the case for problems solved using this technique).

Given an array with n objects colored red, white, or blue, sort them in-place so that objects of the same color are adjacent, with the colors in the order red, white, and blue. Here, we will use the integers 0, 1, and 2 to represent the color red, white, and blue respectively.

Note: You are not supposed to use the library’s sort function for this problem.

Example:

- Input: [2,0,2,1,1,0]
- Output: [0,0,1,1,2,2]

**Solution**

The groups this time are:

- Smaller than 1
- Equal to 1
- Larger than 1

We can achieve this with three pointers.

This implementation is a bit tricky, so make sure you test it thoroughly.

```
void sortColors(vector<int>& nums) {
    int smaller = 0, eq = 0, larger = nums.size() - 1;
    while (eq <= larger) {
        if (nums[eq] == 0) {
            swap(nums[smaller], nums[eq]);
            ++smaller; ++eq;
        } else if (nums[eq] == 2) {
            swap(nums[larger], nums[eq]);
            --larger;
        } else {
            ++eq;
        }
    }
}
```

Since we need to traverse the array to sort it, the time complexity is O(N). The space complexity is O(1).

Just as a curiosity, this is an instance of the Dutch national flag problem described by Dijkstra.

Given a Linked List and a number n, write a function that returns the value at the n-th node from the end of the Linked List.

**Solution**

This is one of the most common variations of the two-pointer technique: introducing **an offset** between the pointers so that when one of them reaches a certain condition, the other one is at the position you are interested in.

In this case, if we move one of the pointers, *f*, n times and then start advancing both at the same time by one node, when *f* reaches the end of the list the other pointer, *s*, will point to the n-th node from the end.

Make sure you define what n = 1 means (last element or the element before the last element?), and avoid off-by-one errors.

The time and space complexity are the same as the previous problems’.
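The offset idea above can be sketched as follows, assuming the usual singly linked ListNode and taking n = 1 to mean the last node:

```cpp
// Minimal singly linked list node, as commonly defined in these exercises.
struct ListNode {
    int val;
    ListNode* next;
    explicit ListNode(int v) : val(v), next(nullptr) {}
};

// Offset-pointer sketch: advance f by n nodes first, then move f and s
// together; when f falls off the end, s is exactly n nodes from the end.
// Returns -1 if the list has fewer than n nodes.
int nthFromEnd(ListNode* head, int n) {
    ListNode* f = head;
    for (int i = 0; i < n; ++i) {
        if (!f)
            return -1; // list shorter than n
        f = f->next;
    }
    ListNode* s = head;
    while (f) {
        f = f->next;
        s = s->next;
    }
    return s->val;
}
```

If instead you wanted to *delete* the n-th node from the end, you would stop *s* one node earlier, so that you keep a handle on the predecessor.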

Now, you will have two pointers moving at different *speeds*: at every iteration, one of the pointers will advance one node and the other will advance two nodes. We can use this variation to:

- Get to the middle element of a linked list
- Detect cycles in a linked list
- Etc

As usual, it will become clearer once I present some examples.

Given a non-empty, singly linked list with head node head, return a middle node of the list. If there are two middle nodes, return the second middle node.

Example 1:

- Input: [1,2,3,4,5]
- Output: Node 3 from this list

**Solution**

Solving this problem in 2 passes is easy: on the first pass we compute the size of the list, *L*, and on the second pass we only advance *L/2* nodes to find the element at the middle of the list. This approach has linear complexity in time and constant in space.

How can we use 2 pointers to find the middle element in one single pass?

If one of the pointers, *f*, moves twice as fast as the other one *s*, when *f* reaches the end, *s* will be in the middle of the list.

Here’s my solution in C++. Make sure you take into account edge cases (empty list, lists of odd and even sizes, etc) when you test your code.

```
ListNode* middleNode(ListNode* head) {
    ListNode* slow = head;
    ListNode* fast = head;
    while (fast != nullptr && fast->next != nullptr) {
        slow = slow->next;
        fast = fast->next->next;
    }
    return slow;
}
```

Given a linked list, determine if it has a cycle in it. To represent a cycle in the given linked list, we use an integer pos which represents the position (0-indexed) in the linked list where the tail connects to. If pos is -1, then there is no cycle in the linked list.

Example 1:

- Input: head = [3,2,0,-4], pos = 1
- Output: true
- Explanation: There is a cycle in the linked list, where the tail connects to the second node.

**Solution**

The simplest solution is to add all the nodes to a hash set. When we traverse the list, if we get to a node that has already been added to the set, there is a cycle. If we get to the end of the list, there are no cycles.

This has a time complexity of O(L), where *L* is the length of the list, and a space complexity of O(L), since in the *worst* case – no cycles – we need to add all the elements of the list to the hash set.

Time complexity cannot be improved. However, space complexity can be reduced to O(1). Think for a minute how this can be achieved with two pointers moving at different speeds.

Let’s call these pointers fast and slow. For each node slow visits, fast will move two nodes forward. Why?

- If fast reaches the end of the list, the list does not contain any cycles.
- If there is a cycle, since fast moves twice as fast as slow, it is only a matter of time (iterations, to be more precise) before fast catches up with slow and both point to the same node, which indicates the existence of a cycle.

Now, let’s translate this solution into code:

```
bool hasCycle(ListNode* head) {
    ListNode* slow = head, *fast = head;
    while (fast) {
        slow = slow->next;
        fast = fast->next;
        if (!fast)
            break;
        fast = fast->next;
        if (slow == fast)
            return true;
    }
    return false;
}
```

Given an array, nums, containing n + 1 integers where each integer is between 1 and n (inclusive), at least one duplicate number must exist. Assuming that there is only one duplicate number, find it.

Example 1:

- Input: [1,3,4,2,2]
- Output: 2

**Solution**

This is the same fast/slow-pointer solution as in the previous problems, applied to an array instead of a linked list: treat every index as a node whose *next* pointer is nums[i]. Because one value appears twice, two indices point to the same node, so the sequence contains a cycle, and the duplicate value is the node where the cycle begins. Floyd’s algorithm finds it:

```
int findDuplicate(const vector<int>& nums) {
    int slow = nums[0], fast = slow;
    do {
        slow = nums[slow];
        fast = nums[nums[fast]];
    } while (slow != fast);
    slow = nums[0];
    while (slow != fast) {
        slow = nums[slow];
        fast = nums[fast];
    }
    return slow;
}
```

**Challenges**

Here are more problems that can be solved using this technique:

The sliding window technique eases the task of finding optimal *chunks of contiguous data* that meet a certain condition:

- Longest subarray that …
- Shortest substring containing …
- Etc

You can think of it as another variation of the two pointer technique, where pointers are updated separately based on a certain condition. Below is the basic recipe for this type of problems, in pseudocode:

```
Create two pointers, l and r
Create a variable to keep track of the result (res)
Iterate until condition A is satisfied:
    Based on condition B:
        update l, r or both
    Update res
Return res
```

Given a string, find the length of the longest substring without repeating characters.

Example 1:

- Input: “abcabcbb”
- Output: 3
- Explanation: The answer is “abc”, with the length of 3

**Solution**

*Find the length of the longest substring without repeating characters* sounds a lot like *finding optimal chunks of contiguous data that meet a certain condition*.

Based on the recipe I described above, you will need:

- Two pointers, *l* and *r*, to define our substring *s*.
- A variable, *sol*, to store the length of the longest substring we have seen so far.
- A way of keeping track of the characters that form *s*: a set, *seen*, will be perfect for this.

While iterating through the string:

- If the current character is in *seen*, you have to increment *l* to start removing elements from the beginning of *s*.
- Otherwise, add the character to *seen*, move *r* forward and update *sol*.

```
int lengthOfLongestSubstring(const string& s) {
    int sol = 0;
    int l = 0, r = 0;
    unordered_set<char> seen;
    while (r < s.size()) {
        if (seen.find(s[r]) == seen.end()) {
            sol = max(sol, r - l + 1);
            seen.insert(s[r]);
            ++r;
        } else {
            seen.erase(s[l++]);
        }
    }
    return sol;
}
```

**Challenges**

For more practice, you can try the following problems:

There might be simpler solutions, but focus on using this technique to get a better grasp of it.

I already published a long and detailed article on this topic that you can find here.

The idea behind backtracking is to explore all the potential solutions for a problem, in a smart way. It builds candidate solutions incrementally and as soon as it determines that a candidate solution is not viable, it *backtracks* to a previous state and tries the next candidate.

Backtracking problems present you with a list of choices. Should you:

- Place *this piece* in *this position*?
- Add *this number* to the set?
- Try *this number* in *this position* next?
- Etc.

After you have picked one of the options, it will get you a new list of choices, until you reach a state where there are no more choices: either you arrived at a solution or there is no solution.

Visually, you are moving from the root of a tree with every choice, until you get to a leaf. The basic high-level recipe (in pseudocode) for a backtracking algorithm is the following:

```
boolean backtracking(Node n) {
    if (isLeaf(n)) {
        if (isSolution(candidate)) {
            sol.add(candidate);
            return true;
        } else {
            return false;
        }
    }
    // Explore all children
    for (child in n) {
        if (backtracking(child))
            return true;
    }
    return false;
}
```

This can of course change depending on the problem:

- If you need *all* solutions, the helper function returns nothing (void) to avoid stopping when we find the first solution.
- To backtrack, you may have to bring your program to a previous state before you can move forward.
- After you choose a child, you need to detect if the candidate solution is viable or not: the definition of viable depends on the problem.
- Etc.

But the core idea is the same: examine, in a systematic way, all paths and backtrack as soon as the current path is no longer viable.

The n-queens puzzle is the problem of placing n queens on an n×n chessboard such that no two queens attack each other. Given an integer n, return all distinct solutions to the n-queens puzzle.

Each solution contains a distinct board configuration of the n-queens’ placement, where ‘Q’ and ‘.’ indicate a queen and an empty space, respectively.

Example:

- Input: 4
- Output: [ [“.Q..”, “...Q”, “Q...”, “..Q.”], [“..Q.”, “Q...”, “...Q”, “.Q..”] ]
- Explanation: There exist two distinct solutions to the 4-queens puzzle as shown above.

**Solution**

This is a classic backtracking problem:

- We need all solutions here, which is why the recursive function returns nothing, as I explained in the introduction of this section.
- Do not worry too much about the *isViableSolution* function for now. Try to see the recipe I gave you (slightly modified) in action.

```
class Solution {
public:
    vector<vector<string>> solveNQueens(int n) {
        vector<vector<string>> solutions;
        /**
         * This is usually solved with a vector of integers, where each
         * integer represents the position of the queen in that column.
         * This particular problem expects strings: each string represents
         * a row of the board.
         */
        vector<string> board(n, string(n, '.'));
        solveBoard(solutions, board, 0, n);
        return solutions;
    }

    void solveBoard(vector<vector<string>>& solutions, vector<string>& board, int col, int n) {
        if (col == n) {
            solutions.push_back(board);
            return;
        }
        for (int row = 0; row < n; row++) {
            if (isViableSolution(board, row, col)) {
                board[row][col] = 'Q';
                solveBoard(solutions, board, col + 1, n);
                // Backtracking - we bring our board to the previous state
                board[row][col] = '.';
            }
        }
    }

    bool isViableSolution(vector<string>& board, int row, int col) {
        const int n = board.size();
        // Same row, to the left
        for (int x = 1; x <= col; x++) {
            if (board[row][col - x] == 'Q')
                return false;
        }
        // Upper-left diagonal
        for (int x = 1; row - x >= 0 && col >= x; x++) {
            if (board[row - x][col - x] == 'Q')
                return false;
        }
        // Lower-left diagonal
        for (int x = 1; row + x < n && col >= x; x++) {
            if (board[row + x][col - x] == 'Q')
                return false;
        }
        return true;
    }
};
```

Given a string containing digits from 2-9 inclusive, return all possible letter combinations that the number could represent (check the link for diagram). Note that 1 does not map to any letters.

Example:

- Input: “23”
- Output: [“ad”, “ae”, “af”, “bd”, “be”, “bf”, “cd”, “ce”, “cf”].

**Solution**

For every number in the input, you have several letters to choose from. If you can draw a tree (this is what I do) where the branches are born from the different choices you take, chances are that you can apply backtracking.

*Note*: Before you start solving any problem, try different approaches: dynamic programming, greedy algorithms, divide and conquer, a combination of algorithms and data structures, etc. Coding is the *last step*.

My solution, in C++:

```
vector<string> letterCombinations(const string& digits) {
    if (digits.empty())
        return {};
    const vector<string> letters {"", "", "abc", "def", "ghi", "jkl", "mno", "pqrs", "tuv", "wxyz"};
    vector<string> sol;
    string candidate(digits.size(), ' ');
    h(sol, 0, candidate, letters, digits);
    return sol;
}

void h(vector<string>& sol, int idx, string& candidate, const vector<string>& letters, const string& digits) {
    if (idx == digits.size()) {
        sol.push_back(candidate);
        return;
    }
    for (const char& c : letters[digits[idx] - '0']) {
        candidate[idx] = c;
        h(sol, idx + 1, candidate, letters, digits);
    }
}
```

Since I already knew the size of the solution, I initialized my candidate with that size and just modified the character at position idx. If the size is not known in advance, this can be done instead:

```
string candidate; // instead of string candidate(digits.size(), ' ');
…
for (const char& c : letters[digits[idx] - '0']) {
    candidate.push_back(c);
    h(sol, idx + 1, candidate, letters, digits);
    candidate.pop_back();
}
```

Write a program to solve a Sudoku puzzle by filling the empty cells. Open the link to get a longer description, including an image of the puzzle.

**Solution**

In an interview, unless you have plenty of time, you will not need to implement *isViableSolution*, just to sketch it. I know a colleague who got this question in an on-site.

Even though the code is long, it is mostly because of *isViableSolution*. Otherwise, it is not very different from other backtracking problems.

```
constexpr int size = 9;  // board dimension
constexpr int nsize = 3; // sub-box dimension

void solveSudoku(vector<vector<char>>& board) {
    helper(board);
}

bool helper(vector<vector<char>>& board, int row = 0, int col = 0) {
    if (col == size) {
        col = 0;
        ++row;
        if (row == size) {
            return true;
        }
    }
    if (board[row][col] != '.') {
        return helper(board, row, col + 1);
    }
    for (char i = '1'; i <= '9'; ++i) {
        if (isViableSolution(board, row, col, i)) {
            board[row][col] = i;
            if (helper(board, row, col + 1))
                return true;
        }
    }
    board[row][col] = '.'; // backtrack: restore the empty cell
    return false;
}

bool isViableSolution(const vector<vector<char>>& board, int row, int col, char c) {
    for (const auto& n : board[row]) {
        if (n == c)
            return false;
    }
    for (int r = 0; r < size; ++r) {
        if (board[r][col] == c)
            return false;
    }
    const int br = row / nsize;
    const int bc = col / nsize;
    for (int i = 0; i < nsize; ++i) {
        for (int j = 0; j < nsize; ++j) {
            if (c == board[nsize * br + i][nsize * bc + j])
                return false;
        }
    }
    return true;
}
```

**Challenges**

I have created a separate, generic section for:

- Subsets
- Combinations
- Permutations
- Etc

Because conceptually they are similar: take the elements of a container (array, string, etc) and create subsets from them according to some rule(s).

You will notice that most of these are backtracking problems or very similar.

Given a set of distinct integers, nums, return all possible subsets (the power set).

Note: The solution set must not contain duplicate subsets.

Example:

- Input: nums = [1,2,3]
- Output: [ [3], [1], [2], [1,2,3], [1,3], [2,3], [1,2], [] ]

**Solution**

For every index *i*, you need to generate two solutions:

- One that contains a[i]
- One that does not contain a[i]

Until you get to the end of the array.

Here’s a simple implementation of what we have discussed.

```
vector<vector<int>> subsets(const vector<int>& nums) {
    vector<vector<int>> sol;
    vector<int> partial;
    h(sol, partial, nums);
    return sol;
}

void h(vector<vector<int>>& sol, vector<int>& partial, const vector<int>& nums, int idx = 0) {
    sol.push_back(partial);
    if (idx == nums.size()) {
        return;
    }
    for (int i = idx; i < nums.size(); ++i) {
        partial.push_back(nums[i]);
        h(sol, partial, nums, i + 1);
        partial.pop_back();
    }
}
```

This is a variant of the previous problem: the input may now contain duplicates, but the solution set must still not contain duplicate subsets. Try playing around with my code to see if you can change it to meet the new requirement. You have to change fewer than 5 lines.

Good luck!

**Solution**

```
vector<vector<int>> subsetsWithDup(vector<int> nums) {
    vector<vector<int>> sol;
    vector<int> partial;
    sort(nums.begin(), nums.end());
    h(sol, partial, nums);
    return sol;
}

void h(vector<vector<int>>& sol, vector<int>& partial, const vector<int>& nums, int idx = 0) {
    sol.push_back(partial);
    if (idx == nums.size()) {
        return;
    }
    for (int i = idx; i < nums.size(); ++i) {
        if (i != idx && nums[i] == nums[i - 1])
            continue;
        partial.push_back(nums[i]);
        h(sol, partial, nums, i + 1);
        partial.pop_back();
    }
}
```

Given a collection of distinct integers, return all possible permutations.

Example:

- Input: [1,2,3]
- Output: [ [1,2,3], [1,3,2], [2,1,3], [2,3,1], [3,1,2], [3,2,1] ]

**Solution**

Very similar to the previous problem, but this time we need to consider all the elements of the array in our candidates. The way we move them around is by swapping elements at different positions.

```
vector<vector<int>> permute(const vector<int>& nums) {
    vector<vector<int>> sol;
    vector<int> p(nums);
    h(nums, sol, p);
    return sol;
}

void h(const vector<int>& nums, vector<vector<int>>& sol, vector<int>& p, int idx = 0) {
    if (idx == nums.size()) {
        sol.push_back(p);
        return;
    }
    for (int i = idx; i < nums.size(); ++i) {
        swap(p[idx], p[i]);
        h(nums, sol, p, idx + 1);
        swap(p[idx], p[i]);
    }
}
```

Given a collection of numbers that might contain duplicates, return all possible unique permutations.

Example:

- Input: [1,1,2]
- Output: [[1,1,2], [1,2,1], [2,1,1] ]

**Solution**

We can apply here the same trick as before for combinations. If you could not come up with this “trick”, you can always use a set to store the solutions and then create and return an array from this set.

```
void helper(vector<vector<int>>& res, int pos, vector<int>& nums) {
    if (pos == nums.size()) {
        res.push_back(nums);
        return;
    }
    for (int i = pos; i < nums.size(); ++i) {
        if (i != pos && nums[i] == nums[pos])
            continue;
        swap(nums[i], nums[pos]);
        helper(res, pos + 1, nums);
    }
    rotate(nums.begin() + pos, nums.begin() + pos + 1, nums.end());
}

vector<vector<int>> permuteUnique(vector<int>& nums) {
    if (nums.empty())
        return {};
    sort(nums.begin(), nums.end());
    vector<vector<int>> sol;
    helper(sol, 0, nums);
    return sol;
}
```
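For reference, the set-based fallback I mentioned could look like the sketch below. It is less efficient – it generates duplicate permutations and then discards them – but it is easy to get right under pressure:

```cpp
#include <algorithm>
#include <set>
#include <vector>

// Generate every permutation with the plain swap-based recursion and let a
// std::set discard the duplicates that repeated values produce.
void permuteAll(std::vector<int>& nums, int idx,
                std::set<std::vector<int>>& out) {
    if (idx == static_cast<int>(nums.size())) {
        out.insert(nums);
        return;
    }
    for (int i = idx; i < static_cast<int>(nums.size()); ++i) {
        std::swap(nums[idx], nums[i]);
        permuteAll(nums, idx + 1, out);
        std::swap(nums[idx], nums[i]); // undo the choice (backtrack)
    }
}

std::vector<std::vector<int>> permuteUniqueViaSet(std::vector<int> nums) {
    std::set<std::vector<int>> out;
    permuteAll(nums, 0, out);
    return {out.begin(), out.end()};
}
```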

As you can see, there is no need to learn every single data structure and algorithm in the literature to solve a vast amount of problems. *What is valuable, and can be trained, is the ability to systematically think through a problem and the skills to turn your ideas into code*.

Challenges

For extra practice: Combination sum, Combination sum II and Combination sum III.

Sorting is not a problem-solving technique per se, but as you have seen in the previous sections, we have been able to solve many problems by either sorting the input or assuming it was sorted and then applying one of the other techniques.

When you are facing a new problem, ask yourself:

- **Can I sort the input?** Sometimes you cannot – for example, when you have to return indices, sorting may not be an option.
- **How would this problem change if the elements were sorted?**
- **How does sorting affect the overall complexity?** The best comparison-based sorting algorithms have a complexity of O(n log n) – sorting integers can be done in O(n).

For instance, our first problem (Two sum) can be solved in linear time with the two-pointer technique because the input is sorted. However, if we have to sort, the complexity becomes O(n log n).

Here you have a couple of problems that can be easily solved after sorting the input.

Other solutions trade space for time, so it is worth considering them before you start writing any code.

Given two strings s and t, write a function to determine if t is an anagram of s.

Example 1:

- Input: s = “anagram”, t = “nagaram”
- Output: true

Example 2:

- Input: s = “rat”, t = “car”
- Output: false

**Solution**

By definition, two strings are anagrams if they contain **the same characters**, possibly in a different order. Therefore, we can simply sort both strings and compare them. The overall time complexity is O(n log n).

```
bool isAnagram(string s, string t) {
    if (s.size() != t.size())
        return false;
    sort(s.begin(), s.end());
    sort(t.begin(), t.end());
    return s == t;
}
```
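If the strings are known to contain only lowercase ASCII letters, a counting-based sketch trades the sort for a fixed 26-entry frequency table, bringing the time complexity down to O(n) with O(1) extra space:

```cpp
#include <array>
#include <string>

// Counting sketch: tally characters of s, then consume them with t.
// Assumes lowercase ASCII letters; a wider alphabet would need a larger
// table or a hash map.
bool isAnagramCounting(const std::string& s, const std::string& t) {
    if (s.size() != t.size())
        return false;
    std::array<int, 26> count{}; // zero-initialized
    for (char c : s)
        ++count[c - 'a'];
    for (char c : t) {
        if (--count[c - 'a'] < 0)
            return false; // t has more of c than s does
    }
    return true;
}
```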

Given two arrays, write a function to compute their intersection.

Example 1:

- Input: nums1 = [1,2,2,1], nums2 = [2,2]
- Output: [2,2]

**Solution**

You can solve this problem by sorting both arrays and using a two-pointer based approach. Give it a try before reading my solution.

Imagine we have the following arrays:

- A = [1,2,4,5]
- B = [2,3,5,7,9,11]

And two pointers, *l* for A and *r* for B, starting at index 0 in each array.

- A[l] = 1
- B[r] = 2

We need to increment *l* to see if A has a 2: B can’t contain a 1 to the right of *r*, because it is sorted.

In short, we compare both elements:

- If they are the same, we include them in the array representing the intersection of both arrays and advance *both* pointers.
- If they are different, we move the pointer pointing to the smaller of the two elements.

```
vector<int> intersect(vector<int>& nums1, vector<int>& nums2) {
    sort(nums1.begin(), nums1.end());
    sort(nums2.begin(), nums2.end());
    vector<int> sol;
    int i = 0, j = 0;
    while (i < nums1.size() && j < nums2.size()) {
        if (nums1[i] == nums2[j]) {
            sol.push_back(nums1[i]);
            ++i;
            ++j;
        } else if (nums1[i] > nums2[j]) {
            ++j;
        } else {
            ++i;
        }
    }
    return sol;
}
```

The time complexity of this approach is O(n log n) – even though the two-pointer part is linear – and the space complexity is O(1), not counting the space needed to store the intersection; if we include it, it becomes O(min(length(A), length(B))).

**Challenges**

- K closest points to origin. In another section, I will present a different, slightly faster solution.
- Largest number

Most interval-related problems I have seen boil down to:

- **Modeling the interval** as an array of two elements, a pair/tuple, or a custom class (this is the cleanest option)
- **Sorting** the input
- Iterating through the sorted array and deciding what to do based on the starts/ends of the intervals

You can see this as yet another type of problem that can be simplified after sorting the input, which is why I have included it in this section.

I’m leaving here my solution to two exercises, based on what I have just described. Try them before reading my solutions.

Given a collection of intervals, merge all overlapping intervals.

Example 1:

- Input: intervals = [[1,3],[2,6],[8,10],[15,18]]
- Output: [[1,6],[8,10],[15,18]]
- Explanation: Since intervals [1,3] and [2,6] overlaps, merge them into [1,6].

**Solution**

```
vector<vector<int>> merge(vector<vector<int>>& intervals) {
    sort(intervals.begin(), intervals.end(), [](const auto& i1, const auto& i2) {
        return i1[0] < i2[0];
    });
    int i = 0;
    vector<vector<int>> sol;
    vector<int> curr(2);
    while (i < intervals.size()) {
        curr = intervals[i++];
        while (i < intervals.size() && intervals[i][0] <= curr[1]) {
            curr[1] = max(curr[1], intervals[i][1]);
            ++i;
        }
        sol.push_back(curr);
    }
    return sol;
}
```

Given two lists of closed intervals, each list of intervals is pairwise disjoint and in **sorted order**. Return the intersection of these two interval lists.

**Solution**

```
vector<vector<int>> intervalIntersection(const vector<vector<int>>& A, const vector<vector<int>>& B) {
    vector<vector<int>> sol;
    int i = 0, j = 0;
    while (i < A.size() && j < B.size()) {
        const int lo = max(A[i][0], B[j][0]);
        const int hi = min(A[i][1], B[j][1]);
        if (lo <= hi) {
            sol.push_back({lo, hi});
        }
        if (A[i][1] < B[j][1])
            ++i;
        else
            ++j;
    }
    return sol;
}
```

*Binary search* is a search algorithm that can be applied to sorted data structures (like arrays or binary search trees) with O(log n) time complexity and O(1) space complexity.

What you might not know is its implementation’s most common bug. Assuming we are performing a binary search on the elements in the range [l, r], the element in the middle is usually computed as:

`const int m = (l + r) / 2;`

Can you spot the problem here?

This line can overflow. The safe way of computing this middle element is:

`const int m = l + (r - l) / 2;`

Here’s a basic implementation of binary search, using C++ templates. If you understand how it works and how to implement it correctly, you can solve many problems that just have a little tweak in the requirements or that are *binary search* problems in disguise.

```
template<typename T>
int binary_search(const vector<T>& v, const T& k) {
    int l = 0, r = v.size() - 1;
    while (l <= r) {
        const int m = l + (r - l) / 2;
        if (k == v[m]) {
            return m;
        } else if (k > v[m]) {
            l = m + 1;
        } else {
            r = m - 1;
        }
    }
    return -1;
}
```

Compute and return the square root of x, where x is guaranteed to be a non-negative integer. Since the return type is an integer, the decimal digits are truncated and only the integer part of the result is returned.

Example:

- Input: 4
- Output: 2

**Solution**

We need to find a number in the range [0, x], which is sorted. It sounds a lot like a binary search problem.

It is not critical to know this, but we can optimize a little:

If *x* is 0 or 1, its square root is *x*, which lets us start testing numbers from 2. The upper bound of the range can be reduced to x/2, since (x/2)^2 >= x ⟺ x^2/4 >= x ⟺ x >= 4, which holds for every x >= 4. (For x = 2 and 3 the range [2, x/2] is empty and we fall back to the initial sol = 1, which is the right answer.)

With this, we can search in [2, x/2] and speed things up a bit.

```
int mySqrt(int x) {
    if (x == 0 || x == 1)
        return x;
    int sol = 1;
    int l = 2, r = x / 2;
    while (l <= r) {
        // 64-bit to avoid overflow in m * m
        const long long m = l + (r - l) / 2;
        if (m * m == x)
            return m;
        else if (m * m > x) {
            r = m - 1;
        } else {
            sol = m;
            l = m + 1;
        }
    }
    return sol;
}
```

**Challenges**

Have fun!

- Search insert position
- Random pick with weight
- Minimum in rotated sorted array
- Minimum in rotated sorted array 2

This is one of the techniques you need to know to explore trees and graphs. Since many problems can be modeled as graphs, you **must know** this technique. To implement it, we just need to use a queue *q* and add to this same queue the children of the nodes we process from *q*.

At any given point in the iteration, BFS visits all the nodes at the same distance from the origin. This will become clearer after some of these examples.

Given two words (beginWord and endWord), and a dictionary’s word list, find the length of the shortest transformation sequence from beginWord to endWord, such that:

- Only one letter can be changed at a time.
- Each transformed word must exist in the word list.

Notes:

- Return 0 if there is no such transformation sequence.
- All words have the same length.
- All words contain only lowercase alphabetic characters.
- You may assume no duplicates in the word list.
- You may assume beginWord and endWord are non-empty and are not the same.

Example 1:

- Input:beginWord = “hit”,endWord = “cog”,wordList = [“hot”,”dot”,”dog”,”lot”,”log”,”cog”]
- Output: 5
- Explanation: As one shortest transformation is “hit” -> “hot” -> “dot” -> “dog” -> “cog”, return its length 5.

**Solution**

I got asked this question during my on-site interview at Amazon. The idea is to model this problem using a graph:

- Nodes represent words
- Edges connect words that differ by only one letter

With this setup, this problem is equivalent to finding a path between two nodes in a graph, which BFS can solve. Since all edges have the same weight (1), we do not need Dijkstra or any other fancier algorithm.

```
int ladderLength(const string &beginWord, const string &endWord, const vector<string>& wordList) {
    if (beginWord == endWord)
        return 1;
    unordered_set<string> dict(wordList.begin(), wordList.end());
    queue<string> todo;
    todo.push(beginWord);
    dict.erase(beginWord);
    int ladder = 1;
    while (!todo.empty()) {
        ladder++;
        int n = todo.size();
        for (int i = 0; i < n; i++) {
            string word = todo.front();
            todo.pop();
            for (int j = 0; j < (int)word.size(); j++) {
                const char c = word[j];
                for (int k = 0; k < 26; k++) {
                    word[j] = 'a' + k;
                    if (dict.find(word) != dict.end()) {
                        if (word == endWord)
                            return ladder;
                        todo.push(word);
                        dict.erase(word);
                    }
                }
                word[j] = c;
            }
        }
    }
    return 0;
}
```

*After you visit the DFS section*:

What would happen if we use DFS instead? Do you see any benefits/drawbacks?

Given a binary tree, return the level order traversal of its nodes’ values. (ie, from left to right, level by level).

Open the link for a graphical description of the problem.

**Solution**

You only need a little tweak to the standard BFS algorithm here: track how many elements in the queue belong to the current level.

This is one approach that can be applied to many other problems.

```
vector<vector<int>> levelOrder(TreeNode* root) {
    if (!root)
        return {};
    vector<vector<int>> sol;
    queue<TreeNode*> q;
    q.push(root);
    vector<int> partial;
    while (!q.empty()) {
        int size = q.size(); // number of nodes in the current level
        while (size-- > 0) {
            auto n = q.front();
            q.pop();
            partial.push_back(n->val);
            if (n->left)
                q.push(n->left);
            if (n->right)
                q.push(n->right);
        }
        sol.push_back(partial);
        partial.clear();
    }
    return sol;
}
```

**Challenges**

Let me propose a different kind of challenge: building something, instead of solving abstract problems. I find it more fun and you can add them to your Github profile. Here are just two examples:

- Web crawler using BFS to explore all the links on a website.
- Minesweeper

Similar to BFS in its purpose: explore trees and graphs. DFS is not guaranteed to find the shortest path between two points, but it will find any existing path.

It is usually shorter to implement than BFS. Some people find it easier; others, because of the recursive calls, do not. It is up to you. Just keep potential stack overflow in mind if the recursion gets deep.

Some problems are so much easier to solve with DFS/recursion that it is worth practicing.
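When the recursion depth worries you, DFS can also be written iteratively with an explicit stack. A minimal sketch (the graph representation and the function name are my own) that counts the nodes reachable from a source:

```cpp
#include <stack>
#include <vector>
using namespace std;

// Sketch: iterative DFS with an explicit stack, avoiding deep recursion.
// Returns the number of nodes reachable from `src` (including `src`).
int dfsReachable(const vector<vector<int>>& adj, int src) {
    vector<bool> visited(adj.size(), false);
    stack<int> st;
    st.push(src);
    int count = 0;
    while (!st.empty()) {
        const int u = st.top(); st.pop();
        if (visited[u]) continue; // a node may be pushed more than once
        visited[u] = true;
        ++count;
        for (int v : adj[u])
            if (!visited[v])
                st.push(v);
    }
    return count;
}
```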

Given a 2d grid map of ‘1’s (land) and ‘0’s (water), count the number of islands. An island is surrounded by water and is formed by connecting adjacent lands horizontally or vertically. You may assume all four edges of the grid are all surrounded by water.

Example:

- Input: grid = [ [“1″,”1″,”1″,”1″,”0”], [“1″,”1″,”0″,”1″,”0”], [“1″,”1″,”0″,”0″,”0”], [“0″,”0″,”0″,”0″,”0”]]
- Output: 1

**Solution**

I got this problem at my first phone interview at Amazon.

As soon as you see a matrix, think of a graph. This problem (and its variations) is very straightforward:

- Iterate through the matrix
- For every 1 you find, increase the counter and *sink* the island

To sink the island, you need to visit all the surrounding 1s in the matrix, which means visiting all the neighbors of that node and all of their neighbors, which sounds a lot like a recursive problem.

You can try to solve this using BFS too, but DFS is much shorter.

```
const vector<vector<int>> dirs {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};

bool isValid(const vector<vector<char>> &m, int r, int c) {
    return r >= 0 && r < (int)m.size() && c >= 0 && c < (int)m[0].size() && m[r][c] == '1';
}

void sinkIslands(vector<vector<char>> &m, int r, int c) {
    m[r][c] = '0';
    for (const auto &d : dirs) {
        const int nr = r + d[0];
        const int nc = c + d[1];
        if (isValid(m, nr, nc))
            sinkIslands(m, nr, nc);
    }
}

int numIslands(vector<vector<char>>& grid) {
    int numIslands = 0;
    for (int r = 0; r < (int)grid.size(); ++r) {
        for (int c = 0; c < (int)grid[0].size(); ++c) {
            if (grid[r][c] == '1') {
                ++numIslands;
                sinkIslands(grid, r, c);
            }
        }
    }
    return numIslands;
}
```

Given a binary tree, check whether it is a mirror of itself (ie, symmetric around its center).

**Solution**

Many tree-related problems have relatively straightforward recursive solutions. This problem could be solved using BFS but DFS makes it so much easier.

I will leave this one here as an exercise for you. Just use my solution in case you get stuck.

```
bool helper(TreeNode* p, TreeNode* q) {
    if (!p && !q)
        return true;
    if (!p || !q)
        return false;
    return p->val == q->val && helper(p->left, q->right) && helper(p->right, q->left);
}

bool isSymmetric(TreeNode* root) {
    if (!root)
        return true;
    return helper(root->left, root->right);
}
```

**Challenges**

Take the same challenges and exercises I gave you for BFS and try to implement them using DFS instead. For more practice, pick a few additional tree and graph exercises.

You can see this algorithm as an application of DFS. Its implementation needs just one minor change to regular DFS: after processing all the children of a node, push that node onto a stack.

It is that simple.

The best way to intuitively understand what this algorithm achieves is to imagine a bunch of tasks, some of which depend on others (task 1 cannot start until task 2 has finished). A topological sort will list all these tasks in an order that respects these dependencies.
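The "push after the children" idea can be sketched as follows. This is a minimal version assuming the graph is already known to be acyclic (cycle detection is handled in the problem further down); the function names are mine:

```cpp
#include <vector>
using namespace std;

// Sketch: topological sort as DFS + "push after children".
// Assumes the graph is a DAG; an edge u -> v means "u before v".
void dfs(const vector<vector<int>>& adj, int u,
         vector<bool>& visited, vector<int>& order) {
    visited[u] = true;
    for (int v : adj[u])
        if (!visited[v])
            dfs(adj, v, visited, order);
    order.push_back(u); // the only change versus plain DFS
}

vector<int> topologicalSort(const vector<vector<int>>& adj) {
    vector<bool> visited(adj.size(), false);
    vector<int> order;
    for (int u = 0; u < (int)adj.size(); ++u)
        if (!visited[u])
            dfs(adj, u, visited, order);
    // Nodes were appended after all their descendants, so reverse
    // the list to get dependencies first.
    return {order.rbegin(), order.rend()};
}
```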

Let’s solve a problem using this algorithm.

There are a total of n courses you have to take, labeled from 0 to n-1. Some courses may have prerequisites, for example, to take course 0 you have to first take course 1, which is expressed as a pair: [0,1]

Given the total number of courses and a list of prerequisite pairs, return the ordering of courses you should take to finish all courses. There may be multiple correct orders, you just need to return one of them. If it is impossible to finish all courses, return an empty array.

Example 1:

- Input: 2, [[1,0]]
- Output: [0,1]
- Explanation: There are a total of 2 courses to take. To take course 1 you should have finished course 0. So the correct course order is [0,1].

Example 2:

- Input: 4, [[1,0],[2,0],[3,1],[3,2]]
- Output: [0,1,2,3] or [0,2,1,3]
- Explanation: There are a total of 4 courses to take. To take course 3 you should have finished both courses 1 and 2. Both courses 1 and 2 should be taken after you finished course 0. So one correct course order is [0,1,2,3]. Another correct ordering is [0,2,1,3].

**Solution**

This is the classical topological sort problem. There are a bunch of courses to take and some depend on others. This can be modeled as a directed graph. A topological sort would return *the ordering of courses you should take to finish all courses*.

A prerequisite to applying topological sort is that the graph is **directed and acyclic**. From the problem description, we can see that the graph is directed. We can detect if it contains any cycles *and* compute a topological sort in the same pass.

A more indirect but still valid approach would be to first check whether it has cycles and only if there are no cycles, compute a topological sort.

```
bool hasCycle(const vector<vector<int>>& adj, int i, vector<bool> &white, vector<bool> &grey, vector<bool> &black, vector<int>& sol) {
    // We have started exploring this node
    white[i] = false;
    grey[i] = true;
    for (const auto &n : adj[i]) {
        if (black[n]) // already fully explored
            continue;
        if (grey[n] || hasCycle(adj, n, white, grey, black, sol))
            return true;
    }
    grey[i] = false;
    black[i] = true;
    sol.push_back(i);
    return false;
}

vector<int> findOrder(int numCourses, vector<vector<int>>& prerequisites) {
    vector<vector<int>> adj(numCourses);
    for (const auto &p : prerequisites) {
        const int u = p[0];
        const int v = p[1];
        adj[u].push_back(v);
    }
    vector<bool> white(numCourses, true), grey(numCourses), black(numCourses);
    vector<int> sol;
    for (int i = 0; i < numCourses; ++i) {
        if (white[i] && hasCycle(adj, i, white, grey, black, sol))
            return {};
    }
    return sol;
}
```

**Challenges**

Here you can find a few more problems to practice.

Have fun!

I have seen this data structure (the deque) mostly used as another way to implement the sliding window technique you read about earlier in this article. The only difference from a standard FIFO queue is that you can insert and delete elements at **both ends** of the queue.

That’s it. Simple.

Let’s see how such a minor change can simplify this kind of problem.

Given an array, nums, there is a sliding window of size k which is moving from the very left of the array to the very right. You can only see the k numbers in the window. Each time the sliding window moves right by one position. Return the max sliding window.

Note: Open the link for a better understanding of the problem (there is an image).

Example:

- Input: nums = [1,3,-1,-3,5,3,6,7], and k = 3
- Output: [3,3,5,5,6,7]

**Solution**

We will use the deque to store *indices*, not values. We need the indices to know which elements are still part of the sliding window. For every iteration, there are four things to do:

- Remove elements in the deque that are outside of the current sliding window (at most one per iteration)
- Remove **all** elements in the deque that are smaller than the current element, since they cannot be the max of any later window
- Add the current element to the deque
- Once we have completed the first sliding window, we can start adding elements to our solution. By design, the element at the front of the deque is the maximum of the current window, which is what we are interested in.

This technique can be applied to find the minimum or other properties of a contiguous block of data in an array.

```
vector<int> maxSlidingWindow(const vector<int>& nums, int k) {
    vector<int> sol;
    deque<int> dq;
    for (int i = 0; i < (int)nums.size(); i++) {
        // Step 1: drop the index that just left the window
        if (!dq.empty() && dq.front() == i - k)
            dq.pop_front();
        // Step 2: drop indices of elements smaller than nums[i]
        while (!dq.empty() && nums[dq.back()] < nums[i])
            dq.pop_back();
        // Step 3: add the current index
        dq.push_back(i);
        // Step 4: the front holds the max of the current window
        if (i >= k - 1)
            sol.push_back(nums[dq.front()]);
    }
    return sol;
}
```

**Challenge**

Design your own circular deque, to fully grasp the internals of this data structure.
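If you get stuck, here is one possible starting point: a fixed-capacity circular deque backed by a vector. The class and method names are my own, not a standard API:

```cpp
#include <vector>
using namespace std;

// Sketch of a fixed-capacity circular deque backed by a vector.
class CircularDeque {
    vector<int> buf;
    int head = 0, count = 0; // head: index of the front element
public:
    explicit CircularDeque(int capacity) : buf(capacity) {}
    bool empty() const { return count == 0; }
    bool full()  const { return count == (int)buf.size(); }
    bool pushFront(int x) {
        if (full()) return false;
        head = (head + (int)buf.size() - 1) % (int)buf.size();
        buf[head] = x;
        ++count;
        return true;
    }
    bool pushBack(int x) {
        if (full()) return false;
        buf[(head + count) % (int)buf.size()] = x;
        ++count;
        return true;
    }
    bool popFront() {
        if (empty()) return false;
        head = (head + 1) % (int)buf.size();
        --count;
        return true;
    }
    bool popBack() {
        if (empty()) return false;
        --count;
        return true;
    }
    int front() const { return buf[head]; }
    int back()  const { return buf[(head + count - 1) % (int)buf.size()]; }
};
```

The modular arithmetic on `head` is what makes the buffer "circular": both ends wrap around the underlying array instead of shifting elements.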

The best way to think of a trie is as an extension of a tree, where you store characters that form words as you move through the different branches of the trie.

There are variants where you store suffixes instead of prefixes, where you use compression to reduce the size of the trie, etc. But at its basic, it is another type of tree.

They are used everywhere:

- Auto-complete
- Spell checkers
- IP routing
- Longest prefix/suffix matching
- Etc.

Implement a trie with insert, search, and startsWith methods.

**Solution**

Here is a simple implementation of a trie (simple enough for an interview, of course). Compared to a plain tree node, each node just adds:

- A boolean indicating whether the node marks the end of a word
- A data structure holding pointers to the node’s children: a hash table, an array indexed by character, etc.

```
class Trie {
private:
    struct Node {
        bool isWord;
        unordered_map<int, Node*> children;
        Node() : isWord(false) {}
    };

    Node* root;

    Node* findNode(const string &word) {
        Node* curr = root;
        for (int i = 0; i < (int)word.size() && curr; ++i)
            curr = curr->children[word[i] - 'a'];
        return curr;
    }

public:
    /** Initialize your data structure here. */
    Trie() {
        root = new Node();
    }

    /** Inserts a word into the trie. */
    void insert(const string &word) {
        Node* curr = root;
        for (int i = 0; i < (int)word.size(); ++i) {
            const int c = word[i] - 'a';
            if (!curr->children[c])
                curr->children[c] = new Node();
            curr = curr->children[c];
        }
        curr->isWord = true;
    }

    /** Returns true if the word is in the trie. */
    bool search(const string &word) {
        Node* curr = findNode(word);
        return curr && curr->isWord;
    }

    /** Returns true if there is any word in the trie that starts with the given prefix. */
    bool startsWith(const string &prefix) {
        return findNode(prefix) != nullptr;
    }
};
```

Given a 2D board and a list of words from the dictionary, find all words on the board. Each word must be constructed from letters of adjacent cells, where “adjacent” cells are those horizontally or vertically neighboring. The same letter cell may not be used more than once in a word.

Example:

- Input: board = [ [‘o’,’a’,’a’,’n’], [‘e’,’t’,’a’,’e’], [‘i’,’h’,’k’,’r’], [‘i’,’f’,’l’,’v’] ] words = [“oath”,”pea”,”eat”,”rain”]
- Output: [“eat”,”oath”]

**Solution**

This might not be the simplest way of solving this problem, but it is a clear application of a trie.

At every step, you will have built some candidate string and need to check if it belongs to the dictionary. A hash set containing all the words in the dictionary would give very good performance. Why bother with a trie, then? Because a trie can tell you whether a path is worth exploring at all, improving the efficiency of our solution.

In our previous example, imagine we form the string “oa”.

We can check if this prefix (potential word) exists in our trie. It exists since we have added the word “oath”. Now imagine we keep moving right through our board and form the string “oaa”.

In our trie, there are no words that contain the prefix “oaa”, so we can **backtrack** at this point.

With a hash set that contains all the words, you could not do this type of prefix matching, unless you create two different tables: one for prefixes and another one for words.

This is a complex problem because it combines different elements and it is not easy to implement, so do not get discouraged if it takes you a few tries (no pun intended) to get it right.

```
class Solution {
    const vector<vector<int>> dirs {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};

    struct Trie {
        struct Node {
            bool isWord;
            unordered_map<char, Node*> children;
            Node() : isWord(false) {}
        };

        Node* root;
        Node* lastNode;

        Trie() { root = new Node(); }

        void insert(const string& w) {
            Node* cur = root;
            for (const auto &c : w) {
                if (cur->children.find(c) == cur->children.end())
                    cur->children[c] = new Node();
                cur = cur->children[c];
            }
            cur->isWord = true;
        }

        bool hasPrefix(const string &prefix) {
            Node* cur = root;
            for (const auto &c : prefix) {
                if (cur->children.find(c) == cur->children.end())
                    return false;
                cur = cur->children[c];
            }
            lastNode = cur;
            return cur != nullptr;
        }

        bool isValidWord(const string &w) {
            if (lastNode)
                return lastNode->isWord;
            Node* cur = root;
            for (const auto &c : w) {
                cur = cur->children[c];
                if (!cur)
                    return false;
            }
            lastNode = cur;
            return lastNode->isWord;
        }

        void deleteWord() {
            lastNode->isWord = false;
            lastNode = nullptr;
        }
    };

    Trie t;

public:
    vector<string> findWords(vector<vector<char>>& board, const vector<string>& words) {
        for (const auto& w : words)
            t.insert(w);
        vector<string> sol;
        for (int row = 0; row < (int)board.size(); ++row) {
            for (int col = 0; col < (int)board[0].size(); ++col) {
                string candidate(1, board[row][col]);
                if (t.hasPrefix(candidate))
                    addWords(board, row, col, sol, candidate);
            }
        }
        return sol;
    }

    void addWords(vector<vector<char>> &board, int row, int col, vector<string> &sol, string &candidate) {
        if (t.isValidWord(candidate)) {
            sol.push_back(candidate);
            t.deleteWord();
        }
        const char old = board[row][col];
        board[row][col] = '-'; // mark the cell as visited
        for (const auto &d : dirs) {
            const int nrow = row + d[0];
            const int ncol = col + d[1];
            if (nrow >= 0 && nrow < (int)board.size()
                && ncol >= 0 && ncol < (int)board[0].size()
                && board[nrow][ncol] != '-'
                && t.hasPrefix(candidate + board[nrow][ncol])) {
                candidate.push_back(board[nrow][ncol]);
                addWords(board, nrow, ncol, sol, candidate);
                candidate.pop_back();
            }
        }
        board[row][col] = old;
    }
};
```

**Challenges**

- Design your own autocomplete!

Some problems can be solved by using two different instances of the same data structure, so it is worth keeping it in mind when you get stuck in a problem. I have seen it mostly with:

- Queues
- Stacks
- Arrays

Do not limit yourself to these.

The median is the middle value in an ordered integer list. If the size of the list is even, there is no single middle value, so the median is the mean of the two middle values.

For example,

- [2,3,4]: the median is 3
- [2,3]: the median is (2 + 3) / 2 = 2.5

Design a data structure that supports the following two operations:

- void addNum(int num) – Add an integer number from the data stream to the data structure.
- double findMedian() – Return the median of all elements so far.

**Solution**

Keep the lower half of the numbers in a max-heap *L* and the upper half in a min-heap *H*, rebalancing after every insertion so that their sizes differ by at most one. The median is then the top of the larger heap or, if both heaps have the same size, the average of the two tops.

```
class MedianFinder {
    priority_queue<int, vector<int>, less<int>> L;    // max-heap: lower half
    priority_queue<int, vector<int>, greater<int>> H; // min-heap: upper half
public:
    MedianFinder() { }

    void addNum(int x) {
        if (!L.empty() && x > L.top())
            H.emplace(x);
        else
            L.emplace(x);
        // Rebalance so the sizes differ by at most one
        if (H.size() > L.size() + 1) {
            L.emplace(H.top());
            H.pop();
        } else if (L.size() > H.size() + 1) {
            H.emplace(L.top());
            L.pop();
        }
    }

    double findMedian() {
        if (H.size() == L.size())
            return (H.top() + L.top()) * 0.5;
        return H.size() > L.size() ? H.top() : L.top();
    }
};
```

**Challenges**

Practice with related problems, in increasing order of difficulty.

This section deserves a separate article. Here I will list some basic *tricks* and common bit manipulation problems.

This is the most comprehensive site I have found on this topic. Use it as a reference.
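Before the problems, here are a few of those basic tricks as one-liners (the helper names are mine; all operate on the i-th bit, 0-indexed from the least significant end):

```cpp
#include <cstdint>

// Common bit tricks worth having at your fingertips.
inline uint32_t setBit(uint32_t x, int i)    { return x |  (1u << i); }
inline uint32_t clearBit(uint32_t x, int i)  { return x & ~(1u << i); }
inline uint32_t toggleBit(uint32_t x, int i) { return x ^  (1u << i); }
inline bool     testBit(uint32_t x, int i)   { return (x >> i) & 1u; }
// x & (x - 1) clears the lowest set bit; x & (~x + 1) isolates it.
inline uint32_t lowestSetBit(uint32_t x)     { return x & (~x + 1u); }
```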

Given an array containing n distinct numbers taken from 0, 1, 2, …, n, find the one that is missing from the array.

Example 1:

- Input: [3,0,1]
- Output: 2

**Solution**

This problem can be solved just by using the XOR operator:

- Any number XORed with itself produces 0: a ^ a = 0
- Any number XORed with 0 produces the original number: a ^ 0 = a
- XOR is associative and commutative: a ^ (b ^ c) = (a ^ b) ^ c

If we XOR all the numbers in the array (distinct integers in the range [0, n]) with all the integers in [0, n], the pairs cancel out to *zero* and the missing number ends up XORed with 0 (resulting in itself), solving our problem.

```
int missingNumber(const vector<int>& nums) {
    int res = nums.size();
    for (int i = 0; i < (int)nums.size(); ++i)
        res ^= (i ^ nums[i]);
    return res;
}
```

Given an integer, write a function to determine if it is a power of two.

**Solution**

I got this one in an interview.

Powers of two can be expressed in binary as a single leading 1 followed by 0s:

- 2: 10
- 4: 100
- 8: 1000
- 16: 10000

With this, it is simple to figure out if a number is a power of two or not. You can do it fast if you know what the following line does (I am sure you can figure it out on your own):

`x & (x-1)`

This trick is worth knowing since it is used a lot.

```
bool isPowerOfTwo(int n) {
    return n > 0 && (n & (n - 1)) == 0;
}
```

Write a function that takes an unsigned integer and returns the number of ‘1’ bits it has (also known as the Hamming weight).

Example 1:

- Input: 00000000000000000000000000001011
- Output: 3
- Explanation: The input binary string 00000000000000000000000000001011 has a total of three ‘1’ bits.

**Solution**

This problem is very straightforward: iterate through all the bits in the input and count how many of them are 1s.

Try to use the trick I showed you in the previous exercise to improve the performance of this algorithm (in the average case).

```
int hammingWeight(uint32_t n) {
    int sum = 0;
    while (n != 0) {
        sum++;
        n &= (n - 1); // clears the lowest set bit
    }
    return sum;
}
```

**Challenges**

These are for you to practice the previous techniques.

This is another very frequent type of problem. You are given a list of elements and have to return the top K, defining *top* as:

- The largest/smallest
- The closest/furthest to a point
- The most frequent in the list
- Etc

I have seen some of the following (or some sort of variation) asked in interviews.

There is no single data structure that will always give the right solution, but the following elements are very useful:

- Hash table
- Priority queue
- Sorting (the input or as an intermediate step)

Priority queues usually provide a better complexity.
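When the problem allows returning the top K *in any order*, C++'s std::nth_element is another option worth knowing: it performs a partial selection in average O(N) time. A sketch (the function name is mine):

```cpp
#include <algorithm>
#include <functional>
#include <vector>
using namespace std;

// Sketch: top-K largest elements, in any order, via partial selection.
// Average O(N), versus O(N log K) for a heap or O(N log N) for a full sort.
vector<int> topKLargest(vector<int> nums, int k) {
    if (k >= (int)nums.size())
        return nums;
    // Partition so the k largest values end up in the first k slots.
    nth_element(nums.begin(), nums.begin() + k, nums.end(), greater<int>());
    nums.resize(k);
    return nums;
}
```

Note that std::nth_element only guarantees the partition, not any order inside it, so it does not help when the output must be sorted by frequency or value.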

Given a non-empty list of words, return the k most frequent elements. Your answer should be sorted by frequency from highest to lowest. If two words have the same frequency, then the word with the lower alphabetical order comes first.

Example 1:

- Input: [“i”, “love”, “leetcode”, “i”, “love”, “coding”], k = 2
- Output: [“i”, “love”]
- Explanation: “i” and “love” are the two most frequent words. Note that “i” comes before “love” due to a lower alphabetical order.

**Solution**

Pretty straightforward: count how many times each word appears (using a hash table) and *somehow* return the k most common elements.

For this last part, you either:

- Put all the elements with their frequencies in an array and sort it

Or

- Use a priority queue

It is worth knowing this second approach since it can be applied to other problems.

Here is a simple solution in C++ using a priority queue.

```
vector<string> topKFrequent(const vector<string>& words, int k) {
    // std::map keeps the words in alphabetical order
    map<string, int> freq;
    for (const auto &w : words)
        freq[w]++;
    // Min-heap of (frequency, word): the top is the weakest candidate
    auto l = [](const pair<int, string> &p1, const pair<int, string> &p2) {
        if (p1.first == p2.first)
            return p1.second < p2.second;
        return p1.first > p2.first;
    };
    priority_queue<pair<int, string>, vector<pair<int, string>>, decltype(l)> pq(l);
    for (auto it = freq.begin(); it != freq.end(); ++it) {
        if ((int)pq.size() < k) {
            pq.push({it->second, it->first});
        } else if (pq.top().first < it->second) {
            pq.pop();
            pq.push({it->second, it->first});
        }
    }
    vector<string> sol(k);
    while (!pq.empty()) {
        sol[--k] = pq.top().second;
        pq.pop();
    }
    return sol;
}
```

We have a list of points on the plane. Find the K closest points to the origin (0, 0). Here, the distance between two points on a plane is the Euclidean distance.

You may return the answer in any order. The answer is guaranteed to be unique (except for the order that it is in.)

Example 1:

- Input: points = [[1,3],[-2,2]], K = 1
- Output: [[-2,2]]

**Solution**

The obvious solution is to compute the distance to every single point, add them to an array, sort and get the top k. This is fine, but it can be improved with a priority queue.

We use a *max heap*, where the root of the heap is its maximum element. Why the maximum? The heap holds the K closest points seen so far, and its root is the furthest of those K. Whenever we find a point closer than the root, we drop the root and add the new point to the heap.

```
class Solution {
    typedef pair<int, vector<int>> pivi;
public:
    vector<vector<int>> kClosest(const vector<vector<int>>& points, int K) {
        // Max-heap on squared distance: the top is the furthest of the K kept
        auto l = [](const pivi& p1, const pivi& p2) {
            return p1.first < p2.first;
        };
        priority_queue<pivi, vector<pivi>, decltype(l)> pq(l);
        for (const auto &p : points) {
            // Avoid taking the square root. Won't change the result and it's faster
            const int d = p[0] * p[0] + p[1] * p[1];
            if ((int)pq.size() < K) {
                pq.push({d, {p[0], p[1]}});
            } else if (pq.top().first > d) {
                pq.pop();
                pq.push({d, {p[0], p[1]}});
            }
        }
        vector<vector<int>> sol;
        for (int i = 1; i <= K; ++i) {
            sol.push_back(pq.top().second);
            pq.pop();
        }
        return sol;
    }
};
```

These problems are common too. There is no unique approach to all of them, but if you know how to solve the problem of merging 2 lists, you can solve the problem of merging K lists by reducing it to a bunch of instances of the simpler problem. As I mentioned, this is a very powerful approach that you should always keep in mind.

Merge two sorted linked lists and return it as a new sorted list. The new list should be made by splicing together the nodes of the first two lists.

Example:

- Input: 1->2->4, 1->3->4
- Output: 1->1->2->3->4->4

**Solution**

We solve this problem with the same two-pointer technique we saw at the beginning of this article. In this case, we traverse linked lists instead of arrays, but the same ideas apply.

```
ListNode* mergeTwoLists(ListNode* l1, ListNode* l2) {
    ListNode dh(-1), *curr = &dh; // dummy head to simplify the edge cases
    while (l1 && l2) {
        if (l1->val <= l2->val) {
            curr->next = l1;
            l1 = l1->next;
        } else {
            curr->next = l2;
            l2 = l2->next;
        }
        curr = curr->next;
    }
    curr->next = l1 ? l1 : l2; // append whatever is left
    return dh.next;
}
```

Given an array of linked lists, where each linked list is sorted in ascending order, merge all the linked lists into one sorted linked list and return it.

Example 1:

- Input: lists = [[1,4,5],[1,3,4],[2,6]]
- Output: [1,1,2,3,4,4,5,6]

Explanation: The linked-lists are:

[1->4->5, 1->3->4, 2->6]

merging them into one sorted list:

1->1->2->3->4->4->5->6

**Solution**

This is the general version of the previous problem. As I said, you can reduce this problem to many instances of the previous problem by merging lists 2 at a time. However, here I will present a more efficient solution.

Now, instead of two sorted lists, we have K. We need to build the result by repeatedly picking the minimum among the current heads of all the lists; since the lists are sorted, that minimum is always one of the heads. Luckily, there is a data structure that returns its minimum element in O(1) and has an insertion complexity of O(log K): a priority queue.

For every list, we add to the priority queue a pair containing:

- The head of that list
- An index to remember to which list that element belongs

After we pop an element, we add its value to our solution and add to the priority queue the new head of that list (the pair stores the value and the index of the list).

With this information, try to solve the problem. You will learn much more from it than from reading my solution without trying first.

```
ListNode* mergeKLists(vector<ListNode*>& lists) {
    typedef pair<ListNode*, int> pni; // (node, index of its list)
    // Min-heap ordered by node value
    auto comp = [](const pni &p1, const pni &p2) {
        return p1.first->val > p2.first->val;
    };
    priority_queue<pni, vector<pni>, decltype(comp)> pq(comp);
    for (int i = 0; i < (int)lists.size(); ++i) {
        if (lists[i]) {
            pq.push({lists[i], i});
            lists[i] = lists[i]->next;
        }
    }
    ListNode dh(-1), *curr = &dh;
    while (!pq.empty()) {
        const auto p = pq.top(); pq.pop();
        curr->next = p.first;
        curr = curr->next;
        if (lists[p.second]) {
            pq.push({lists[p.second], p.second});
            lists[p.second] = lists[p.second]->next;
        }
    }
    return dh.next;
}
```

Rabin-Karp is a great example of how to design a simple and efficient algorithm by using a rolling hash function intelligently. It can be used to find a string *s* in a text *t*.

The basic ideas that you need to remember to be able to reconstruct this algorithm are the following:

- It is an improvement over the brute-force method, where you compare *s* against every candidate substring *c*, which is inefficient.
- If two strings are the same, they produce the same hash.
- The inverse is not true: two different strings may produce the same hash.
- Using a rolling hash function, we can compute the hash of the next candidate in O(1).

If we put all this together, we will efficiently compute hashes and only compare *c* and *s* when they have the same hash, reducing the number of comparisons and thus the average complexity of our algorithm (worst case, we need to compare all substrings and are back to the brute force method).
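To make the O(1) update concrete: treating a window of length k as a base-b number modulo m, sliding it one position right means dropping the leading character's contribution, shifting, and adding the new character. A standalone sketch (the base and modulus are arbitrary choices of mine) that checks each rolling update against a from-scratch hash:

```cpp
#include <string>
using namespace std;

// Direct polynomial hash of a string, base `base`, modulo `mod`.
long long hashOf(const string& w, long long base, long long mod) {
    long long h = 0;
    for (char c : w)
        h = (h * base + c) % mod;
    return h;
}

// Returns true if the O(1) rolling updates agree with direct hashes
// for every window of length k in s.
bool rollingMatchesDirect(const string& s, int k) {
    const long long base = 26, mod = 10000019; // arbitrary demo constants
    long long pow = 1; // base^(k-1) % mod: weight of the leading character
    for (int i = 1; i < k; ++i)
        pow = (pow * base) % mod;
    long long h = hashOf(s.substr(0, k), base, mod);
    for (int i = 1; i + k <= (int)s.size(); ++i) {
        h = (h - s[i - 1] * pow % mod + mod) % mod; // drop the leading char
        h = (h * base + s[i + k - 1]) % mod;        // shift and append
        if (h != hashOf(s.substr(i, k), base, mod))
            return false;
    }
    return true;
}
```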

Return the index of the first occurrence of needle in a haystack, or -1 if the needle is not part of the haystack.

Example 1:

- Input: haystack = “hello”, needle = “ll”
- Output: 2

Example 2:

- Input: haystack = “aaaaa”, needle = “bba”
- Output: -1

**Solution**

Based on the above paragraphs, try to code the Rabin-Karp algorithm on your own to solve this problem.

```
int strStr(const string& haystack, const string& needle) {
    const int nsize = needle.size();
    if (nsize == 0)
        return 0;
    const int hsize = haystack.size();
    if (nsize > hsize)
        return -1;
    const int mod = 10000009; // a large modulus to keep the hashes small
    int msbPower = 1;         // 26^(nsize-1) % mod
    int nhash = 0;
    int hhash = 0;
    for (int i = 0; i < nsize; i++) {
        if (i != 0) {
            msbPower *= 26;
            msbPower %= mod;
        }
        nhash = (nhash * 26 + needle[i]) % mod;
        hhash = (hhash * 26 + haystack[i]) % mod;
    }
    if (nhash == hhash && haystack.compare(0, nsize, needle) == 0)
        return 0;
    for (int i = 1; i <= hsize - nsize; i++) {
        // Roll the hash: drop the leading character, shift, add the new one
        hhash -= haystack[i - 1] * msbPower % mod;
        if (hhash < 0)
            hhash += mod;
        hhash = (hhash * 26 + haystack[i + nsize - 1]) % mod;
        if (nhash == hhash && haystack.compare(i, nsize, needle) == 0)
            return i;
    }
    return -1;
}
```

As I have already mentioned here and in other articles, do not try to memorize things. It is counterproductive. You will end up mixing concepts, giving the right solution to the wrong problem, and forgetting what you have memorized. This is why you need to **understand the algorithms**. If you understand the basic data structures and can turn your ideas into code, you can also code these algorithms.

If you are wondering how to learn data structures and algorithms, I already showed how to do it when I described Rabin-Karp:

1. **Understand what problem the algorithm or data structure is trying to solve.** Learning where the alternatives fall short helps here.
2. **Break it down into the key elements or steps that define it.** Understand them individually and how they connect.
3. **From this, code the algorithm.**

For example, applying these steps to binary search:

- Binary search improves on linear search for **sorted arrays**: it halves the search range at every step, achieving O(log N) complexity instead of O(N).
- Two key points:

a) Since the array is sorted, **comparing our *target* to the middle element of the array** lets us discard half of the array at every step (the half whose elements are too large or too small compared to the *target*).

b) If the pointers cross, that is, *l* > *r*, there is no solution.

```
template<typename T>
int binary_search(const vector<T>& v, const T& k) {
    int l = 0, r = (int)v.size() - 1;
    while (l <= r) {
        const int m = l + (r - l) / 2;
        if (k == v[m]) {
            return m;
        } else if (k > v[m]) {
            l = m + 1;
        } else {
            r = m - 1;
        }
    }
    return -1;
}
```

I wanted to show you how this breakdown process can be applied to a much more complex data structure. It is very unlikely that you will need to know about quad-trees for an interview. I just read about them once and found them so interesting that I decided to implement them.

**What is a quad-tree?**

From Wikipedia:

A quadtree is a tree data structure in which each internal node has exactly four children. Quadtrees are the two-dimensional analog of octrees and are most often used to partition a two-dimensional space by recursively subdividing it into four quadrants or regions…

**What problem is it trying to solve?**

Now that we know the formal definition, I think it is easier to get an intuitive idea of what quadtrees can do from one of its applications.

Imagine we are trying to model a physical system made of a bunch of particles, each having a certain mass. Every particle attracts every other particle in the system. If we have N particles, at every step of our simulation, the number of interactions we need to process grows as N^2. This will be extremely slow for large values of N.

Since the attraction between two particles decays with the square of the distance, we can neglect the contribution of particles that are far. How do we know which particles are far *without iterating through all the particles *(the problem we are trying to avoid in the first place)? Here is where quadtrees come in handy.

Using quadtrees, we can efficiently query all the particles in a certain region. The lookup takes O(logN) time, making the overall time for every step O(NlogN), since we have to do this for N particles.

Notice how at this point we do not care so much about the implementation. This is more about getting a **high-level understanding of the data structure and an idea of where it can be useful**.

In the next section, we will get into more detail.

**What are the key elements of a quad-tree?**

Let’s go back to the definition:

A quadtree is a tree data structure in which each internal node has exactly four children.

- They decompose space into adaptable cells
- Each cell (or bucket) has a maximum capacity. When maximum capacity is reached, the bucket splits
- The tree directory follows the spatial decomposition of the quadtree.

So, our quadtree class is not very different from any other tree that you may have encountered (including the trie I presented earlier). Each node will contain:

- An integer representing its maximum capacity
- A boolean indicating whether it is a leaf or an internal node (a node stops being a leaf once it reaches its maximum capacity and splits)
- Pointers to its 4 children
- An array containing the points included in that node
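The four elements above translate almost directly into code. Here is a minimal sketch of such a node in C++ (the names and the `Point` type are my own illustrative choices, not from any particular library):

```
#include <memory>
#include <vector>

// A 2D point; real implementations would also store, e.g., a mass
struct Point { double x, y; };

// Illustrative quadtree node mirroring the four elements listed above
struct QuadNode {
    int capacity;                        // maximum points before the node splits
    bool is_leaf = true;                 // becomes false once the node has split
    std::unique_ptr<QuadNode> child[4];  // NW, NE, SW, SE quadrants (null while a leaf)
    std::vector<Point> points;           // points stored in this cell

    explicit QuadNode(int cap) : capacity(cap) {}
};
```

A freshly created node is an empty leaf; an `insert` routine would push points into `points` until `capacity` is exceeded, then allocate the four children and redistribute the points.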

Knowing what problem a data structure is trying to solve and breaking it down to its basic elements makes it much easier to implement and, especially, to **remember what the data structure is about and when it can be used**. All you have to “memorize” is its definition and properties – which are easier to remember if you think about what problem it is trying to solve.

**Putting it all together**

You can try to implement it in your favorite language here.

```
class Solution {
private:
    Node* constructHelper(vector<vector<int>>& grid, int len, int x, int y) {
        const int flag = grid[x][y];
        // If any cell in this square differs from the first one,
        // this node is not a leaf: split it into four quadrants
        for (int i = x; i < x + len; i++)
            for (int j = y; j < y + len; j++)
                if (flag != grid[i][j]) {
                    Node* tl = constructHelper(grid, len / 2, x, y);
                    Node* tr = constructHelper(grid, len / 2, x, y + len / 2);
                    Node* bl = constructHelper(grid, len / 2, x + len / 2, y);
                    Node* br = constructHelper(grid, len / 2, x + len / 2, y + len / 2);
                    return new Node(true, false, tl, tr, bl, br);
                }
        // Uniform square: return a leaf
        return new Node(grid[x][y] == 1, true, nullptr, nullptr, nullptr, nullptr);
    }

public:
    Node* construct(vector<vector<int>>& grid) {
        const int n = grid.size();
        return (n == 0) ? nullptr : constructHelper(grid, n, 0, 0);
    }
};
```

If you want to learn more about this data structure, I recommend these two videos to see a quadtree in action.

**Nice to know**

Here is a list of other topics that are good to know but that I have not explicitly covered in this article. You don’t need to learn them all in one sitting.

I have presented a list of 20 techniques that will make your next interview a breeze. You will use some of them at work too, or they might inspire you for side projects.

The point is not to learn every single data structure or algorithm in a book. As you can see, most of them are either basic techniques or small tweaks on basic data structures you already knew. I want to show you that you can achieve a lot with what you already know.

*Previously published at https://www.yourdevopsguy.com/how-to-learn-data-structures-and-algorithms-20-problem-solving-techniques-you-must-know/*
