
by Thomas Cherickal, August 22nd, 2024

In our previous article, I introduced 10 powerful patterns that can help you solve a wide variety of LeetCode problems efficiently.

These patterns, including Two Pointers, Sliding Window, Binary Search, Depth-First Search, and Breadth-First Search, provide a structured approach to tackling common algorithmic challenges.

However, as you progress through LeetCode and encounter more complex problems, mastering just those 10 patterns will not be enough.

To truly excel at solving LeetCode problems and become an expert problem-solver, it's crucial to expand your knowledge and learn additional patterns.

In this follow-up article, we present 10 more essential patterns that, when combined with the previous 10, form a comprehensive toolkit for conquering LeetCode.

These patterns include Divide and Conquer, Bit Manipulation, Linked List, Interval, Trie, Heap, Reservoir Sampling, Monotonic Stack, Topological Sort, and Union Find.

By mastering all 20 patterns, you'll be equipped with a powerful arsenal of techniques to solve a vast majority of LeetCode problems.

Each pattern is accompanied by a detailed explanation, a sample problem, a solution, and 10 similar LeetCode problems to practice.

This comprehensive coverage ensures that you have a solid understanding of each pattern and can apply it effectively in various problem-solving scenarios.

Remember, learning these patterns is just the beginning.

The true mastery comes from consistent practice and applying these techniques to a wide range of problems.

As you work through more LeetCode problems, you'll gain a deeper understanding of when to apply each pattern and how to optimize your solutions.

Embrace the challenge, practice diligently, and let these 20 patterns be your guide on your journey to becoming a LeetCode master.

With dedication and persistence, you'll be well on your way to solving 1000+ LeetCode problems and beyond.

Furthermore, pay attention to the Time Complexity and Space Complexity Analysis.

Interviewers will ask about it for every single one of these patterns!

Union Find, also known as Disjoint Set Union (DSU), is a data structure that keeps track of elements partitioned into a number of disjoint (non-overlapping) sets. It is particularly useful for solving problems that involve grouping elements, detecting cycles in graphs, and managing connectivity queries. The primary operations supported by Union Find are:

- **Find:** Determine which set a particular element belongs to. This operation can be optimized using path compression, which flattens the structure of the tree whenever `find` is called, ensuring that all nodes directly point to the root.
- **Union:** Merge two sets into a single set. This operation can be optimized using union by rank, which attaches the smaller tree to the root of the larger tree to keep the overall tree flat.

The efficiency of Union Find makes it suitable for problems in graph theory, such as detecting cycles in undirected graphs or finding connected components.
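Before the grid-based island solution, here is a minimal sketch of the two operations on plain integers `0..n-1` (the class and method names here are illustrative, not from any particular LeetCode solution):

```python
# A minimal Union Find with path compression and union by rank.
class DSU:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path compression: point x directly at its root.
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False  # already in the same set
        # Union by rank: attach the shorter tree under the taller one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True

dsu = DSU(5)
dsu.union(0, 1)
dsu.union(1, 2)
print(dsu.find(0) == dsu.find(2))  # True: 0 and 2 are now connected
print(dsu.find(3) == dsu.find(4))  # False: 3 and 4 were never merged
```

The same two optimizations appear in the grid solution below; only the mapping from cells to integer indices changes.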

**Problem:**200. Number of Islands

Given an `m x n` 2D binary grid `grid` where `'0'` represents water and `'1'` represents land, return the number of islands. An island is surrounded by water and is formed by connecting adjacent lands horizontally or vertically.

```
class UnionFind:
    def __init__(self, grid):
        m, n = len(grid), len(grid[0])
        self.parent = [-1] * (m * n)
        self.rank = [0] * (m * n)
        self.count = 0
        for i in range(m):
            for j in range(n):
                if grid[i][j] == '1':
                    self.parent[i * n + j] = i * n + j
                    self.count += 1

    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # path compression
        return self.parent[x]

    def union(self, x, y):
        x_root = self.find(x)
        y_root = self.find(y)
        if x_root == y_root:
            return
        self.count -= 1  # two islands merge into one
        if self.rank[x_root] < self.rank[y_root]:
            self.parent[x_root] = y_root
        elif self.rank[x_root] > self.rank[y_root]:
            self.parent[y_root] = x_root
        else:
            self.parent[y_root] = x_root
            self.rank[x_root] += 1

def numIslands(grid):
    if not grid:
        return 0
    uf = UnionFind(grid)
    directions = [(0, 1), (1, 0), (0, -1), (-1, 0)]
    for i in range(len(grid)):
        for j in range(len(grid[0])):
            if grid[i][j] == '1':
                for di, dj in directions:
                    ni, nj = i + di, j + dj
                    if 0 <= ni < len(grid) and 0 <= nj < len(grid[0]) and grid[ni][nj] == '1':
                        uf.union(i * len(grid[0]) + j, ni * len(grid[0]) + nj)
    return uf.count
```

- **Time Complexity:** O(V + E), where V is the number of vertices (land cells) and E is the number of edges (connections between adjacent land cells). Each union and find operation is nearly constant time amortized, thanks to path compression and union by rank.
- **Space Complexity:** O(V), as we maintain the parent and rank arrays, whose size is proportional to the number of cells in the grid.

- Number of Islands - 200
- Friend Circles - 547
- Accounts Merge - 721
- Redundant Connection - 684
- Longest Consecutive Sequence - 128
- Sentence Similarity II - 737
- Evaluate Division - 399
- Regions Cut By Slashes - 959
- Satisfiability of Equality Equations - 990
- Connecting Cities With Minimum Cost - 1135

The divide and conquer technique is a powerful algorithmic paradigm that involves breaking a problem down into smaller, more manageable subproblems, solving each subproblem independently, and then combining the results to solve the original problem. This approach is particularly useful for problems that can be divided into independent parts, allowing for efficient processing and often leading to significant reductions in time complexity. In practice, the divide and conquer strategy typically follows three steps:

- **Divide:** Split the problem into several smaller subproblems that are similar to the original problem.
- **Conquer:** Solve each of the subproblems recursively. If the subproblem sizes are small enough, solve the subproblem as a base case.
- **Combine:** Merge the results of the subproblems into the final solution for the original problem.

This method is widely used in sorting algorithms (like Merge Sort and Quick Sort), searching algorithms, and many optimization problems.
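The three steps above can be sketched as a classic merge sort on a plain Python list (a minimal illustration; the linked-list variant appears in the sample problem below):

```python
def merge_sort(arr):
    if len(arr) <= 1:               # base case
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # Divide + Conquer on each half
    right = merge_sort(arr[mid:])
    # Combine: merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```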

**Problem:**148. Sort List

Given the head of a linked list, sort the list using merge sort.

```
class ListNode:
def __init__(self, val=0, next=None):
self.val = val
self.next = next
def sortList(head):
if not head or not head.next:
return head
# Find the middle of the linked list
slow, fast = head, head.next
while fast and fast.next:
slow = slow.next
fast = fast.next.next
mid = slow.next
slow.next = None
# Recursively sort the left and right halves
left = sortList(head)
right = sortList(mid)
# Merge the sorted halves
return merge(left, right)
def merge(left, right):
dummy = ListNode()
curr = dummy
while left and right:
if left.val < right.val:
curr.next = left
left = left.next
else:
curr.next = right
right = right.next
curr = curr.next
if left:
curr.next = left
if right:
curr.next = right
return dummy.next
```

- Time Complexity: O(n log n), where n is the number of nodes in the linked list. The merge sort algorithm divides the list into halves and merges them back together, resulting in a time complexity of O(n log n).
- Space Complexity: O(log n) for the recursive call stack. The merge sort algorithm uses recursion to sort the left and right halves, and the maximum depth of the recursion stack is proportional to the height of the linked list, which is O(log n) for a balanced list.

- Sort List - 148
- Sort an Array - 912
- Majority Element - 169
- Kth Largest Element in an Array - 215
- Median of Two Sorted Arrays - 4
- Merge k Sorted Lists - 23
- Divide Two Integers - 29
- Pow(x, n) - 50
- Guess Number Higher or Lower II - 375
- Dungeon Game - 174

Bit manipulation is a technique that involves directly manipulating bits or binary digits within a number. This approach is useful for a variety of problems, particularly those that require efficient storage, fast computations, or operations on binary representations. Common bit manipulation operations include AND, OR, XOR, NOT, left shift, and right shift. Bit manipulation can help solve problems related to:

- Counting bits (e.g., counting the number of 1s in a binary representation)
- Swapping values without using a temporary variable
- Checking if a number is a power of two
- Finding unique elements in a collection where all elements except one appear twice

This technique is often favored for its efficiency, as operations on bits are generally faster than arithmetic operations on integers.
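Two of the tricks listed above can be written in a few lines each (a sketch; the function names are illustrative):

```python
def is_power_of_two(n):
    # A power of two has exactly one set bit, so n & (n - 1) clears it to 0.
    return n > 0 and n & (n - 1) == 0

def single_number(nums):
    # XOR cancels pairs (a ^ a == 0), leaving the element that appears once.
    result = 0
    for x in nums:
        result ^= x
    return result

print(is_power_of_two(16))             # True
print(is_power_of_two(18))             # False
print(single_number([4, 1, 2, 1, 2]))  # 4
```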

**Problem:**191. Number of 1 Bits

Write a function that takes an unsigned integer and returns the number of '1' bits it has (also known as the Hamming weight).

```
def hammingWeight(n):
    count = 0
    while n:
        count += n & 1  # Increment count if the least significant bit is 1
        n >>= 1         # Right shift n to process the next bit
    return count
```

- Time Complexity: O(log n), where n is the value of the input integer; the loop runs once per bit, and an integer n has O(log n) bits.
- Space Complexity: O(1), as we are using a constant amount of extra space.

- Number of 1 Bits - 191
- Counting Bits - 338
- Single Number - 136
- Missing Number - 268
- Reverse Bits - 190
- Sum of Two Integers - 371
- Power of Two - 231
- Bitwise AND of Numbers Range - 201
- Gray Code - 89
- Subsets - 78

A linked list is a linear data structure where each element (node) contains a value and a reference (or pointer) to the next node in the sequence. This structure allows for efficient insertion and deletion of nodes, as elements do not need to be contiguous in memory. Linked lists are particularly useful for dynamic memory allocation, where the size of the data structure can change during runtime. Common operations on linked lists include:

- Traversing the list to access or modify elements
- Inserting new nodes at various positions (head, tail, or specific index)
- Deleting nodes from the list
- Reversing the linked list
- Detecting cycles within the list

The flexibility of linked lists makes them ideal for implementing stacks, queues, and other abstract data types.
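For instance, cycle detection (the last item above) can be sketched with Floyd's tortoise-and-hare technique: a fast pointer and a slow pointer meet if and only if the list loops back on itself. The `ListNode` class mirrors the shape used elsewhere in this article:

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def has_cycle(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next           # advances one node per step
        fast = fast.next.next      # advances two nodes per step
        if slow is fast:
            return True            # pointers met inside a cycle
    return False                   # fast fell off the end: no cycle

a, b, c = ListNode(1), ListNode(2), ListNode(3)
a.next, b.next = b, c
print(has_cycle(a))  # False: the list terminates at c
c.next = a           # create a cycle 3 -> 1
print(has_cycle(a))  # True
```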

**Problem:**206. Reverse Linked List

Given the head of a singly linked list, reverse the list, and return the reversed list.

```
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def reverseList(head):
    prev = None
    curr = head
    while curr:
        next_node = curr.next  # Store the next node
        curr.next = prev       # Reverse the link
        prev = curr            # Move prev to current
        curr = next_node       # Move to the next node
    return prev  # New head of the reversed list
```

- Time Complexity: O(n), where n is the number of nodes in the linked list. We traverse through the list once.
- Space Complexity: O(1), as we are using a constant amount of extra space for the pointers.

- Reverse Linked List - 206
- Merge Two Sorted Lists - 21
- Remove Nth Node From End of List - 19
- Palindrome Linked List - 234
- Linked List Cycle - 141
- Merge k Sorted Lists - 23
- Reverse Nodes in k-Group - 25
- Swap Nodes in Pairs - 24
- Odd Even Linked List - 328
- Intersection of Two Linked Lists - 160

Interval problems involve working with a collection of intervals, which are typically represented as pairs of start and end points. These problems often require merging overlapping intervals, finding gaps between intervals, or determining whether a new interval can fit within existing ones. Key concepts in working with intervals include:

- **Merging Intervals:** Combining overlapping intervals into a single interval.
- **Finding Gaps:** Identifying spaces between intervals where no elements exist.
- **Inserting New Intervals:** Determining where a new interval can be placed in a sorted list of existing intervals.

Efficiently handling intervals often involves sorting the intervals based on their starting points, allowing for linear scans to process them.
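For example, once the intervals are sorted and non-overlapping, a new interval can be inserted in a single linear scan (a sketch of the idea behind Insert Interval, problem 57; the function name is illustrative):

```python
def insert_interval(intervals, new):
    # intervals is sorted by start and contains no overlaps.
    result, i = [], 0
    # 1. Keep intervals that end before the new one starts.
    while i < len(intervals) and intervals[i][1] < new[0]:
        result.append(intervals[i])
        i += 1
    # 2. Absorb everything that overlaps the new interval.
    while i < len(intervals) and intervals[i][0] <= new[1]:
        new = [min(new[0], intervals[i][0]), max(new[1], intervals[i][1])]
        i += 1
    result.append(new)
    # 3. The rest start after the merged interval ends.
    result.extend(intervals[i:])
    return result

print(insert_interval([[1, 3], [6, 9]], [2, 5]))  # [[1, 5], [6, 9]]
```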

**Problem:**56. Merge Intervals

Given an array of intervals where `intervals[i] = [starti, endi]`, merge all overlapping intervals, and return an array of the merged intervals.

```
def merge(intervals):
    intervals.sort(key=lambda x: x[0])  # Sort intervals based on start time
    merged = []
    for interval in intervals:
        if not merged or merged[-1][1] < interval[0]:
            merged.append(interval)  # No overlap, add interval
        else:
            merged[-1][1] = max(merged[-1][1], interval[1])  # Merge intervals
    return merged
```

- Time Complexity: O(n log n), where n is the number of intervals. Sorting the intervals takes O(n log n) time, and the merging process takes O(n) time in the worst case.
- Space Complexity: O(n), as we are creating a new list to store the merged intervals.

- Insert Interval - 57
- Merge Intervals - 56
- Non-overlapping Intervals - 435
- Meeting Rooms - 252
- Meeting Rooms II - 253
- Employee Free Time - 759
- Range Module - 715
- Data Stream as Disjoint Intervals - 352
- Interval List Intersections - 986
- Minimum Number of Arrows to Burst Balloons - 452

A trie, or prefix tree, is a specialized tree data structure used to store a dynamic set of strings, where each node represents a common prefix shared by some strings. Tries are particularly useful for problems involving prefix matching, autocomplete features, and dictionary implementations. Key operations in a trie include:

- **Insertion:** Adding a new string to the trie by creating nodes for each character.
- **Search:** Checking if a string exists in the trie by traversing the nodes according to the characters in the string.
- **Prefix Search:** Finding all strings that start with a given prefix, which is efficient due to the structure of the trie.

Tries offer efficient retrieval times, making them suitable for applications like spell-checking, IP routing, and auto-suggestions.
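The autocomplete use case can be sketched with a trie of nested dicts rather than node classes (a minimal illustration; the `'$'` end-of-word marker is just a convention chosen here):

```python
def build_trie(words):
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node['$'] = True  # marks that a whole word ends at this node
    return root

def words_with_prefix(root, prefix):
    # Walk down to the node for the prefix, if it exists.
    node = root
    for ch in prefix:
        if ch not in node:
            return []
        node = node[ch]
    # DFS from the prefix node, rebuilding full words along the way.
    results = []
    stack = [(node, prefix)]
    while stack:
        node, word = stack.pop()
        if '$' in node:
            results.append(word)
        for ch, child in node.items():
            if ch != '$':
                stack.append((child, word + ch))
    return results

trie = build_trie(["car", "card", "care", "dog"])
print(sorted(words_with_prefix(trie, "car")))  # ['car', 'card', 'care']
```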

**Problem:**208. Implement Trie (Prefix Tree)

Implement a trie data structure that supports the following operations: `insert`, `search`, and `startsWith`.

```
class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_end_of_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word: str) -> None:
        node = self.root
        for char in word:
            if char not in node.children:
                node.children[char] = TrieNode()
            node = node.children[char]
        node.is_end_of_word = True

    def search(self, word: str) -> bool:
        node = self.root
        for char in word:
            if char not in node.children:
                return False
            node = node.children[char]
        return node.is_end_of_word

    def startsWith(self, prefix: str) -> bool:
        node = self.root
        for char in prefix:
            if char not in node.children:
                return False
            node = node.children[char]
        return True
```

- Time Complexity: O(k) for `insert`, `search`, and `startsWith`, where k is the length of the word or prefix; each operation visits one node per character.
- Space Complexity: O(k * n), where k is the average length of the words and n is the number of words. In the worst case, each character of each word has a separate node in the trie.

- Implement Trie (Prefix Tree) - 208
- Word Search II - 212
- Longest Word in Dictionary - 720
- Replace Words - 648
- Design Search Autocomplete System - 642
- Palindrome Pairs - 336
- Stream of Characters - 1032
- Maximum XOR of Two Numbers in an Array - 421
- Implement Magic Dictionary - 676
- Prefix and Suffix Search - 745

A heap is a specialized tree-based data structure that satisfies the heap property, where the value of each node is either greater than or equal to (max-heap) or less than or equal to (min-heap) the values of its children. Heaps are commonly used to implement priority queues, which allow for efficient retrieval of the highest (or lowest) priority element. Key operations in heaps include:

- **Insertion:** Adding a new element while maintaining the heap property.
- **Deletion:** Removing the root element (highest or lowest) and re-structuring the heap.
- **Peek:** Accessing the root element without removing it.

Heaps are particularly useful for problems that require frequent access to the largest or smallest elements, such as finding the top K elements or implementing Dijkstra's algorithm for shortest paths.
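The "top K elements" idea can be sketched with Python's `heapq`, which implements a min-heap: keep the heap capped at size k, and its root is always the kth largest element seen so far (the function name is illustrative):

```python
import heapq

def top_k_largest(nums, k):
    heap = []
    for x in nums:
        heapq.heappush(heap, x)
        if len(heap) > k:
            heapq.heappop(heap)  # drop the smallest of the k + 1 elements
    # The heap now holds the k largest values; sort them for display.
    return sorted(heap, reverse=True)

print(top_k_largest([3, 1, 5, 12, 2, 11], 3))  # [12, 11, 5]
```

This is O(n log k) time and O(k) space, which beats sorting the whole input when k is much smaller than n.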

**Problem:**703. Kth Largest Element in a Stream

Design a class to find the kth largest element in a stream. Note that it is the kth largest element in the sorted order, not the kth distinct element.

```
from typing import List
import heapq

class KthLargest:
    def __init__(self, k: int, nums: List[int]):
        self.k = k
        self.heap = nums
        heapq.heapify(self.heap)
        while len(self.heap) > k:
            heapq.heappop(self.heap)

    def add(self, val: int) -> int:
        if len(self.heap) < self.k:
            heapq.heappush(self.heap, val)
        elif val > self.heap[0]:
            heapq.heappop(self.heap)
            heapq.heappush(self.heap, val)
        return self.heap[0]
```

- Time Complexity: `__init__` is O(n log n) in the worst case, where n is the number of elements in `nums`: heapifying takes O(n), and popping each of the (n - k) extra elements costs O(log n). `add` is O(log k), as we push or pop on a heap of size k.
- Space Complexity: O(k), as we store at most k elements in the heap.

- Kth Largest Element in a Stream - 703
- Top K Frequent Elements - 347
- Find Median from Data Stream - 295
- Merge k Sorted Lists - 23
- Find the Kth Largest Element in an Array - 215
- Ugly Number II - 264
- Kth Smallest Element in a Sorted Matrix - 378
- Minimum Cost to Hire K Workers - 857
- Sliding Window Median - 480
- Minimum Number of Refueling Stops - 871

Reservoir sampling is a family of randomized algorithms designed for randomly choosing a sample of k items from a list S containing n items, where n is either a very large or unknown number. This technique is particularly useful when dealing with streams of data or large datasets where it is impractical to store all elements. The algorithm works by maintaining a "reservoir" of size k. As each element is processed, it has a chance of replacing an existing element in the reservoir, ensuring that each element has an equal probability of being included in the final sample. This method is efficient and allows for sampling without needing to know the total number of items in advance.
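For general k, the algorithm described above can be sketched as follows: fill the reservoir with the first k items, then replace a random slot with probability k/(i+1) for each later item i, so every item ends up in the sample with probability k/n (the function name is illustrative):

```python
import random

def reservoir_sample(stream, k):
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)      # fill the reservoir first
        else:
            j = random.randint(0, i)    # uniform index in [0, i]
            if j < k:
                reservoir[j] = item     # replace with probability k/(i+1)
    return reservoir

sample = reservoir_sample(range(1000), 10)
print(len(sample))  # 10
```

The sample problem below is the special case k = 1.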

**Problem:**382. Linked List Random Node

Given a singly linked list, return a random node's value from the linked list. Each node must have the same probability of being chosen.

```
import random
from typing import Optional

class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

class Solution:
    def __init__(self, head: Optional[ListNode]):
        self.head = head

    def getRandom(self) -> int:
        # Reservoir sampling with k = 1: the ith node replaces the current
        # pick with probability 1/i, so every node is equally likely.
        reservoir = None
        current = self.head
        i = 0
        while current:
            i += 1
            if random.randint(1, i) == 1:
                reservoir = current
            current = current.next
        return reservoir.val
```

- Time Complexity: O(n), where n is the number of nodes in the linked list. We traverse through the entire list once.
- Space Complexity: O(1), as we are using a constant amount of extra space for the reservoir variable.

- Linked List Random Node - 382
- Random Pick Index - 398
- Random Pick with Weight - 528
- Random Flip Matrix - 519
- Random Point in Non-overlapping Rectangles - 497
- Random Pick with Blacklist - 710
- Shuffle an Array - 384
- Implement Rand10() Using Rand7() - 470

A monotonic stack is a data structure that maintains elements in either strictly increasing or strictly decreasing order. This structure is particularly useful for solving problems that require finding the nearest greater or smaller element for each element in an array, as well as for calculating areas in histograms or determining the maximum span of elements. The key advantage of using a monotonic stack is that it allows for efficient retrieval of the next greater or smaller elements in linear time, which is particularly beneficial in problems involving arrays or sequences.
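The "nearest greater element" use case can be sketched with a decreasing stack of indices: each incoming value resolves every stacked index holding a smaller value (a minimal illustration; the function name is not from any particular solution):

```python
def next_greater(nums):
    result = [-1] * len(nums)  # -1 where no greater element exists to the right
    stack = []                 # indices whose next greater element is unknown
    for i, x in enumerate(nums):
        # x is the next greater element for every smaller value on the stack.
        while stack and nums[stack[-1]] < x:
            result[stack.pop()] = x
        stack.append(i)
    return result

print(next_greater([2, 1, 2, 4, 3]))  # [4, 2, 4, -1, -1]
```

Each index is pushed and popped at most once, which is where the linear running time comes from.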

**Problem:**84. Largest Rectangle in Histogram

Given an array of non-negative integers `heights` representing the histogram's bar heights where the width of each bar is 1, return the area of the largest rectangle in the histogram.

```
def largestRectangleArea(heights):
    stack = []      # indices of bars with increasing heights
    max_area = 0
    for i in range(len(heights) + 1):
        # The extra pass at i == len(heights) flushes the remaining bars.
        while stack and (i == len(heights) or heights[i] < heights[stack[-1]]):
            height = heights[stack.pop()]
            width = i if not stack else i - stack[-1] - 1
            max_area = max(max_area, height * width)
        stack.append(i)
    return max_area
```

- Time Complexity: O(n), where n is the number of elements in the input array. Each element is pushed and popped from the stack at most once.
- Space Complexity: O(n), as we are using a stack to store the indices. In the worst case, the stack can store all the indices.

- Next Greater Element I - 496
- Next Greater Element II - 503
- Daily Temperatures - 739
- Largest Rectangle in Histogram - 84
- Trapping Rain Water - 42
- Shortest Unsorted Continuous Subarray - 581
- Maximal Rectangle - 85
- Monotone Increasing Digits - 738
- Minimum Add to Make Parentheses Valid - 921
- Exclusive Time of Functions - 636

Topological sorting is an algorithm used to order the vertices of a directed acyclic graph (DAG) in such a way that for every directed edge from vertex A to vertex B, vertex A comes before vertex B in the ordering. This technique is particularly useful for scheduling tasks, resolving dependencies, and managing prerequisites. Topological sort can be implemented using either Depth-First Search (DFS) or Kahn's algorithm (which uses in-degree counting). The key characteristic of topological sorting is that it only applies to directed acyclic graphs, as cycles would prevent a valid ordering.
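The DFS variant mentioned above can be sketched as follows: nodes are appended in order of finishing time, and reversing that order yields a valid topological ordering, with an "in progress" state detecting cycles (the function name and edge format here are illustrative; the sample solution for the problem below uses Kahn's algorithm instead):

```python
def topo_sort_dfs(num_nodes, edges):
    graph = [[] for _ in range(num_nodes)]
    for a, b in edges:
        graph[a].append(b)
    state = [0] * num_nodes   # 0 = unvisited, 1 = in progress, 2 = done
    order = []

    def dfs(u):
        if state[u] == 1:
            return False      # back edge found: cycle, no valid ordering
        if state[u] == 2:
            return True
        state[u] = 1
        for v in graph[u]:
            if not dfs(v):
                return False
        state[u] = 2
        order.append(u)       # appended only after all descendants finish
        return True

    for u in range(num_nodes):
        if not dfs(u):
            return None       # graph has a cycle
    return order[::-1]

print(topo_sort_dfs(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))  # [0, 2, 1, 3]
print(topo_sort_dfs(2, [(0, 1), (1, 0)]))                  # None (cycle)
```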

**Problem:**207. Course Schedule

There are a total of `numCourses` courses you have to take, labeled from `0` to `numCourses - 1`. You are given an array `prerequisites` where `prerequisites[i] = [ai, bi]` indicates that you must take course `bi` first if you want to take course `ai`. Return `true` if you can finish all courses. Otherwise, return `false`.

```
from collections import defaultdict, deque

def canFinish(numCourses, prerequisites):
    graph = defaultdict(list)
    indegree = [0] * numCourses
    for course, prereq in prerequisites:
        graph[prereq].append(course)
        indegree[course] += 1
    # Start from courses with no prerequisites (Kahn's algorithm).
    queue = deque(course for course in range(numCourses) if indegree[course] == 0)
    topological_order = []
    while queue:
        course = queue.popleft()  # deque gives O(1) pops from the front
        topological_order.append(course)
        for neighbor in graph[course]:
            indegree[neighbor] -= 1
            if indegree[neighbor] == 0:
                queue.append(neighbor)
    return len(topological_order) == numCourses
```

- Time Complexity: O(V + E), where V is the number of vertices (courses) and E is the number of edges (prerequisites). We traverse each vertex and each edge once.
- Space Complexity: O(V), as we use an adjacency list to represent the graph and an array to store the in-degrees.

- Course Schedule - 207
- Course Schedule II - 210
- Alien Dictionary - 269
- Minimum Height Trees - 310
- Sequence Reconstruction - 444
- Reconstruct Itinerary - 332
- Parallel Courses - 1136
- Parallel Courses II - 1494

One Last Tip.

*Enjoy Every Minute!*

*You are a part of an elite, special force on Earth.*

People who actually love their jobs!

And possess the potential to change the world through the artificial creations they produce.

Look at the thousands who have gone before you.

Look at the millionaires and billionaires today who started with a good idea and good programming knowledge.

You are following in their footsteps.

And at least some of you who read this article will become one of those millionaires/billionaires.

To be a coder is a privilege not given to many.

My advice:

Never lose your passion for any problem, and:

*Enjoy every minute! Every minute!*

Cheers!

All Images generated by DALL-E-3.
