Exploring Graph Traversal: From Breadth-First Search to Dijkstra's Algorithm

by Arslan Urtashev, March 14th, 2023

Too Long; Didn't Read

The breadth-first search of a graph is an algorithm that can be easily modified to find what we need, such as the distance and path from one vertex to all others. This article walks through an implementation that computes distances and paths, shows how to restore a shortest path, and then builds up to Dijkstra's algorithm.


Whether or not you already know the breadth-first search algorithm, by the end of this article you will be able to easily understand Dijkstra's algorithm.

The Breadth-first Search of a Graph

The first algorithm that I would like to describe is the breadth-first search of a graph.


What is it?


Let's step back from the formal description of graphs and imagine a picture. We lay out ropes of equal length on the ground, each soaked in something flammable, so that none of them intersect, but some of them touch at their ends. And now we light one end.


How will the fire behave?


It will spread evenly over the ropes to neighboring intersections until everything ignites. It is easy to generalize this picture to three-dimensional space. This is what the breadth-first search of a graph will look like in real life.


Now let's describe it more formally. We start the breadth-first search from some vertex V. At the next moment in time, we examine the neighbors of vertex V (a neighbor of vertex V is a vertex that shares an edge with V), then the neighbors of those neighbors, and so on, until every vertex in the graph has been examined.


Suppose we are at some vertex during the breadth-first search. To keep track of what to visit next, we will use a queue into which we add all the neighbors of the current vertex except those we have already visited.


#include <iostream>
#include <queue>
#include <vector>
using namespace std;

const int LIM = 100005;  // maximum number of vertices (an assumed limit for these snippets)
vector<int> g[LIM];      // adjacency list: g[u] holds the vertices adjacent to u
bool used[LIM];          // used[v] becomes true once vertex v has been visited

void bfs(int u) {
  used[u] = true;
  queue<int> q;
  q.push(u);
  while (!q.empty()) {
    int u = q.front();   // the vertex we are currently processing
    q.pop();
    for (int i = 0; i < (int) g[u].size(); i++) {
      int v = g[u][i];   // the next neighbor of u
      if (!used[v]) {
        used[v] = true;  // mark it as visited and put it in the queue
        q.push(v);
      }
    }
  }
}


In this implementation, g is the adjacency list, i.e. g[u] contains the list of vertices adjacent to u (std::vector is used as the list), and used is an array that lets us know which vertices we have already visited.
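
By the way, here is a small usage sketch showing how such an adjacency list might be filled; the input format (the number of vertices n, the number of undirected edges m, and then m pairs of endpoints) and the 0-based vertex numbering are assumptions made only for this illustration.

// A minimal usage sketch. The input format (n, m, then m pairs "a b") and
// the 0-based vertex numbering are assumptions made for this example only.
int main() {
  int n, m;
  cin >> n >> m;
  for (int i = 0; i < m; i++) {
    int a, b;
    cin >> a >> b;        // an edge between vertices a and b
    g[a].push_back(b);    // add it in both directions,
    g[b].push_back(a);    // since the graph is undirected
  }
  bfs(0);                 // start the traversal from vertex 0
  return 0;
}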


Here, the breadth-first search does nothing except the traversal itself. That may seem pointless, but the algorithm can be easily modified to find what we need, such as the distance and path from any vertex to all others. It should be noted that the edges have no weights here, i.e. the graph is unweighted. Here is an implementation that finds distances and paths.


int p[LIM];     // p[v] is the predecessor of vertex v on a shortest path from the start
int dist[LIM];  // dist[v] is the distance (in edges) from the starting vertex to v

void bfs(int u) {
  used[u] = true;
  p[u] = u;                  // the starting vertex is its own predecessor
  dist[u] = 0;
  queue<int> q;
  q.push(u);
  while (!q.empty()) {
    int u = q.front();
    q.pop();
    for (int i = 0; i < (int) g[u].size(); i++) {
      int v = g[u][i];
      if (!used[v]) {
        used[v] = true;
        p[v] = u;            // we reached v from u
        dist[v] = dist[u] + 1;
        q.push(v);
      }
    }
  }
}


Here, p is the array of predecessors, i.e. p[v] is the predecessor (parent) of vertex v in the traversal, and dist[v] is the distance from the starting vertex of the traversal to vertex v. How do we restore the path? It's quite easy: just walk back through the predecessor array from the required vertex. For example, recursively:


void print_way(int u) {
  if (p[u] != u) {     // the starting vertex is its own predecessor
    print_way(p[u]);   // first print the path up to the predecessor
  }
  cout << u << ' ';    // then print the vertex itself
}
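
As a small usage sketch (the starting vertex 0 and the target vertex t below are arbitrary choices made only for this example), after running bfs we can print the distance and the path:

// A usage sketch: vertex 0 as the start and vertex t as the target are
// arbitrary choices; the graph is assumed to be already read into g.
int main() {
  // ... fill the adjacency list g here, as in the earlier sketch ...
  int t = 5;
  bfs(0);                                      // fills used, p and dist
  if (used[t]) {
    cout << "distance: " << dist[t] << '\n';   // number of edges on a shortest path
    print_way(t);                              // vertices along that path
    cout << '\n';
  } else {
    cout << "vertex " << t << " is unreachable\n";
  }
  return 0;
}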


Shortest Paths

Now let's move on to weighted graphs, where each edge has a weight, such as the length of the edge, the fee for passing through it, or the time it takes to traverse it. The task is to find the shortest path from one vertex to another. All further reasoning is valid only if the edge weights are non-negative. In this article, I will build on the breadth-first search, and I don't recall seeing this approach described anywhere else; maybe I missed it. So let's take another look at the breadth-first search implementation, specifically at the condition for adding a vertex to the queue.


In the breadth-first search, we only add to the queue those vertices that we haven't visited yet. Now we will change this condition and add those vertices whose distance can be reduced. Obviously, the queue becomes empty only when there are no vertices left whose distance can be reduced. The process of reducing the distance to vertex V is called the relaxation of vertex V.


It should be noted that initially the distance to every vertex is infinity (take some sufficiently large value as infinity, namely one that no real path length can exceed). This algorithm follows directly from the breadth-first search, and it was the first algorithm I came up with when solving my first shortest-path problem. It is worth mentioning that this method finds the shortest paths from the vertex from which we started the algorithm to all others.


Here is the implementation:

#include <algorithm>  // for fill

const int INF = 1e+9;  // "infinity": larger than any possible path length

int n;                            // the number of vertices in the graph
vector< pair<int, int> > g[LIM];  // g[u] is a list of pairs: (the length of the edge between u and v, vertex v)

void shortcut(int u) {
  fill(dist, dist + n, INF);  // initially the distance to every vertex is infinity
  dist[u] = 0;
  p[u] = u;
  queue<int> q;
  q.push(u);
  while (!q.empty()) {
    int u = q.front();
    q.pop();
    for (int i = 0; i < (int) g[u].size(); i++) {
      int v = g[u][i].second, len = g[u][i].first;
      if (dist[v] > dist[u] + len) {  // can the distance to v be reduced through u?
        p[v] = u;
        dist[v] = dist[u] + len;
        q.push(v);
      }
    }
  }
}


Why do we store the pairs in the adjacency list in this order? It will become clear a little later, when we improve this algorithm; to be honest, for this implementation the pairs could just as well be stored the other way around. We can slightly improve the insertion into the queue by introducing a bool array that marks whether a vertex that needs to be relaxed is currently in the queue.


const int INF = 1e+9;

vector< pair<int, int> > g[LIM];
bool inque[LIM];  // inque[v] is true while vertex v is waiting in the queue

void shortcut(int u) {
  fill(dist, dist + n, INF);
  dist[u] = 0;
  p[u] = u;
  queue<int> q;
  q.push(u);
  inque[u] = true;
  while (!q.empty()) {
    int u = q.front();
    q.pop();
    inque[u] = false;
    for (int i = 0; i < (int) g[u].size(); i++) {
      int v = g[u][i].second, len = g[u][i].first;
      if (dist[v] > dist[u] + len) {
        p[v] = u;
        dist[v] = dist[u] + len;
        if (!inque[v]) {  // don't add a vertex that is already waiting in the queue
          q.push(v);
          inque[v] = true;
        }
      }
    }
  }
}


If you take a closer look, this algorithm is very similar to Levit's algorithm, but it's not the same, even though in the worst case it has the same asymptotic complexity. Let's make a rough estimate of it. In the worst case, each vertex can re-enter the queue many times, and every time we pass through an edge we may have to perform a relaxation; in total this gives O(n * m). The estimate is quite rough, but it's enough for now. It's also worth noting that this is the worst case; in practice, even this implementation works quite quickly.


And now, the most interesting part!... drum roll... Let's improve our algorithm to Dijkstra's algorithm!


Dijkstra's Algorithm

The first optimization that comes to mind is to relax only those vertices whose current path is minimal. This idea came to me one fine day, but as it turns out, I was far from the first. The wonderful scientist Edsger Dijkstra was the first to come up with this idea.


Moreover, he proved that by choosing vertices for relaxation in this way, we will perform relaxation no more than n times! Intuitively, it is clear that if the distance to a certain vertex is currently minimal, we cannot make it any smaller. You can read more about it on Wikipedia.


Now the only thing left to do is to understand how to efficiently find the vertex with the minimum current distance. To do this, we will use a priority queue (a heap). The STL has a class that implements it, called priority_queue. I will provide an implementation that, again, follows directly from the previous algorithm described in this article.


#include <functional>  // for greater<>, used as the heap comparator below

void dijkstra(int u) {
  fill(dist, dist + n, INF);
  dist[u] = 0;
  p[u] = u;
  // A min-heap of pairs (distance to the vertex, vertex number).
  priority_queue< pair<int, int>, vector< pair<int, int> >, greater< pair<int, int> > > q;
  q.push(make_pair(0, u));
  while (!q.empty()) {
    pair<int, int> u = q.top();  // the queue entry with the smallest distance
    q.pop();
    if (u.first > dist[u.second]) continue;  // an outdated entry, skip it
    for (int i = 0; i < (int) g[u.second].size(); i++) {
      int v = g[u.second][i].second, len = g[u.second][i].first;
      if (dist[v] > dist[u.second] + len) {
        p[v] = u.second;
        dist[v] = dist[u.second] + len;
        q.push(make_pair(dist[v], v));
      }
    }
  }
}


Most likely, it's not entirely clear what's going on here, so let's start with the declaration of the priority queue.


Here, the first template argument is the type of data stored in the queue, specifically pairs of the form (distance to the vertex, vertex number). The second template argument is the container in which the data is stored, and the third is the comparator (which, by the way, lives in the functional header file).


Why do we need a different comparator?


Because with the standard declaration priority_queue< pair<int, int> >, we would only be able to extract the vertices with the maximum distance, whereas we actually need the opposite. The asymptotic complexity of this solution is O(n * log(n) + m * log(n)).
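
As a side note, the same effect is sometimes achieved without a custom comparator by storing distances with a minus sign in the default max-heap. This is purely an illustration and not part of the implementation above; the function name is made up for the example.

// An illustrative alternative (not used above): keep the default max-heap and
// negate the distances, so the entry with the smallest real distance comes out first.
void min_heap_trick_demo() {
  priority_queue< pair<int, int> > q;
  q.push(make_pair(-3, 1));      // vertex 1 at distance 3
  q.push(make_pair(-1, 2));      // vertex 2 at distance 1
  pair<int, int> top = q.top();  // (-1, 2): vertex 2, the closest one, comes out first
  int d = -top.first;            // d == 1, the real distance restored
}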


Indeed, we perform no more than n relaxations, and we can find the vertex with the minimum current distance in log(n) time (which is the asymptotic complexity of the operations of the standard priority queue implementation in the STL).


It should also be noted that the same vertex may end up in the queue several times with different distances. For example, we may have relaxed its distance from vertex A, whose neighbor is vertex C, and then again from vertex B, whose neighbor is also vertex C. To avoid problems with this, we simply skip the entries extracted from the queue whose stored distance is no longer relevant, i.e. greater than the current shortest distance. This is why the implementation includes the line "if (u.first > dist[u.second]) continue;".


Conclusion

It turns out that it is actually very easy to implement Dijkstra's algorithm with a time complexity of O(n * log(n) + m * log(n)).
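
For completeness, here is a minimal sketch of how everything above could be tied together. The input format (n vertices, m weighted undirected edges given as triples "a b len", then a start vertex s and a target vertex t) and the 0-based numbering are assumptions made only for this example.

// A minimal end-to-end sketch; the input format and vertex numbering are
// assumptions made for this example only.
int main() {
  int m, s, t;
  cin >> n >> m;                          // n is the global vertex count used by dijkstra
  for (int i = 0; i < m; i++) {
    int a, b, len;
    cin >> a >> b >> len;
    g[a].push_back(make_pair(len, b));    // (edge length, neighbor), as in the text
    g[b].push_back(make_pair(len, a));    // the graph is undirected in this sketch
  }
  cin >> s >> t;
  dijkstra(s);
  if (dist[t] == INF) {
    cout << "no path\n";
  } else {
    cout << "shortest distance: " << dist[t] << '\n';
    print_way(t);                         // restore the path through the p array
    cout << '\n';
  }
  return 0;
}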