RANSAC, OLS, PCA: 3 Ways to Draw a Straight Line Across a Set of Points

by Dmitrii Khizbullin, May 20th, 2021

Too Long; Didn't Read

Mathematically, the task is phrased as linear regression: finding a straight line that regresses a set of points. In this article, we look into three solutions coded in Python and NumPy and compare them: ordinary least squares; RANSAC, a consensus method based on random samples; and Principal Component Analysis (PCA), which projects the points onto the direction of maximal variance.


As an autonomous driving enthusiast, I’ve been interviewing at several autonomous driving companies for the position of a software engineer. Typically, an autonomous driving company builds a car that is able to navigate its way in 3D surroundings. Consequently, the company cares how good you, as a candidate, are at computational geometry and, specifically, how well you can code it up. One of the interview questions that intrigued me was formulated as follows:

Imagine your autonomous car is driving on the road and its locations are captured with GNSS (for example, GPS), not precisely but with some measurement error. Given that the car drives straight at a constant speed, compute its most likely positions from a sequence of noisy 2D measurements in the bird's-eye-view projection.

This task can be attacked with several different approaches. In this article, we will look into three solutions coded in Python and NumPy and compare them. Mathematically, the task is phrased as linear regression: finding a straight line that regresses our set of points.

Approach 1: Ordinary least squares

At first glance, the problem looks like a regular least-squares problem.

We are given 2D coordinates that are pairs of (x, y). We can treat x as the argument and y as the value of a function. We can follow the formula below:
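
beta_hat = (Xᵀ X)⁻¹ Xᵀ y,    y_hat = X beta_hat

Here X is the N×2 matrix whose first column is all ones and whose second column holds the x coordinates, y is the vector of y coordinates, and beta_hat = [intercept, slope] is the vector of line parameters. This is the standard ordinary least squares normal equation, and it is exactly what the code below computes.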

This method is very handy since it is straightforward (no pun intended): we just take the formula and calculate according to it. Without further ado, let’s code it up! Please follow the comments in the code below.

import numpy as np


def least_squares(points: np.ndarray) -> np.ndarray:
    """
    Approximate a set of points by a straight line
    using the least squares method.

    :param points: an input array of points of shape [N, 2]
    :return: Numpy array of shape [N, 2] of points on a straight line
    """

    x = points[:, 0]
    y = points[:, 1]
    # For the least squares method we need X to be a matrix containing
    # ones in the first column and the x values in the second
    X = np.vstack((np.ones(x.shape[0]), x)).T
    # We compute the normal matrix and the moment matrix as part of
    # the formula for the intercept and the slope
    normal_matrix = np.dot(X.T, X)
    moment_matrix = np.dot(X.T, y)
    # beta_hat is the vector [intercept, slope]; we invert
    # the normal matrix and multiply it by the moment matrix
    beta_hat = np.dot(np.linalg.inv(normal_matrix), moment_matrix)
    intercept = beta_hat[0]
    slope = beta_hat[1]
    # Now that we know the parameters of the line, computing
    # the y coordinates is straightforward
    y_hat = intercept + slope * x
    # Let's combine x and y_hat into a single matrix that we return
    points_hat = np.vstack((x, y_hat)).T

    return points_hat
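
To get a feel for how the function is used, here is a minimal sketch that feeds it a synthetic noisy trajectory (the test data generation below is my own assumption, not the article's original script):

import numpy as np

np.random.seed(42)
# A hypothetical straight trajectory corrupted by Gaussian GNSS-like noise
t = np.linspace(0.0, 10.0, 20)
true_points = np.stack([t, 0.5 * t + 1.0], axis=1)
noisy_points = true_points + np.random.normal(scale=0.15, size=true_points.shape)

fitted = least_squares(noisy_points)
print(fitted[:3])  # the first few points of the fitted straight line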

Let’s see what we get as the result.

It looks exactly as we would like it to: the line is straight and fits the given set of 2D points well. However, the least squares method has a significant flaw: it does not work if the line is vertical. In that case, the slope of the line becomes infinite, and the program either crashes or returns NaNs (Not a Number values). Consequently, it is not suited for production purposes, where we should expect all possible kinds of trajectories.
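
To illustrate the vertical-line failure mode, here is a minimal sketch (again an illustration of my own, not part of the original code): when all x coordinates coincide, the normal matrix Xᵀ X becomes singular and np.linalg.inv raises a LinAlgError.

import numpy as np

# All points share the same x coordinate: a perfectly vertical trajectory
vertical_points = np.stack([np.full(10, 3.0), np.linspace(0.0, 9.0, 10)], axis=1)
try:
    least_squares(vertical_points)
except np.linalg.LinAlgError as error:
    print("Least squares failed on a vertical line:", error)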

Approach 2: RANSAC

RANSAC (RANdom SAmple Consensus) is a method of finding a consensus based on random samples. It was proposed in 1981.

In our 2D case, we are going to randomly sample 2 points (which we will call “a support”) from the input array and check how many of the input points fall within a narrow stripe around the straight line defined by the support. Of course, not every trial will yield a line that goes through the point set, which is why RANSAC is an iterative algorithm. For RANSAC, we have a nice formula that allows us to estimate the number of iterations required to obtain a successful result, given an estimated fraction of outliers and a desired probability of success:
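
N = log(1 - p) / log(1 - (1 - ω)^n)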

where p is the probability of success, ω is the fraction of outliers, and n is the size of the support. In our case of a straight line in 2D space, the size of the support equals 2. For a line in 3D space it would be 2 as well, and for a plane in 3D it would be 3. In the code example below, the number of iterations turns out to be log(0.01) / log(0.75) ≈ 16, which is a pretty affordable number. Going further, I suggest following the comments in the code.

import math
from typing import Optional

import numpy as np


def ransac(points: np.ndarray,
           min_inliers: int = 4,
           max_distance: float = 0.15,
           outliers_fraction: float = 0.5,
           probability_of_success: float = 0.99) -> Optional[np.ndarray]:
    """
    RANdom SAmple Consensus method of finding the best fit line.

    :param points: an input array of points of shape [N, 2]
    :param min_inliers: minimum number of inliers to accept a support
    :param max_distance: maximum distance from the support line
                         for a point to be considered an inlier
    :param outliers_fraction: an estimated fraction of outliers
    :param probability_of_success: desired probability that the support
                                   does not contain outliers
    :return: Numpy array of shape [N, 2] of points on a straight line,
             or None if no suitable support was found
    """

    # Let's calculate the required number of trials to sample a support
    # consisting of inliers only: the probability of drawing 2 inliers
    # in a single trial is (1 - outliers_fraction)**2
    num_trials = int(math.log(1 - probability_of_success) /
                     math.log(1 - (1 - outliers_fraction)**2))

    best_num_inliers = 0
    best_support = None
    for _ in range(num_trials):
        # For each trial, randomly sample 2 different points
        # from the input array to form the support
        random_indices = np.random.choice(
            np.arange(0, len(points)), size=(2,), replace=False)
        assert random_indices[0] != random_indices[1]
        support = np.take(points, random_indices, axis=0)

        # Here we compute distances from all points to the line
        # defined by the support. The distances are conveniently computed
        # with the cross product: for 2D inputs np.cross returns
        # the z coordinate of the cross product, which is exactly
        # what we need for the distance.
        cross_prod = np.cross(support[1, :] - support[0, :],
                              support[1, :] - points)
        support_length = np.linalg.norm(support[1, :] - support[0, :])
        # cross_prod contains signed distances, thus we take the modulus
        distances = np.abs(cross_prod) / support_length

        # Inliers are all the points that are close enough
        # to the support line
        num_inliers = np.sum(distances < max_distance)
        # Here we update the support if we found a better one
        if num_inliers >= min_inliers and num_inliers > best_num_inliers:
            best_num_inliers = num_inliers
            best_support = support

    # If we did not find a good support, signal the failure to the caller
    if best_support is None:
        return None

    # Let's project all points onto the support line
    support_start = best_support[0]
    support_vec = best_support[1] - best_support[0]
    # The dot product is the right tool to compute the projections
    offsets = np.dot(support_vec, (points - support_start).T)
    proj_vectors = np.outer(support_vec, offsets).T
    support_sq_len = np.inner(support_vec, support_vec)
    projected_vectors = proj_vectors / support_sq_len
    projected_points = support_start + projected_vectors

    return projected_points

What do we get having launched the code?

The resulting line approximates the set of points well. The points that form the best support are denoted with black squares.

What are the drawbacks of RANSAC? First of all, it does not guarantee a successful result. The probability that random sampling picks an outlier as one of the support points, or picks a pair of bad inliers (for example, two points that are very close to each other), is fairly low, but it is not zero. For applications that require high reliability, like autonomous driving, RANSAC may therefore not be a good fit. Also, RANSAC won’t work in cases where we do not have access to a random or pseudo-random number generator. Apart from this, RANSAC has several parameters that must be chosen appropriately for each application. Specifically, max_distance is hard to estimate beforehand without spending some time estimating the amount of noise in the 2D locations.

Approach 3: PCA

PCA (Principal Component Analysis) is often used for dimensionality reduction: we look for an affine transform of a point cloud such that the variance in most of the dimensions becomes small, or at least small compared to the few important dimensions. We can then drop the dimensions that do not carry information. This is very similar to our case: we want to get rid of the noise in the direction perpendicular to the main direction of the point locations and then project the points onto the line of the main direction.

To perform PCA, we need to calculate the (eigenvector, eigenvalue) pairs of the covariance matrix of the given point cloud. The bigger an eigenvalue is, the bigger the variance is along the direction of the corresponding eigenvector. For our problem, we need to find the eigenvector with the largest eigenvalue. This eigenvector describes the straight line that we will project the input points onto. Below is the code with comments:

import numpy as np


def pca(points: np.ndarray) -> np.ndarray:
    """
    Principal Component Analysis (PCA) method to estimate the direction
    of the maximal variance of a point set.

    :param points: an input array of points of shape [N, 2]
    :return: Numpy array of shape [N, 2] of points on a straight line
    """

    # Perform PCA to understand what the primary axis
    # of the given point set is
    mean = np.mean(points, axis=0)
    # The points have to be zero-mean
    centered = points - mean
    # Compute the covariance matrix of the centered points
    cov = np.cov(centered.T)
    # Eigenvector decomposition of the covariance matrix
    # gives us the principal components
    eigenval, eigenvec = np.linalg.eig(cov)
    # We want to parametrize the target straight line
    # in the coordinate frame given by the eigenvector
    # that corresponds to the biggest eigenvalue
    argmax_eigen = np.argmax(eigenval)
    # We'll need the projections of the data points
    # onto the primary axis
    loc_pca = np.dot(centered, eigenvec)
    loc_maxeigen = loc_pca[:, argmax_eigen]
    max_eigenvec = eigenvec[:, argmax_eigen]
    # Re-parametrize the line: the projections of the first and the last
    # point define its extent, and the rest are spaced evenly in between,
    # which corresponds to driving straight at a constant speed
    loc_start = mean + max_eigenvec * loc_maxeigen[0]
    loc_final = mean + max_eigenvec * loc_maxeigen[-1]
    linspace = np.linspace(0, 1, num=len(points))
    positions = loc_start + np.outer(linspace, loc_final - loc_start)

    return positions

Having run the code, we get the following graph.

PCA has done well! It is important to note that PCA does not have the drawback of the least squares method: PCA gracefully deals with vertical lines. Also, PCA will always give us a result, as opposed to RANSAC. However, compared to RANSAC, PCA is more sensitive to wild outliers in the 2D point locations.

Conclusion

This problem caught my attention because it has several significantly different solutions, all of which have their own pros and cons. Also, this problem allows us to practice computational geometry and matrix operations with NumPy on a simple example. Let’s take a look at all three solutions in the same figure.
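
As a rough sketch of how such a comparison figure can be produced (the synthetic data and plotting below are my own assumptions, not the article's original script), one could run all three functions on the same point set:

import numpy as np
import matplotlib.pyplot as plt

np.random.seed(0)
# A hypothetical noisy straight-line trajectory
t = np.linspace(0.0, 10.0, 30)
points = np.stack([t, 0.7 * t + 2.0], axis=1)
points += np.random.normal(scale=0.2, size=points.shape)

plt.scatter(points[:, 0], points[:, 1], label="noisy measurements")
for name, func in [("OLS", least_squares), ("RANSAC", ransac), ("PCA", pca)]:
    line = func(points)
    if line is not None:  # ransac may return None if no support was found
        plt.plot(line[:, 0], line[:, 1], label=name)
plt.legend()
plt.show()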

Out of these three solutions, my personal preference is PCA.

Dear reader, I hope you liked the article and that it helped you get more insight into the practical experience of turning a computational geometry problem into code. The full code is available on my GitHub page.

Thank you for your attention, and best of luck with your interviews!