SciPy Tutorial: Linear Algebra

by Karlijn Willems, February 10th, 2017

If you want to read why you should learn linear algebra or SciPy for data science or which NumPy functions are useful when you’re working with SciPy, check out the full tutorial.

Now that you know what you need to use both packages to your advantage, it’s time to dig into the topic of this tutorial: linear algebra.

Prepping Your Workspace: Install SciPy

But before you dive into how you can use Python for linear algebra, make sure that your workspace is completely ready!

Of course, you first need to make sure that you have Python installed. Go to this page if you still need to do this :) If you’re working on Windows, make sure that you have added Python to the PATH environment variable. In addition, don’t forget to install a package manager, such as pip, which will ensure that you’re able to use Python’s open-source libraries.

Note that recent versions of Python 3 come with pip, so double check if you have it and if you do, upgrade it before you install any other packages:

pip install pip --upgrade
pip --version

But just installing a package manager is not enough; you also need to download the wheel for the library: go here to get your SciPy wheel. After the download, open up the terminal in the download directory on your PC and install it. Additionally, you can check whether the installation was successful and confirm the package version that you’re running:

# Install the wheel
pip install "scipy-0.18.1-cp36-cp36m-win_amd64.whl"

# Confirm successful install
import scipy

# Check package version
scipy.__version__

After these steps, you’re ready to go!

Tip: install the package by downloading the Anaconda Python distribution. It’s an easy way to get started quickly, as Anaconda not only includes 100 of the most popular Python, R and Scala packages for data science, but also includes several open source development environments such as Jupyter and Spyder. If you’d like to start working with Jupyter Notebook, check out this Jupyter notebook tutorial.

If you haven’t downloaded it already, go here to get it.

Linear Algebra: Vectors & Matrices

Now that you have made sure that your workspace is prepped, you can finally get started with linear algebra in Python. In essence, this discipline is concerned with the study of vector spaces and the linear mappings that exist between them. These linear mappings can be described with matrices, which also makes the calculations easier to carry out.

Remember that a vector space is a fundamental concept in linear algebra. It’s a space where you have a collection of objects (vectors) and where you can add or scale two vectors without the resulting vector leaving the space. Remember also that vectors are rows (or columns) of a matrix.

But how does this work in Python?

You can easily create a vector with the np.array() function. Similarly, you can give a matrix structure to every one- or two-dimensional ndarray with either the np.matrix() or np.mat() commands.
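For example, here is a minimal sketch (the values are just illustrative):

import numpy as np

# A vector: a plain 1-D ndarray
v = np.array([1, 2, 3])

# A matrix built from a 2-D ndarray
A = np.matrix(np.arange(9).reshape(3, 3))

# np.mat() is a convenient shorthand that also accepts a string
B = np.mat('1 2; 3 4')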

So arrays and matrices are the same, apart from the formatting?

Well, not exactly. There are some differences:

  • A matrix is 2-D, while arrays are usually n-D,
  • As the functions above already implied, the matrix is a subclass of ndarray,
  • Both arrays and matrices have .T, but only matrices have .H and .I,
  • Matrix multiplication works differently from element-wise array multiplication, and
  • To add to this, the ** operation has different results for matrices and arrays, as the sketch below shows
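A minimal sketch of the last two points, with two small illustrative objects:

import numpy as np

arr = np.array([[1, 2], [3, 4]])
mat = np.matrix([[1, 2], [3, 4]])

arr * arr         # element-wise product for arrays
mat * mat         # matrix product for matrices
np.dot(arr, arr)  # matrix product for arrays

arr ** 2          # squares each element
mat ** 2          # matrix power, i.e. mat * mat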

When you’re working with matrices, you might sometimes have some in which most of the elements are zero. These matrices are called “sparse matrices”, while the ones that have mostly non-zero elements are called “dense matrices”.

In itself, this seems trivial, but when you’re working with SciPy for linear algebra, this can sometimes make a difference in the modules that you use to get certain things done. More concretely, you can use scipy.linalg for dense matrices, but when you’re working with sparse matrices, you might also want to consider checking up on the scipy.sparse module, which also contains its own scipy.sparse.linalg.

For sparse matrices, there are quite a number of options to create them. Go here to read about all the options.

There are really a lot of options, but which one should you choose if you’re making a sparse matrix yourself?

It’s not that hard.

Basically, it boils down to two questions: first, how you’re going to initialize the matrix, and next, what you want to be doing with it once it’s created.

More concretely, you can go through the following checklist to decide what type of sparse matrix you want to use:

  • If you plan to fill the matrix with numbers one by one, pick a coo_matrix() or dok_matrix() to create your matrix.
  • If you want to initialize the matrix with an array as the diagonal, pick dia_matrix() to initialize your matrix.
  • For matrices that you want to slice, use lil_matrix().
  • If you’re constructing the matrix from blocks of smaller matrices, consider using bsr_matrix().
  • If you want to have fast access to your rows and columns, convert your matrices by using the csr_matrix() and csc_matrix() functions, respectively. The last two functions are not great to pick when you need to initialize your matrices, but when you’re multiplying, you’ll definitely notice the difference in speed; see the short sketch after this list.
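Here is that sketch, using a made-up 3x3 matrix:

import numpy as np
from scipy import sparse

# Fill a dok_matrix element by element
M = sparse.dok_matrix((3, 3))
M[0, 0] = 1
M[1, 2] = 5

# Convert to CSR for fast row access and fast multiplication
M_csr = M.tocsr()

# Or initialize a matrix with an array as the diagonal
D = sparse.dia_matrix((np.array([[1, 2, 3]]), [0]), shape=(3, 3))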

Easy peasy!

Vector Operations

Now that you have learned or refreshed the difference between vectors, dense matrices and sparse matrices, it’s time to take a closer look at vectors and what kind of mathematical operations you can do with them. The tutorial focuses here explicitly on mathematical operations so that you’ll come to see the similarities and differences with matrices, and because a huge part of linear algebra is, ultimately, working with matrices.

You have already seen that you can easily create a vector with np.array(). But now that you have vectors at your disposal, you might also want to know of some basic operations that can be performed on them.
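For example, here is a quick sketch with two small example vectors:

import numpy as np

x = np.array([1, 2, 3])
y = np.array([4, 5, 6])

x + y              # vector addition
3 * x              # scalar multiplication
np.dot(x, y)       # dot product: 1*4 + 2*5 + 3*6 = 32
np.linalg.norm(x)  # the length (Euclidean norm) of x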

Now that you have successfully seen some vector operations, it’s time to get started on the real matrix work!

Matrices: Operations & Routines

Picking up where you left off at the start of the previous section, you know how to create matrices, but you don’t yet know how you can use them to your advantage. This section will provide you with an overview of some matrix functions and basic matrix routines that you can use to work efficiently.

Firstly, let’s go over some functions. These will come quite easily if you have worked with NumPy before, but even if you don’t have any experience with it yet, you’ll see that these functions are easy to get going with.

Let’s look at some examples of functions.

There’s np.add() and np.subtract() to add and subtract arrays or matrices, and also np.divide() and np.multiply() for division and multiplication. This really doesn’t seem like a big mystery, does it? Also, the np.dot() function that you have seen in the previous section, where it was used to calculate the dot product, can be used with matrices. But don’t forget to pass in two matrices instead of vectors.
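A minimal sketch of these functions, with two made-up matrices:

import numpy as np

A = np.matrix([[1, 2], [3, 4]])
B = np.matrix([[5, 6], [7, 8]])

np.add(A, B)       # element-wise addition
np.subtract(A, B)  # element-wise subtraction
np.multiply(A, B)  # element-wise multiplication
np.divide(A, B)    # element-wise division
np.dot(A, B)       # matrix product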

These are basic, right?

Let’s go a bit less basic. When it comes to multiplications, there are also some other functions that you can consider, such as np.vdot() for the dot product of vectors, np.inner() or np.outer() for the inner or outer products of arrays, np.tensordot() for the tensor dot product, and np.kron() for the Kronecker product of two arrays.
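A short sketch of a few of those, again with illustrative values:

import numpy as np

x = np.array([1, 2])
y = np.array([3, 4])

np.vdot(x, y)   # dot product of the two vectors: 11
np.inner(x, y)  # inner product, the same as vdot for 1-D arrays
np.outer(x, y)  # outer product: a 2x2 array
np.kron(x, y)   # Kronecker product: array([3, 4, 6, 8])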

Besides these, it might also be useful to consider some functions of the linalg module: the matrix exponential functions linalg.expm(), linalg.expm2() and linalg.expm3(). The difference between these three lies in the ways that the exponential is calculated. Stick to the first one for a general matrix exponential, but definitely try the three of them out to see the difference in results!

There are also trigonometric functions such as linalg.cosm(), linalg.sinm() and linalg.tanm(), hyperbolic trigonometric functions such as linalg.coshm(), linalg.sinhm() and linalg.tanhm(), the sign function linalg.signm(), the matrix logarithm linalg.logm(), and the matrix square root linalg.sqrtm().
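A quick sketch of a few of these, assuming a small symmetric example matrix:

import numpy as np
from scipy import linalg

A = np.array([[2.0, 1.0], [1.0, 2.0]])

linalg.expm(A)   # matrix exponential
linalg.cosm(A)   # matrix cosine
linalg.logm(A)   # matrix logarithm
linalg.sqrtm(A)  # matrix square root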

Additionally, you can also evaluate a matrix function with the help of the linalg.funm() function. For example, check out the original tutorial.

You see that you pass in the matrix to which you want to apply a function as a first argument and a function (in this case a lambda function) that you want to apply to the matrix you passed. Note that the function that you pass to linalg.funm() has to be vectorized.
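A minimal sketch of such a call, with a made-up matrix and lambda function:

import numpy as np
from scipy import linalg

A = np.array([[2.0, 1.0], [1.0, 2.0]])

# Apply a vectorized function to the matrix
linalg.funm(A, lambda x: x + 3 * x**2)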

Let’s now take a look at some basic matrix routines. The first thing that you probably want to check out are the matrix attributes: T for transposition, H for conjugate transposition, I for inverse, and A to cast as an array.

When you transpose a matrix, you make a new matrix whose rows are the columns of the original. A conjugate transposition, on the other hand, interchanges the row and column index for each matrix element. The inverse of a matrix is a matrix that, if multiplied with the original matrix, results in an identity matrix.

But besides those attributes, there are also real functions that you can use to perform some basic matrix routines, such as np.transpose() and linalg.inv() for transposition and matrix inverse, respectively.
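For example, a short sketch of the attributes and functions side by side:

import numpy as np
from scipy import linalg

A = np.matrix([[1, 2], [3, 4]])

A.T              # transpose
A.H              # conjugate transpose
A.I              # inverse
A.A              # cast back to an ndarray

np.transpose(A)  # transpose as a function
linalg.inv(A)    # inverse as a function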

Besides these, you can also retrieve the trace, or the sum of the elements on the main matrix diagonal, with np.trace(). Similarly, you can retrieve the matrix rank, or the number of singular values (from the Singular Value Decomposition) of an array that are greater than a certain threshold, with linalg.matrix_rank from NumPy.
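A minimal sketch with an illustrative 2x2 matrix:

import numpy as np

A = np.array([[1, 2], [3, 4]])

np.trace(A)               # trace: 1 + 4 = 5
np.linalg.matrix_rank(A)  # rank: 2, since the rows are linearly independent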

Don’t worry if the matrix rank doesn’t make sense for now; you’ll see more on that later on in this tutorial.

For now, let’s focus on two more routines that you can use:

  • The norm of a matrix can be computed with linalg.norm: a matrix norm is a number defined in terms of the entries of the matrix. The norm is a useful quantity which can give important information about a matrix because it tells you how large the elements are.
  • On top of that, you can also calculate the determinant, which is a useful value that can be computed from the elements of a square matrix, with linalg.det(). The determinant boils down a square matrix to a single number, which determines whether the square matrix is invertible or not (see the sketch after this list).
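Here is what those two routines look like for a small made-up matrix:

import numpy as np
from scipy import linalg

A = np.array([[1, 2], [3, 4]])

linalg.norm(A)     # Frobenius norm by default
linalg.norm(A, 1)  # 1-norm: the largest column sum
linalg.det(A)      # determinant: 1*4 - 2*3 = -2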

Lastly, solving large systems of linear equations is one of the most basic applications of matrices. If you have a system Ax = b, where A is a square matrix and b a general matrix, there are two methods that you can use to find x, depending of course on which type of matrix you’re working with. For a code example, go here.

To solve sparse systems, you can use sparse.linalg.spsolve(). When you cannot solve the equation exactly, it might still be possible to obtain an approximate x with the help of the linalg.lstsq() command.
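A minimal sketch of the dense, sparse and least-squares cases, with made-up data:

import numpy as np
from scipy import linalg
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = linalg.solve(A, b)           # exact solution of Ax = b

A_sparse = csc_matrix(A)
x_sparse = spsolve(A_sparse, b)  # the same system, solved with the sparse solver

x_lsq, res, rank, sv = linalg.lstsq(A, b)  # least-squares solution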

Tip: don’t miss DataCamp’s SciPy cheat sheet.

Linear Algebra For Machine Learning with SciPy

Now that you have gotten a clue on how you can create matrices and how you can use them for mathematical operations, it’s time to tackle some more advanced topics that you’ll need to really get into machine learning.

Eigenvalues & Eigenvectors

The first topic that you will tackle is eigenvalues and eigenvectors.

Eigenvalues are a new way to see into the heart of a matrix. But before you go more into that, let’s first explain what eigenvectors are. Almost all vectors change direction when they are multiplied by a matrix. However, certain exceptional vectors point in the same direction after the multiplication as they did before. These are the eigenvectors.

In other words, multiply an eigenvector by a matrix, and the resulting vector of that multiplication is equal to a multiplication of the original eigenvector with lambda, the eigenvalue: Ax = lambda x.

This means that the eigenvalue gives you very valuable information: it tells you whether one of the eigenvectors is stretched, shrunk, reversed, or left unchanged when it is multiplied by a matrix.

You use the eig() function from the linalg SciPy module to solve ordinary or generalized eigenvalue problems for square matrices.

Note that the eigvals() function is another way of unpacking the eigenvalues of a matrix.
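For example, a minimal sketch with an illustrative symmetric matrix:

import numpy as np
from scipy import linalg

A = np.array([[1.0, 2.0], [2.0, 1.0]])

la, v = linalg.eig(A)  # eigenvalues and eigenvectors
linalg.eigvals(A)      # only the eigenvalues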

When you’re working with sparse matrices, you can fall back on the module scipy.sparse to provide you with the correct functions to find the eigenvalues and eigenvectors:

la, v = sparse.linalg.eigs(myMatrix, 1)

Note that the code above specifies the number of eigenvalues and eigenvectors that have to be retrieved, namely, 1.

The eigenvalues and eigenvectors are important concepts in many computer vision and machine learning techniques, such as Principal Component Analysis (PCA) for dimensionality reduction and EigenFaces for face recognition.

Singular Value Decomposition (SVD)

Next, you need to know about SVD if you want to really learn data science. The singular value decomposition of a matrix A is the decomposition or factorization of A into the product of three matrices: A = U * Sigma * V^t.

The size of the individual matrices is as follows if you know that matrix A is of size M * N:

  • Matrix U is of size M * M
  • Matrix V is of size N * N
  • Matrix Sigma is of size M * N

The * indicates that the matrices are multiplied and the ^t that you see in V^t means that the matrix is transposed, which means that the rows and columns are interchanged.

Simply stated, singular value decomposition provides a way to break a matrix into simpler, meaningful pieces. These pieces may contain some data we are interested in. Find an example here. Note that for sparse matrices, you can use the sparse.linalg.svds() function to perform the decomposition.
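A short sketch of the dense case, with a made-up 4x5 matrix:

import numpy as np
from scipy import linalg

A = np.array([[1.0, 0.0, 0.0, 0.0, 2.0],
              [0.0, 0.0, 3.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 0.0, 0.0]])

U, s, Vt = linalg.svd(A)  # s holds the singular values

# Rebuild Sigma as an M x N matrix to check that A == U * Sigma * V^t
Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)
np.allclose(A, U.dot(Sigma).dot(Vt))  # True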

If you’re new to data science, the matrix decomposition will be quite opaque for you: you might not immediately see any use cases to apply this. But SVD is useful in many tasks, such as data compression, noise reduction and data analysis. For example, SVD can be used to compress images; for the code behind those images, see this page.

Consider also the following examples where SVD is used:

  • SVD is closely linked to Principal Component Analysis (PCA), which is used for dimensionality reduction: both result in a set of “new axes” that are constructed from linear combinations of the feature space axes of your data. These “new axes” break down the variance in the data points based on each direction’s contribution to the variance in the data. To see a concrete example of how PCA works on data, go to our Scikit-Learn Tutorial.
  • Another link is one with data mining and natural language processing (NLP): Latent Semantic Indexing (LSI). It is a technique that is used in document retrieval and word similarity. Latent semantic indexing uses SVD to group documents with the concepts that could consist of different words found in those documents. Various words can be grouped into a concept. Also here, SVD reduces the noisy correlation between words and their documents, and it decreases the number of dimensions that the original data has.

You see, SVD is an important concept in your data science journey that you must cover. That’s why you should consider going deeper into SVD than what this tutorial covers: for example, go to this page to read more about this matrix decomposition.


Originally published at www.datacamp.com.