How to Solve for Eigenstates: The Time-Independent Schrödinger Equation

by Arvin Kushwaha · January 5th, 2021
TL;DR: If you’ve ever ventured anywhere near the field of quantum mechanics, you’ve heard of the “mystical” Schrödinger Equation. Today, we’ll use Python to solve it!


If you’ve ever ventured anywhere near the field of quantum mechanics, you’ve almost certainly heard of the “mystical” Schrödinger Equation.

$$i\hbar\,\frac{\partial}{\partial t}\Psi(x,t) \;=\; \hat{H}\,\Psi(x,t) \;=\; \left[-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V(x)\right]\Psi(x,t)$$

Much like Newton’s Second Law, which describes the propagation of particles in a classical system, where every particle has definite properties like position and momentum, the Schrödinger equation allows us to understand how a quantum system evolves, where each particle is described by a single, collective wavefunction.

What’s the difference between the Time-Dependent and Time-Independent Schrödinger Equation?

Since the Time-Dependent Schrödinger Equation (TDSE) admits standing-wave solutions, we can solve for those specifically by separating variables, which simplifies the TDSE into the Time-Independent Schrödinger Equation (TISE).
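For the curious, here’s a sketch of that separation step (using the same symbols as above): write the standing wave as a spatial part times a pure phase, and substitute it into the TDSE.

$$\Psi(x,t) = \psi(x)\,e^{-iEt/\hbar} \;\;\Longrightarrow\;\; i\hbar\,\frac{\partial \Psi}{\partial t} = E\,\Psi \;\;\Longrightarrow\;\; E\,\psi(x) = \hat{H}\,\psi(x)$$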

$$E\,\psi(x) \;=\; \hat{H}\,\psi(x) \;=\; \left[-\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V(x)\right]\psi(x)$$

If you look at the TISE, you’ll see two stark differences:

Firstly, there is no time dependence (though that should be implied by the name of the equation). Standing waves remain constant over time (except in phase), so the probability density of the particle remains constant. Thus, the time at which the wavefunction is captured is irrelevant.

Secondly, instead of the partial derivative with respect to time on the left-hand side (which represents our energy operator), we have a constant energy (E), which represents the energy of the state. Therefore, the solutions to the TISE are energy eigenstates.

If you want to learn more about Quantum Mechanics before we dive in, I highly recommend checking out MIT OpenCourseWare at https://ocw.mit.edu, which has some excellent lectures on Quantum Mechanics.

HOW DO WE SOLVE IT, THOUGH?

Because of the simplified structure of the TISE, we can use a few clever (in my opinion, at least) tricks to solve for the energy levels and eigenstates.

If you’ve taken linear algebra, the word eigenstates probably reminds you of a similar term: eigenvectors. Let’s examine the TISE again, where we see a very familiar structure.

$$E\,\psi(x) \;=\; \hat{H}\,\psi(x)$$

On the right-hand side, we have an operator (a.k.a. the Hamiltonian) being applied to our eigenstate, and on the left-hand side, we have the scalar E (a.k.a. the total energy) scaling that same eigenstate.

$$\lambda\,\vec{v} \;=\; A\,\vec{v}$$

Here we have the equation describing the relationship between a transformation matrix A, an eigenvector v, and its corresponding eigenvalue λ. Just like in the TISE, the matrix A plays the role of the Hamiltonian, and the eigenvalue λ plays the role of the energy.

THAT ALL SEEMS GREAT, BUT HOW DO WE TURN THE OPERATOR INTO A MATRIX?

Unfortunately, we can’t perfectly convert the operator into a matrix. However, we can closely approximate the operator using massive matrices.

Think back to your introductory calculus class, when you had to differentiate with the difference quotient. Surprise, surprise! We’re doing that again (but in a slightly different manner).

In this instance, we will use a finite number of points to sample the wavefunction. We can approximate the second derivative with the difference quotient like so:

$$\frac{d^2\psi}{dx^2}\bigg|_{x_n} \;\approx\; \frac{\psi(x_{n+1}) - 2\,\psi(x_n) + \psi(x_{n-1})}{(\Delta x)^2}$$

If we take a look at the difference quotient here, we notice that it is symmetric: the samples used to calculate the second derivative lie symmetrically around the point at which we seek the derivative. This means we can create a Hermitian matrix describing the second derivative of the vector describing the wavefunction.

$$\frac{d^2}{dx^2} \;\approx\; \frac{1}{(\Delta x)^2}\begin{pmatrix} -2 & 1 & & \\ 1 & -2 & 1 & \\ & 1 & -2 & \ddots \\ & & \ddots & \ddots \end{pmatrix}$$

Unfortunately, we cannot wholly describe a derivative with this matrix representation (or with any finite-size matrix).

However, with this revelation of using matrices to approximate derivatives, we are one step closer to solving this problem.
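Before we build the full Hamiltonian, here’s a small standalone sketch (my own check, not part of the final program) that constructs the dense second-derivative matrix by hand and tests it on sin(x), whose second derivative is -sin(x):

import numpy as np

n = 200
x = np.linspace(0, np.pi, n)
dx = x[1] - x[0]

# Tridiagonal matrix: -2 on the main diagonal, 1 on the diagonals just above and below
D2 = (np.diag(np.full(n, -2.0))
      + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / dx**2

approx = D2 @ np.sin(x)  # matrix approximation of the second derivative
exact = -np.sin(x)       # analytic second derivative

# The interior points agree closely; the first and last rows are off,
# since a finite matrix has no samples beyond the boundary
print(np.max(np.abs(approx - exact)[1:-1]))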

The Hamiltonian consists of two parts: the kinetic and the potential energy.

$$\hat{H} \;=\; \hat{T} + \hat{V} \;=\; -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V(x)$$

Our matrix representation of the second derivative can be scaled by -ℏ²/2m (built from the reduced Planck constant and the mass of our system) to give us our kinetic energy.

As for the matrix representation of the potential energy, we simply create a diagonal matrix with our sampled potential energy function.

The total energy (represented by our Hamiltonian) is the sum of the kinetic and potential energies, giving us our final matrix:

For simplicity, we will assume that the (reduced) Planck constant ℏ is 1 and the mass is 0.5, so that the coefficient -ℏ²/2m reduces to -1 and can be dropped.

$$\mathbf{H} \;=\; -\frac{1}{(\Delta x)^2}\begin{pmatrix} -2 & 1 & & \\ 1 & -2 & \ddots & \\ & \ddots & \ddots & 1 \\ & & 1 & -2 \end{pmatrix} \;+\; \begin{pmatrix} V(x_1) & & & \\ & V(x_2) & & \\ & & \ddots & \\ & & & V(x_N) \end{pmatrix}$$

Now that we have our matrix representation, we can rewrite our TISE into a form we can approximate the solution for.

Bolded variables represent our matrix approximations, and variables with arrows are vectors.

$$\mathbf{H}\,\vec{\psi} \;=\; E\,\vec{\psi}$$

Finally, we have an eigenvalue problem we can solve! Let’s dive in!

AT LAST, THE GOOD STUFF: CODE

LANDING LIBRARIES & PREPARING PARAMETERS

We’re now at the final stretch, where we can make our lives easier using code.

For this section, I’m assuming you have Python 3 and the NumPy, SciPy, and Matplotlib libraries ready to go.

First off, let’s import the necessary methods and modules!

import numpy as np
from scipy.sparse import diags #Allows us to construct our matrices
from scipy.sparse.linalg import eigsh #Solves the Eigenvalue problem
import matplotlib.pyplot as plt


Because our matrices are mostly just filled with zeros, we can use sparse matrices to hold our approximation for the Hamiltonian to conserve storage and optimize our calculations.

The scipy.sparse module provides a vast array of methods that we can use to make sure our program is efficient.
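As a quick sanity check (a toy example, not part of our program), here’s what the diags method we’ll use later produces for a tiny 4x4 case:

from scipy.sparse import diags

# Scalar values are broadcast along each diagonal; offsets pick the diagonals they fill
tiny = diags([1.0, -2.0, 1.0], offsets=[-1, 0, 1], shape=(4, 4))
print(tiny.toarray())
# [[-2.  1.  0.  0.]
#  [ 1. -2.  1.  0.]
#  [ 0.  1. -2.  1.]
#  [ 0.  0.  1. -2.]]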

Now that we’ve imported the necessary information, let’s set the parameters that allow us to keep our program as flexible as possible.

First off, let’s decide how accurate we want our program to be.

Our variable N will determine the size of our matrices, where a bigger matrix typically tends to be more accurate. This comes with the heavy price of computational expense: the bigger our matrices, the longer the eigenstates will take to compute.

I find that N = 1000 is a suitable value for my device, but depending on your computational constraints, another value might be better suited for you. Play around!

Next, we’ll set the physical parameters. Remember the Planck constant and the mass of the system? We’ll set those here. For simplicity, however, we’ll just leave them as hbar = 1 and m = 0.5.

Of course, we can’t do much in the way of calculations without setting a coordinate system. Since we’re working in one dimension right now, we’ll use the variable x to keep track of our x-axis and the variables xmin and xmax to record the limits of our axis.

Let’s set xmin = -1 and xmax = 1 to keep a reasonable area. To get our x, we’ll use the np.linspace method to create our axis: x = np.linspace(xmin, xmax, N).

Of course, we also need to remember the grid spacing dx so we can scale our kinetic energy matrix appropriately. Since np.linspace includes both endpoints, N points give a spacing of dx = (xmax - xmin)/(N - 1), which we could also read off directly as dx = x[1] - x[0].

Finally, we need the potential function for which we are solving our TISE. Let’s start off with the finite well.

If you’ve taken a QM course, you’ve probably come across this potential before. It’s a fairly standard potential with easily recognizable solutions.

(Figure: the finite square well potential. Image sourced from Annafitzgerald, CC BY-SA 3.0, via Wikimedia Commons)

We’ll store the potential function in the variable U. But to create our well, we need to decide the length of the well (L) and the depth of the well (W). Since our axis spans from -1 to 1, we can set L = 1.5, so our well spans from -0.75 to 0.75. We’ll also set W = 100, because it’s nice and large, and gives us some room for adjustment (I also might have tested for some nice, peaceful-looking numbers).

To create our potential functions, we can use the np.where method, which allows us to conditionally set the values of the array depending on the x-values:

U = np.where((x < L/2) & (x > -L/2), 0, W)
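If np.where is new to you, here’s a tiny illustration (a throwaway example with made-up sample points, not part of our program) of how the condition selects between the two values:

import numpy as np

x = np.array([-1.0, -0.8, 0.0, 0.8, 1.0])
print(np.where((x < 0.75) & (x > -0.75), 0, 100))
# [100 100   0 100 100]  -> 0 inside the well, W = 100 outside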

Let’s sum it all up!

### IMPORTS ###
import numpy as np
from scipy.sparse import diags #Allows us to construct our matrices
from scipy.sparse.linalg import eigsh #Solves the Eigenvalue problem
import matplotlib.pyplot as plt

### CONSTANTS ###
N = 1000
hbar = 1
m = 0.5
xmin = -1
xmax = 1
x = np.linspace(xmin, xmax, N)
dx = (xmax - xmin)/(N - 1)
L = 1.5
W = 100
U = np.where((x < L/2) & (x > -L/2), 0, W)

Tada! We have all of our constants set up!

MANUFACTURING MATRICES & EVALUATING EIGENSTATES

Next, we have to produce the matrices to actually calculate our eigenstates. Thankfully, all the hard work of figuring out how these matrices work has already been done.

Let’s start off with the matrix for the second-derivative.

$$\frac{d^2}{dx^2} \;\approx\; \frac{1}{(\Delta x)^2}\begin{pmatrix} -2 & 1 & & \\ 1 & -2 & 1 & \\ & 1 & -2 & \ddots \\ & & \ddots & \ddots \end{pmatrix}$$

To create this matrix, we’ll use the scipy.sparse.diags method. This method allows us to feed in a list of floats or ndarrays containing the values for the diagonals, along with an int list describing the offsets (which diagonals our values will live on).

We see that the -2 values are on the main diagonal (offset = 0). The 1’s are on the closest upper and lower diagonals (offsets = ∓1).

We also can’t forget our scaling factor 1/(dx**2)!

Plugging that all into the function, we get our variable d2dx2:

d2dx2 = diags([1, -2, 1], offsets=[-1, 0, 1], shape=(N, N))/(dx**2)

From here, we can create our kinetic energy operator (T), which is just d2dx2 scaled by -(hbar**2)/(2 * m), to give us:

T = -(hbar**2)/(2 * m) * d2dx2

We’re nearly halfway done!

To create our potential energy operator (V), all we have to do is stick U on a diagonal, which is easier done than said.

V = diags([U], offsets=[0], shape=(N, N))

And with that, our potential energy operator is complete.

Finally, to get our Hamiltonian we just sum the potential energy and kinetic energy.

H = T + V
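Before handing H to the eigensolver, it doesn’t hurt to verify that our Hamiltonian really is symmetric, since eigsh assumes a symmetric (real Hermitian) matrix. This is an optional check I’m adding, not part of the original recipe:

# Optional sanity check: a symmetric matrix satisfies H == H.T,
# so the largest entry of |H - H.T| should be exactly zero here
print(np.abs((H - H.T).toarray()).max())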

With the Hamiltonian, all we have to do now is actually calculate the eigenvalues and eigenvectors.

Buuut, before we do that, let’s discuss our eigenvalue and eigenvector method, eigsh.

In our problem, the eigenvalues represent the energies of our wavefunctions, so we aim to find the smallest eigenvalue and eigenvector pairs.

Unfortunately, eigsh, by default, returns the largest eigenvalues. However, it does have a handy keyword parameter, which="SM" (smallest magnitude), to return the smallest eigenvalues first.

eigvals, eigvecs = eigsh(H, which="SM")

eigvals and eigvecs are our two variables containing the eigenvalues and eigenvectors, respectively.

To tie it all together:

### PREPARE MATRICES ###
d2dx2 = diags([1, -2, 1], offsets=[-1, 0, 1], shape=(N, N))/(dx**2)
T = -(hbar**2)/(2 * m) * d2dx2
V = diags([U], offsets=[0], shape=(N, N))
H = T + V

### CALCULATE EIGENSTATES ###
eigvals, eigvecs = eigsh(H, which="SM", k=k)

You’ll notice there is an extra parameter, k; that’s just the number of solutions you want returned. The default is 6. (We’ll also define k = 6 explicitly in the final program, since the plotting loop below uses it.)
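One caveat worth knowing: which="SM" can be slow for large matrices, because the underlying iteration converges fastest to the largest eigenvalues. If it feels sluggish, eigsh also accepts a sigma parameter that switches to shift-invert mode and returns the eigenvalues nearest sigma; with sigma=0, that again means the smallest energies. This is an optional tweak, not part of the original recipe:

# Optional: shift-invert mode, often much faster for the smallest eigenvalues
eigvals, eigvecs = eigsh(H, k=6, sigma=0, which="LM")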

VISUALIZING VECTORS & PLOTTING PROBABILITIES

Of course, as they say, “A picture is worth a thousand words.” I’d like this code to reflect that, so we’ll create some nice visualization code.

First, let’s analyze what our two variables (eigvals and eigvecs) look like.

eigvals: an np.ndarray of floats with shape (k,) listing the eigenvalues.

eigvecs: an np.ndarray of floats with shape (N, k) listing the eigenvectors.

To get the pairs of eigenvalues and eigenvectors: eigvals[i] pairs with eigvecs[:, i].
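Before plotting, a quick numeric sanity check is reassuring (my own addition, not part of the original walkthrough). For a deep well, the lowest energies should sit a little below the corresponding infinite-square-well levels, E_n = n²π²ℏ²/(2mL²):

# Compare our numerical energies against the infinite-well reference levels
n = np.arange(1, len(eigvals) + 1)
E_infinite = (n * np.pi * hbar) ** 2 / (2 * m * L**2)

for E_num, E_ref in zip(np.sort(eigvals), E_infinite):
    print(f"numerical: {E_num:8.3f}   infinite well: {E_ref:8.3f}")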

Now that we have that, let’s start plotting! (Yay, we can finally bring in the Matplotlib!)

First, let’s plot our potential:

plt.plot(x, U)

To visualize the i-th eigenvalue and eigenvector, we will plot a dotted line at the energy level along with the eigenvector itself. Of course, these eigenvectors aren’t naturally positioned at their energy level on the plot, so for our visualization we’ll simply add the energy level to the eigenvector (read: wavefunction) to translate it to the appropriate location.

We’re also going to display our probability distribution. If you remember, the probability distribution is the product of the wavefunction with its conjugate. Thankfully, the .conj() method of np.ndarray comes to our rescue!

plt.plot(x, np.full_like(x, eigvals[i]), "k--")
plt.plot(x, eigvals[i] + eigvecs[:, i] * scale, 'k')
plt.plot(x, eigvals[i] + eigvecs[:, i] * eigvecs[:, i].conj() * scale ** 2, 'b')

Sorry, sorry: once again, I threw in an extra parameter, scale (I prefer scale = 60). All for good reason, though. Depending on the scale of the system, it can be helpful to scale the wavefunctions up to a more visible size.
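A related note while we’re fudging scales: eigsh returns unit-norm vectors (their squared components sum to 1), not wavefunctions normalized over x. If you ever need a true probability density, a sketch like this (my addition, not needed for the plots) rescales one:

# eigsh gives sum(psi**2) == 1; dividing by sqrt(dx) makes the
# integral of |psi|^2 over x equal to 1 instead
psi = eigvecs[:, 0] / np.sqrt(dx)
print(np.sum(np.abs(psi) ** 2) * dx)  # ~1.0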

Throwing in the loop we get:

for i in range(k):
    plt.plot(x, np.full_like(x, eigvals[i]), "k--")
    plt.plot(x, eigvals[i] + eigvecs[:, i] * scale, 'k')
    plt.plot(x, eigvals[i] + eigvecs[:, i] * eigvecs[:, i].conj() * scale ** 2, 'b')

Of course, we can’t have a proper display without showing our plot, so:

plt.show()

Wrapping it all up, our plotting demands can be satisfied in just six very simple lines:

### PLOTTING ###
plt.plot(x, U)
for i in range(k):
    plt.plot(x, np.full_like(x, eigvals[i]), "k--")
    plt.plot(x, eigvals[i] + eigvecs[:, i] * scale, 'k')
    plt.plot(x, eigvals[i] + eigvecs[:, i] * eigvecs[:, i].conj() * scale ** 2, 'b')
plt.show()

CONCATENATING CODE

We’re almost at the end of our thrilling journey of actually programming the “solver,” but we’re not entirely done yet. Let’s combine all the code together to create our final program.

### IMPORTS ###
import numpy as np
from scipy.sparse import diags #Allows us to construct our matrices
from scipy.sparse.linalg import eigsh #Solves the Eigenvalue problem
import matplotlib.pyplot as plt

### CONSTANTS ###
N = 1000
hbar = 1
m = 0.5
xmin = -1
xmax = 1
x = np.linspace(xmin, xmax, N)
dx = (xmax - xmin)/(N - 1)
L = 1.5
W = 100
U = np.where((x < L/2) & (x > -L/2), 0, W)
k = 6       # Number of eigenstates to compute
scale = 60  # Visual scaling for the plotted wavefunctions

### PREPARE MATRICES ###
d2dx2 = diags([1, -2, 1], offsets=[-1, 0, 1], shape=(N, N))/(dx**2)
T = -(hbar**2)/(2 * m) * d2dx2
V = diags([U], offsets=[0], shape=(N, N))
H = T + V

### CALCULATE EIGENSTATES ###
eigvals, eigvecs = eigsh(H, which="SM", k=k)

### PLOTTING ###
plt.plot(x, U)
for i in range(k):
    plt.plot(x, np.full_like(x, eigvals[i]), "k--")
    plt.plot(x, eigvals[i] + eigvecs[:, i] * scale, 'k')
    plt.plot(x, eigvals[i] + eigvecs[:, i] * eigvecs[:, i].conj() * scale ** 2, 'b')
plt.show()

Our graphs have four main parts: the massive potential well is fairly self-explanatory, the blue squiggles are our probability distributions, the black curves are our wavefunctions, and, as mentioned earlier, the dashed black lines represent our energy levels. As far as I can see, our wavefunctions match up extremely well with the known solutions. Notice how the probability of being found outside the well increases as the energy level increases; thankfully, our “solver” gives us that very same behavior.

Now that we’ve got the Finite Potential Well down, let’s test out another classroom favorite: the Harmonic Oscillator.

With the Harmonic Oscillator (U = 150 * x**2), we get evenly spaced energy levels.
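To reproduce this yourself, only the potential line in the CONSTANTS block needs to change:

# Harmonic oscillator: swap the finite-well line for a parabola
U = 150 * x**2

As a check on the spacing: matching U = ½mω²x² to 150x² with m = 0.5 gives ω = √600 ≈ 24.5, so with ℏ = 1 the levels should sit roughly 24.5 apart (at least for states well below the edges of our x range).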

Just as expected, we see the equally spaced energy levels of our harmonic oscillator. Another good sign is the symmetry of our probability distributions and the alternating symmetry (and antisymmetry) of our wavefunctions.

CONCLUDING REMARKS

Naturally, computational quantum mechanics is quite a journey (I mean, look, we just spent around a thousand words figuring out how to convert operators into matrices). However, it’s been worthwhile; I’ll keep learning as I explore the field, and the next time I have something cool or unusual, I’ll be sure to continue the series.

If you ever want to learn more about QM, I definitely recommend heading to the MIT OpenCourseWare website. They have plenty of courses available for free, including Quantum Physics I.

If you want to check out other articles by me, feel free to head to my account. I only have one other article at the moment, but more will definitely be coming!

If you have any suggestions, comments, or recommendations, please let me know in the comments; I’d love some constructive criticism.

Also published at https://arvin-singhk.medium.com/solving-for-eigenstates-the-time-independent-schr%C3%B6dinger-equation-b09278c4f1f1