NumPy Tutorial: Array Computing in Python

by Karlijn Willems, January 19th, 2017

This article was originally published at https://www.datacamp.com/community/tutorials/python-numpy-tutorial

NumPy is, just like SciPy, Scikit-Learn and Pandas, one of the packages that you just can’t miss when you’re learning data science, mainly because this library provides you with an array data structure that holds some benefits over Python lists: it is more compact, faster for reading and writing items, more convenient and more efficient.

These benefits had already been described in our 18 Most Common Python List Questions blog post, which taught you how to work with Python lists, how to survive some of the tough questions that you might have while working with them and when to opt for other data structures, such as the NumPy array.

Today’s post will focus on exactly that last data structure. This NumPy tutorial will not only show you what NumPy arrays actually are and how you can install NumPy, but you’ll also learn how to make arrays (even when your data comes from files!), how broadcasting works, how you can ask for help, how to manipulate your arrays and how to visualize them.

If you want to know even more about NumPy arrays and the other data structures that you will need in your data science journey, consider taking a look at DataCamp’s Intro to Python for Data Science and Intermediate Python for Data Science courses.

What Is a NumPy Array?

You already read in the introduction that NumPy arrays are a bit like Python lists, but still very much different at the same time. For those of you who are new to the topic, let’s clarify what it exactly is and what it’s good for.

As the name kind of gives away, a NumPy array is a central data structure of the numpy library. The library’s name is actually short for “Numeric Python” or “Numerical Python”.

This already gives an idea of what you’re dealing with, right?

In other words, NumPy is the core library for scientific computing in Python. It contains a collection of tools and techniques that can be used to solve mathematical models of problems in science and engineering on a computer. One of these tools is a high-performance multidimensional array object: a powerful data structure for the efficient computation of arrays and matrices. To work with these arrays, there’s also a huge amount of high-level mathematical functions that operate on these matrices and arrays.

Then, what is an array?

When you look at the printed output of a couple of arrays, you can see them as grids that contain values of the same type. An array holds and represents any regular data in a structured way.

However, you should know that, on a structural level, an array is basically nothing but pointers. It’s a combination of a memory address, a data type, a shape and strides:

  • The data pointer indicates the memory address of the first byte in the array,
  • The data type or dtype pointer describes the kind of elements that are contained within the array,
  • The shape indicates the shape of the array, and
  • The strides are the number of bytes that should be skipped in memory to go to the next element. If your strides are (10,1), you need to proceed one byte to get to the next column and 10 bytes to locate the next row.

Or, in other words, an array contains information about the raw data, how to locate an element and how to interpret an element.

Enough of the theory. Let’s check this out ourselves:

You can easily test this by exploring the numpy array attributes.
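For example, here’s a minimal sketch: the arrays my_2d_array and my_3d_array below are made up purely for illustration, and later examples in this tutorial reuse them.

import numpy as np

# A 2-D and a 3-D array to experiment with
my_2d_array = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])
my_3d_array = np.array([[[1, 2, 3, 4], [5, 6, 7, 8]],
                        [[1, 2, 3, 4], [9, 10, 11, 12]]])

# Print out the memory address, the data type, the shape and the strides
print(my_2d_array.data)
print(my_2d_array.dtype)     # e.g. int64
print(my_2d_array.shape)     # (2, 4)
print(my_2d_array.strides)   # e.g. (32, 8) for a C-ordered int64 array
print(my_3d_array.shape)     # (2, 2, 4)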

In 2-dimensional arrays, you have rows and columns. The rows are indicated as “axis 0”, while the columns are “axis 1”. The number of axes goes up with the number of dimensions: in 3-D arrays, of which you have also seen an example in the previous code chunk, you’ll have an additional “axis 2”. Note that these axes only become relevant for arrays that have at least 2 dimensions, as there is no point in distinguishing them for 1-D arrays.

These axes will come in handy later when you’re manipulating the shape of your NumPy arrays.

How To Install NumPy

Before you can start to try out these NumPy arrays for yourself, you first have to make sure that you have it installed locally (assuming that you’re working on your pc). If you have the Python library already available, go ahead and skip this section :)

If you still need to set up your environment, you must be aware that there are two major ways of installing NumPy on your pc: with the help of Python wheels or the Anaconda Python distribution.

Make sure firstly that you have Python installed. You can go here if you still need to do this :)

If you’re working on Windows, make sure that you have added Python to the PATH environment variable. Then, don’t forget to install a package manager, such as pip, which will ensure that you’re able to use Python’s open-source libraries.

Note that recent versions of Python 3 come with pip, so double check if you have it and if you do, upgrade it before you install NumPy:

pip install pip --upgrade
pip --version

Next, you can go here or here to get your NumPy wheel. After you have downloaded it, navigate to the folder on your pc that stores it through the terminal and install it:

install "numpy-1.9.2rc1+mkl-cp34-none-win_amd64.whl" import numpy numpy.__version__

The last two lines, which you run in a Python session, allow you to verify that you have installed NumPy and to check the version of the package.

After these steps, you’re ready to start using NumPy!

To get NumPy, you could also download the Anaconda Python distribution. This is easy and will allow you to get started quickly! If you haven’t downloaded it already, go here to get it. Follow the instructions to install and you’re ready to start!

Do you wonder why this might actually be easier?

The good thing about getting this Python distribution is the fact that you don’t need to worry too much about separately installing NumPy or any of the major packages that you’ll be using for your data analyses, such as pandas, scikit-learn, etc.

Because, especially if you’re very new to Python, programming or terminals, it can really come as a relief that Anaconda already includes 100 of the most popular Python, R and Scala packages for data science. But also for more seasoned data scientists, Anaconda is the way to go if you want to get started quickly on tackling data science problems.

What’s more, Anaconda also includes several open source development environments such as Jupyter and Spyder. If you’d like to start working with Jupyter Notebook after this tutorial, go to this page.

In short, consider downloading Anaconda to get started on working with numpy and other packages that are relevant to data science!

How To Create NumPy Arrays

So, now that you have set up your environment, it’s time for the real work. Admittedly, you have already tried out some stuff with arrays in the code chunks above. However, you haven’t really gotten any real hands-on practice with them, because you first needed to install NumPy on your own pc. Now that you have done this, it’s time to see what you need to do in order to run the above code chunks on your own.

Some exercises have been included below so that you can already practice how it’s done before you start on your own!

To make a numpy array, you can just use the np.array() function. All you need to do is pass a list to it and optionally, you can also specify the data type of the data. If you want to know more about the possible data types that you can pick, go here or consider taking a brief look at DataCamp’s NumPy cheat sheet.

There’s no need to go and memorize these NumPy data types if you’re a new user, but you do have to know and care about what data you’re dealing with. The data types are there when you need more control over how your data is stored in memory and on disk. Especially in cases where you’re working with large data, it’s good to know how to control the storage type.

Don’t forget that, in order to work with the np.array() function, you need to make sure that the numpy library is present in your environment. The NumPy library follows an import convention: when you import this library, you have to make sure that you import it as np. By doing this, you’ll make sure that other Pythonistas understand your code more easily.
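As a minimal example (the values and the name my_array are just an illustration; later examples reuse this array):

import numpy as np

# Make a 1-D NumPy array from a Python list and explicitly pick the data type
my_array = np.array([1, 2, 3, 4], dtype=np.int64)
print(my_array)          # [1 2 3 4]
print(my_array.dtype)    # int64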

If you would like to know more about how to make lists, go here.

However, sometimes you don’t know what data you want to put in your array or you want to import data into a numpy array from another source. In those cases, you’ll make use of initial placeholders or functions to load data from text into arrays, respectively.

The following sections will show you how to do this.

Creating Empty Arrays

What people often mean when they say that they are creating “empty” arrays is that they want to make use of initial placeholders, which you can fill up afterwards. You can initialize arrays with ones or zeros, but you can also make arrays that get filled up with evenly spaced values, constant or random values.

However, you can still make a totally empty array, too.

Luckily for us, there are quite a lot of functions to make arrays:

  • For some, such as np.ones(), np.random.random(), np.empty(), np.full() or np.zeros(), the only thing that you need to do in order to make the array is pass the shape of the array that you want to make. As an option to np.ones() and np.zeros(), you can also specify the data type. In the case of np.full(), you also have to specify the constant value that you want to insert into the array.
  • With np.linspace() and np.arange() you can make arrays of evenly spaced values. The difference between these two functions is that the last of the three values that you pass (see the sketch below) designates the number of samples for np.linspace() and the step value for np.arange(). In the first case you say, for example, that you want an array of 9 values that lie between 0 and 2. In the latter case, you specify that you want an array that starts at 10 and, in steps of 5, generates values up to the stop value (which itself is not included).
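Here’s a quick sketch of what calls to these placeholder functions could look like (the shapes and values are arbitrary):

import numpy as np

np.ones((3, 4))                       # an array of ones
np.zeros((2, 3, 4), dtype=np.int16)   # an array of zeros with a given data type
np.random.random((2, 2))              # an array of random values between 0 and 1
np.empty((3, 2))                      # an "empty" (uninitialized) array
np.full((2, 2), 7)                    # a constant array filled with the value 7
np.linspace(0, 2, 9)                  # 9 evenly spaced samples between 0 and 2
np.arange(10, 25, 5)                  # values from 10 up to (not including) 25, in steps of 5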

Remember that NumPy also allows you to create an identity array or matrix with np.eye() and np.identity(). An identity matrix is a square matrix of which all elements in the principal diagonal are ones and all other elements are zeros. When you multiply a matrix with an identity matrix, the given matrix is left unchanged.

In other words, if you multiply a matrix by an identity matrix, the resulting product will be the same matrix again by the standard conventions of matrix multiplication.

Even though the focus of this tutorial is not on demonstrating how identity matrices work, it suffices to say that identity matrices are useful when you’re starting to do matrix calculations: they can simplify mathematical equations, which makes your computations more efficient and robust.
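As a small illustration with a hypothetical 3 x 3 matrix:

import numpy as np

identity = np.eye(3)                  # a 3 x 3 identity matrix; np.identity(3) is equivalent
matrix = np.arange(9).reshape(3, 3)

# Multiplying by the identity matrix leaves the matrix unchanged
print(np.array_equal(matrix.dot(identity), matrix))   # True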

Creating arrays with the help of initial placeholders or with some example data is a great way of getting started with numpy. But when you want to get started with data analysis, you’ll need to load data from text files.

Loading Data From Files Into Arrays

With what you have seen up until now, you won’t really be able to do much. To load data from your files, make use of some specific functions, such as loadtxt() or genfromtxt().

Let’s say you have the following text files with data:

# This is your data in the text file
# Value1  Value2  Value3
# 0.2536  0.1008  0.3857
# 0.4839  0.4536  0.3561
# 0.1292  0.6875  0.5929
# 0.1781  0.3049  0.8928
# 0.6253  0.3486  0.8791

# Import your data

x, y, z = np.loadtxt('data.txt', skiprows=1, unpack=True)

In the code above, you use [loadtxt()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html#numpy.loadtxt) to load the data into your environment. You see that the first argument that both functions take is the text file data.txt. Next, there are some specific arguments for each: in the first statement, you skip the first row with skiprows=1 and you return the columns as separate arrays with unpack=True. This means that the values in column Value1 will be put in x, and so on.

Note that, in case you have comma-delimited data or if you want to specify the data type, there are also the arguments delimiter and dtype that you can add to the loadtxt() arguments.

That’s easy and straightforward, right?

Let’s take a look at your second file with data:

# Your data in the text file
# Value1  Value2  Value3
# 0.4839  0.4536  0.3561
# 0.1292  0.6875  MISSING
# 0.1781  0.3049  0.8928
# MISSING 0.5801  0.2038
# 0.5993  0.4357  0.7410

my_array2 = np.genfromtxt('data2.txt', skip_header=1, filling_values=-999)

You see that here, you resort to genfromtxt() to load the data. In this case, you have to handle some missing values that are indicated by the 'MISSING' strings. Since the genfromtxt() function converts character strings in numeric columns to nan, you can convert these values to other ones by specifying the filling_values argument. In this case, you choose to set the value of these missing values to -999.

If, by any chance, you have values that don’t get converted to nan by genfromtxt(), there’s always the missing_values argument that allows you to specify what the missing values of your data exactly are.

But this is not all.

Tip: check out this page to see what other arguments you can add to import your data successfully.

You now might wonder what the difference between these two functions really is.

The examples maybe indicated this implicitly but, in general, genfromtxt() gives you a little bit more flexibility: it’s more robust than loadtxt().

Let’s make this difference a little bit more practical: the latter, loadtxt(), only works when each row in the text file has the same number of values. So when you want to handle missing values easily, you’ll typically find it easier to use genfromtxt().

But this is definitely not the only reason.

A brief look at the number of arguments that genfromtxt() has to offer will teach you that there are really a lot more things that you can specify in your import, such as the maximum number of rows to read or the option to automatically strip white space from variables.

Save Arrays To Files

Once you have done everything that you need to do with your arrays, you can also save them to a file. If you want to save the array to a text file, you can use the savetxt() function to do this:

import numpy as np

x = np.arange(0.0, 5.0, 1.0)
np.savetxt('test.out', x, delimiter=',')

Remember that np.arange() creates a NumPy array of evenly-spaced values. The third value that you pass to this function is the step value.

There are, of course, other ways to save your NumPy arrays to text files. Check out the functions in the table below if you want to get your data to binary files or archives:

  • save(): save an array to a binary file in NumPy .npy format
  • savez(): save several arrays into an uncompressed .npz archive
  • savez_compressed(): save several arrays into a compressed .npz archive
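A minimal sketch of these binary counterparts, reusing the x array from the savetxt() example above (the file names are placeholders):

# Save a single array to a binary .npy file and load it back
np.save('my_array.npy', x)
print(np.load('my_array.npy'))

# Save several arrays into an (un)compressed .npz archive
np.savez('my_archive.npz', a=x, b=x * 2)
np.savez_compressed('my_archive_compressed.npz', a=x, b=x * 2)
archive = np.load('my_archive.npz')
print(archive['a'])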

For more information or examples of how you can use the above functions to save your data, go here or make use of one of the help functions that NumPy has to offer to get to know more instantly!

Are you not sure what these NumPy help functions are?

No worries! You’ll learn more about them in one of the next sections!

Inspecting Your Array

Besides the array attributes that have been mentioned above, namely, data, shape, dtype and strides, there are some more that you can use to easily get to know more about your arrays.
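For example, with the hypothetical my_2d_array from the earlier sketch:

print(my_2d_array.ndim)            # number of dimensions, e.g. 2
print(my_2d_array.size)            # total number of elements, e.g. 8
print(my_2d_array.itemsize)        # bytes per element, e.g. 8 for int64
print(my_2d_array.nbytes)          # total bytes consumed by the elements
print(len(my_2d_array))            # length of the first axis
print(my_2d_array.astype(float))   # a copy of the array cast to another type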

These are almost all the attributes that an array can have.

Don’t worry if you don’t feel that all of them are useful for you at this point; this is fairly normal because, just like you read in the previous section, you’ll only get to worry about memory when you’re working with large data sets.

Now that you have made your array, either by making one yourself with np.array() or one of the initial placeholder functions, or by loading in your data through the loadtxt() or genfromtxt() functions, it’s time to look more closely into the second key element that really defines the NumPy library: scientific computing.

How Does Broadcasting Work?

Before you go deeper into scientific computing, it might be a good idea to first go over what broadcasting exactly is: it’s a mechanism that allows NumPy to work with arrays of different shapes when you’re performing arithmetic operations.

To put it in a more practical context, you often have an array that’s somewhat larger and another one that’s somewhat smaller. Ideally, you want to use the smaller array multiple times to perform an operation (such as a sum, multiplication, etc.) on the larger array.

To do this, you use the broadcasting mechanism.

However, there are some rules if you want to use it. And, before you already sigh, you’ll see that these “rules” are very simple and kind of straightforward!

  • First off, to make sure that the broadcasting is successful, the dimensions of your arrays need to be compatible. Two dimensions are compatible when they are equal.
  • Two dimensions are also compatible when one of them is 1

Note that if the dimensions are not compatible, you will get a ValueError.

Tip: also test what the size of the resulting array is after you have done the computations! You’ll see that the size is actually the maximum size along each dimension of the input arrays.

  • Lastly, the arrays can only be broadcast together if they are compatible in all dimensions.

In short, if you want to make use of broadcasting, you will rely a lot on the shape and dimensions of the arrays with which you’re working.
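A small sketch of these rules in action (the arrays are chosen purely for illustration):

import numpy as np

big = np.ones((3, 4))
row = np.arange(4)                 # shape (4,) is compatible with (3, 4)
print((big + row).shape)           # (3, 4): the row is broadcast over every row of big

col = np.arange(3).reshape(3, 1)   # shape (3, 1): the size-1 dimension is stretched to 4
print((big + col).shape)           # (3, 4)

# Incompatible dimensions raise a ValueError:
# big + np.arange(3)               # ValueError: operands could not be broadcast together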

But what if the dimensions are not compatible?

What if they are not equal or if one of them is not equal to 1?

You’ll have to fix this by manipulating your array! You’ll see how to do this in one of the next sections.

How To Do Array Mathematics

You’ve seen that broadcasting is handy when you’re doing arithmetic operations. In this section, you’ll discover some of the functions that you can use to do mathematics with arrays.

As such, it probably won’t surprise you that you can just use +, -, *, / or % to add, subtract, multiply, divide or calculate the remainder of two (or more) arrays. However, a big part of why NumPy is so handy, is because it also has functions to do this. The equivalent functions of the operations that you have seen just now are, respectively, np.add(), np.subtract(), np.multiply(), np.divide() and np.remainder().

You can also easily do exponentiation and take the square root of your arrays with np.exp() and np.sqrt(), or calculate the sines or cosines of your array with np.sin() and np.cos(). Lastly, it’s also useful to mention that there’s a way for you to calculate the natural logarithm with np.log() or calculate the dot product by applying dot() to your arrays.
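In a minimal sketch (the operands are arbitrary):

import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([1.0, 0.5, 2.0, 4.0])

print(a + b, np.add(a, b))          # element-wise sum
print(a - b, np.subtract(a, b))     # element-wise difference
print(a * b, np.multiply(a, b))     # element-wise product
print(a / b, np.divide(a, b))       # element-wise division
print(a % b, np.remainder(a, b))    # element-wise remainder

print(np.sqrt(a), np.exp(a), np.log(a))   # square root, exponential, natural logarithm
print(a.dot(b))                     # the dot product of the two arrays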

But there is more.

Check out this small list of aggregate functions here.

Besides all of these functions, you might also find it useful to know that there are mechanisms that allow you to compare array elements. For example, if you want to check whether the elements of two arrays are the same, you might use the == operator. To check whether the array elements are smaller or bigger, you use the < or > operators.

This all seems quite straightforward, yes?

However, you can also compare entire arrays with each other! In this case, you use the np.array_equal() function. Just pass in the two arrays that you want to compare with each other and you’re done.

Note that, besides comparing, you can also perform logical operations on your arrays. You can start with np.logical_or(), np.logical_not() and np.logical_and(). These basically work like your typical OR, NOT and AND logical operations.

In the simplest example, you use OR to see whether your elements are the same (for example, 1), or whether one of the two array elements is 1. If both of them are 0, the result will be False. You would use AND to see whether your second element is also 1, and NOT to see whether the second element differs from 1.
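For example, with two small illustrative arrays of zeros and ones:

import numpy as np

a = np.array([1, 0, 1, 0])
b = np.array([1, 1, 0, 0])

print(a == b)                 # element-wise equality: [ True False False  True]
print(a < b)                  # element-wise "smaller than"
print(np.array_equal(a, b))   # False: the arrays are not equal as a whole

print(np.logical_or(a, b))    # True where at least one of the elements is 1
print(np.logical_and(a, b))   # True only where both elements are 1
print(np.logical_not(a))      # True where the element is 0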

Subsetting, Slicing And Indexing

Besides mathematical operations, you might also consider taking just a part of the original array (or the resulting array) or just some array elements to use in further analysis or other operations. In such case, you will need to subset, slice and/or index your arrays.

These operations are very similar to when you perform them on Python lists. If you want to check out the similarities for yourself, or if you want a more elaborate explanation, you might consider checking out DataCamp’s Python list tutorial.

If you have no clue at all on how these operations work, it suffices for now to know these two basic things:

  • You use square brackets [] as the index operator, and
  • Generally, you pass integers to these square brackets, but you can also put a colon : or a combination of the colon with integers in it to designate the elements/rows/columns you want to select.

Apart from these two points, the easiest way to see how this all fits together is by looking at some examples of subsetting. You can find some here.

Something a little bit more advanced than subsetting, if you will, is slicing. Here, you consider not just particular values of your arrays, but you go to the level of rows and columns. You’re basically working with “regions” of data instead of pure “locations”.

You’ll see that, in essence, the following holds:

a[start:end]  # items start through the end (but the end is not included!)
a[start:]     # items start through the rest of the array
a[:end]       # items from the beginning through the end (but the end is not included!)
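Applied to the hypothetical my_2d_array from the earlier sketch, subsetting and slicing look like this:

# Subsetting: pick out single elements by index
print(my_2d_array[1][2])      # the element at row 1, column 2
print(my_2d_array[1, 2])      # the same element, with the comma notation

# Slicing: pick out whole regions of rows and columns
print(my_2d_array[0:2, 1])    # rows 0 and 1, column 1
print(my_2d_array[:, 1:3])    # all rows, columns 1 and 2
print(my_2d_array[1, :])      # the complete second row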

Lastly, there’s also indexing. When it comes to NumPy, there are boolean indexing and advanced or “fancy” indexing.

(In case you’re wondering, this is true NumPy jargon, I didn’t make the last one up!)

First up is boolean indexing. Here, instead of selecting elements, rows or columns based on index number, you select those values from your array that fulfill a certain condition.
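For example, with the hypothetical my_3d_array from the earlier sketch:

# Boolean indexing: keep only the values that fulfill a condition
bigger_than_3 = (my_3d_array >= 3)
print(my_3d_array[bigger_than_3])      # a flat array with all values of 3 or more
print(my_3d_array[my_3d_array > 5])    # the condition can also be passed in directly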

Note that, to specify a condition, you can also make use of the logical operators | (OR) and & (AND). If you would want to rewrite the condition above in such a way (which would be inefficient, but I demonstrate it here for educational purposes :)), you would get bigger_than_3 = (my_3d_array > 3) | (my_3d_array == 3).

With the arrays that have been loaded in, there aren’t too many possibilities, but with arrays that contain for example, names or capitals, the possibilities could be endless!

When it comes to fancy indexing, what you basically do is the following: you pass a list or an array of integers to specify the order of the subset of rows you want to select out of the original array.

Does this sound a little bit abstract to you? Get some practice here.
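Concretely, with the hypothetical my_2d_array from before:

# Fancy indexing: pass a list of integers to pick out rows in a specific order
print(my_2d_array[[1, 0, 1, 0]])     # rows 1, 0, 1 and 0, in that order
print(my_2d_array[[1, 0], [2, 3]])   # the elements at positions (1, 2) and (0, 3)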

Asking For Help

As a short intermezzo, you should know that you can always ask for more information about the modules, functions or classes that you’re working with, especially because NumPy can be quite something when you first get started with it.

Asking for help is fairly easy.

You just make use of the specific help functions that numpy offers to set you on your way:

  • Use lookfor() to do a keyword search on docstrings. This is specifically handy if you’re just starting out, as the ‘theory’ behind it all might fade from your memory. The one downside is that you have to go through all of the search results if your query is not that specific, as is the case in the code example below. That can make the results harder to take in at a glance.
  • Use info() for quick explanations and code examples of functions, classes or modules. If you’re a person that learns by doing, this is the way to go! The only downside of using this function is probably that you need to know in which module certain attributes or functions live. If you don’t know immediately what is meant by that, check out the code example below.
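A minimal sketch of both help functions (the queries are arbitrary examples):

# Keyword search through the docstrings; a broad query returns many results
np.lookfor("mean")

# Quick documentation for a function, class or attribute
np.info(np.ndarray.dtype)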

Note that you indeed need to know that dtype is an attribute of ndarray. Also, make sure that you don’t forget to put np in front of the modules, classes or terms you’re asking information about, otherwise you will get an error message like this:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'ndarray' is not defined

You now know how to ask for help, and that’s a good thing. The next topic that this NumPy tutorial covers is array manipulation.

Not that you can not overcome this topic on your own, quite the contrary!

But some of the functions might raise questions, because, what is the difference between resizing and reshaping?

And what is the difference between stacking your arrays horizontally and vertically?

The next section is all about answering these questions, but if you ever feel in doubt, feel free to use the help functions that you have just seen to quickly get up to speed.

Array Manipulation

Performing mathematical operations on your arrays is one of the things that you’ll be doing, but to make those operations and broadcasting work, it’s probably most important to know how to manipulate your arrays.

Below are some of the most common manipulations that you’ll be doing.

What transposing your arrays actually does is permute their dimensions. Or, in other words, you switch around the shape of the array.

Tip: if the visual comparison between the array and its transposed version is not entirely clear, inspect the shape of the two arrays to make sure that you understand why the dimensions are permuted.

Note that there are two ways to transpose. Both do the same; there isn’t too much difference. You do have to take into account that T is more of a convenience attribute and that you have a lot more flexibility with np.transpose(). That’s why it’s recommended to make use of this function when you want to pass more arguments.
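In code, both options look like this, again with the hypothetical my_2d_array:

# Transpose: permute the dimensions of the array
print(np.transpose(my_2d_array))   # the shape goes from (2, 4) to (4, 2)
print(my_2d_array.T)               # the shorthand attribute gives the same result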

All is well when you transpose arrays that are bigger than one dimension, but what happens when you just have a 1-D array? Will there be any effect, you think?

Tip: try it out here.

Resizing Versus Reshaping Arrays

You might have read in the broadcasting section that the dimensions of your arrays need to be compatible if you want them to be good candidates for arithmetic operations. But the question of what you should do when that is not the case, was not answered yet.

Well, this is where you get the answer!

What you can do if the arrays don’t have the same dimensions is resize your array. You will then get back a new array that has the shape that you passed to the np.resize() function. If you pass your original array together with the new dimensions, and if that new array is larger than the one that you originally had, the new array will be filled with copies of the original array that are repeated as many times as is needed.

However, if you call the resize() method on the array itself and pass the new shape to it, any missing entries will be filled with zeros instead.

Besides resizing, you can also reshape your array. This means that you give a new shape to an array without changing its data. The key to reshaping is to make sure that the total size of the new array is unchanged. If you take, for example, an array x with a shape of 3 X 4, and thus a size of 12, you have to make sure that the new array also has a size of 12.

Psst… If you want to calculate the size of an array with code, make sure to use the size attribute: x.size.
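A short sketch of the difference between resizing and reshaping (the shapes are chosen for illustration):

import numpy as np

# np.resize() repeats the original data to fill up the larger array
x = np.array([[1, 2], [3, 4]])
print(np.resize(x, (2, 4)))              # [[1 2 3 4] [1 2 3 4]]

# The resize() method of the array itself pads with zeros instead
y = np.array([[1, 2], [3, 4]])
y.resize((2, 4), refcheck=False)         # modifies y in place; refcheck=False avoids an error in interactive sessions
print(y)                                 # [[1 2 3 4] [0 0 0 0]]

# Reshaping only works if the total size stays the same
z = np.arange(12)
print(z.reshape(3, 4))                   # a 3 x 4 array with the same 12 values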

If all else fails, you can also append an array to your original one or insert or delete array elements to make sure that your dimensions fit with the other array that you want to use for your computations.

Another operation that you might keep handy when you’re changing the shape of arrays is ravel(). This function allows you to flatten your arrays. This means that if you ever have 2D, 3D or n-D arrays, you can just use this function to flatten it all out to a 1-D array.

Pretty handy, isn’t it?

How To Append Arrays

When you append arrays to your original array, they are “glued” to the end of that original array. If you want to make sure that what you append does not come at the end of the array, you might consider inserting it. Go to the next section if you want to know more.

Appending is a pretty easy thing to do thanks to the NumPy library: you can just make use of np.append().

Next to appending, you can also insert and delete array elements. As you might have guessed by now, the functions that will allow you to do these operations are np.insert() and np.delete().
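A minimal sketch, reusing the hypothetical my_array and my_2d_array from earlier:

# Append values to the end of the 1-D my_array
print(np.append(my_array, [7, 8, 9, 10]))          # [ 1  2  3  4  7  8  9 10]

# Append a row to my_2d_array; axis=0 keeps the result 2-D
print(np.append(my_2d_array, [[9, 10, 11, 12]], axis=0))

# Insert a value at a given index, or delete the element at an index
print(np.insert(my_array, 1, 5))                   # [1 5 2 3 4]
print(np.delete(my_array, [1]))                    # [1 3 4]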

How To Stack And Split Arrays?

You can also ‘merge’ or join your arrays. There are a bunch of functions that you can use for that purpose and most of them are listed below.

You’ll note a few things as you go through the functions:

  • The number of dimensions needs to be the same if you want to concatenate two arrays with np.concatenate(). As such, if you want to concatenate an array with my_array, which is 1-D, you’ll need to make sure that the second array that you have is also 1-D.
  • With np.vstack(), you effortlessly combine my_array with my_2d_array. You just have to make sure that, as you’re stacking the arrays row-wise, the number of columns in both arrays is the same. As such, you could also add an array with shape (2,4) or (3,4) to my_2d_array, as long as the number of columns matches. Stated differently, the arrays must have the same shape along all but the first axis. The same also holds when you want to use np.r_[].
  • For np.hstack(), you have to make sure that the number of dimensions is the same and that the number of rows in both arrays is the same. That means that you could stack arrays such as (2,3) or (2,4) to my_2d_array, which itself has a shape of (2,4). Anything is possible as long as you make sure that the number of rows matches. This function is still supported by NumPy, but you should prefer np.concatenate() or np.stack().
  • With np.column_stack(), you have to make sure that the arrays that you input have the same first dimension. In this case, both shapes are the same, but if my_resized_array were to be (2,1) or (2,), the arrays still would have been stacked.
  • np.c_[] is another way to concatenate. Here also, the first dimension of both arrays needs to match.

When you have joined arrays, you might also want to split them again at some point. Just like you can split them horizontally, you can also do the same vertically. You use np.hsplit() and np.vsplit(), respectively.

What you need to keep in mind when you’re using both of these split functions is probably the shape of your array.
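A sketch that ties the stacking and splitting functions together, again reusing my_array and my_2d_array (my_resized_array is a hypothetical (2, 2) array):

# Join arrays
print(np.concatenate((my_array, my_array)))    # two 1-D arrays, glued end to end
print(np.vstack((my_array, my_2d_array)))      # stack row-wise: shape (3, 4)
print(np.hstack((my_2d_array, my_2d_array)))   # stack column-wise: shape (2, 8)

my_resized_array = np.array([[1, 2], [3, 4]])
print(np.column_stack((my_2d_array, my_resized_array)))   # shape (2, 6)
print(np.c_[my_2d_array, my_resized_array])               # the same idea with np.c_[]

# Split arrays again
print(np.hsplit(my_2d_array, 2))   # split into 2 equal pieces column-wise
print(np.vsplit(my_2d_array, 2))   # split into 2 equal pieces row-wise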

Lastly, something that will definitely come in handy is to know how you can plot your arrays. This can especially be handy in data exploration, but also in later stages of the data science workflow, when you want to visualize your arrays.

How To Plot Your Arrays

Contrary to what the name might suggest, the np.histogram() function doesn’t draw the histogram, but it does count the occurrences of the array values that fall within each bin; this determines the area that each bar of your histogram takes up.

What you pass to the np.histogram() function then is first the input data or the array that you’re working with. The array will be flattened when the histogram is computed.

You’ll see that, as a result, the histogram is computed: the first array lists the frequencies of the values that fall into each bin, while the second array lists the bin edges that would be used if you don’t specify any bins.

If you do specify a number of bins, the result of the computation will be different: the floats will be gone and you’ll see all integers for the bins.
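A minimal sketch with the hypothetical my_3d_array from earlier:

# Compute (but don't draw) the histogram of the flattened array
hist, bin_edges = np.histogram(my_3d_array)
print(hist)        # the frequencies per bin
print(bin_edges)   # the automatically chosen (float) bin edges

# With an explicit range of bins, the edges are integers
hist, bin_edges = np.histogram(my_3d_array, bins=range(0, 13))
print(hist, bin_edges)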

There are still some other arguments that you can specify that can influence the histogram that is computed. You can find all of them here.

But what is the point of computing such a histogram if you can’t visualize it?

Visualization is a piece of cake with the help of Matplotlib, but you don’t need np.histogram() to compute the histogram. plt.hist() does this for itself when you pass it the (flattened) data and the bins:

# Import numpy and matplotlib
import numpy as np
import matplotlib.pyplot as plt

# Construct the histogram with a flattened 3d array and a range of bins
plt.hist(my_3d_array.ravel(), bins=range(0,13))

# Add a title to the plot
plt.title('Frequency of My 3D Array Elements')

# Show the plot
plt.show()

The above code will then give you the following (basic) histogram:

Another way to (indirectly) visualize your array is by using np.meshgrid(). The problem that you face with arrays is that you need 2-D arrays of x and y coordinate values. With the above function, you can create a rectangular grid out of an array of x values and an array of y values: the np.meshgrid() function takes two 1D arrays and produces two 2D matrices corresponding to all pairs of (x, y) in the two arrays. Then, you can use these matrices to make all sorts of plots.

np.meshgrid() is particularly useful if you want to evaluate functions on a grid, as the code below demonstrates:

# Import NumPy and Matplotlib
import numpy as np
import matplotlib.pyplot as plt

# Create an array
points = np.arange(-5, 5, 0.01)

# Make a meshgrid
xs, ys = np.meshgrid(points, points)
z = np.sqrt(xs ** 2 + ys ** 2)

# Display the image on the axes
plt.imshow(z, cmap=plt.cm.gray)

# Draw a color bar
plt.colorbar()

# Show the plot
plt.show()

The code above gives the following result:

Data Analysis With Python: Continued

Congratulations, you have reached the end of the NumPy tutorial!

You have covered a lot of ground, so now you have to make sure to retain the knowledge that you have gained. Don’t forget to get your copy of DataCamp’s NumPy cheat sheet to support you in doing this!

After all this theory, it’s also time to get some more practice with the concepts and techniques that you have learned in this tutorial. One way to do this is to go back to the scikit-learn tutorial and start experimenting further with the data arrays that are used to build machine learning models.

If this is not your cup of tea, check again whether you have downloaded Anaconda. Then, get started with NumPy arrays in Jupyter with this Definitive Guide to Jupyter Notebook. Also make sure to check out this Jupyter Notebook, which also guides you through data analysis in Python with NumPy and some other libraries in the interactive data science environment of the Jupyter Notebook.

Lastly, consider checking out DataCamp’s courses on data manipulation and visualization. Especially our latest courses in collaboration with Continuum Analytics will definitely interest you! Take a look at the Manipulating DataFrames with Pandas or the Pandas Foundations courses.

Originally published at www.datacamp.com.