A Power-Packed Light Reading on Data Structures

by ayoub, May 7th, 2020

When we talk about data structures and algorithms, it's very important to understand concepts like Big O Notation and time complexity. These are the metrics we use to choose the right data structure. Knowing how memory, linked lists, and arrays work also gives us a better understanding of the speed and performance of a data structure.

Why Do We Need to Know Time Complexities?

A developer must choose the right data structures for the job. This choice is often ignored, and it ends up badly impacting the performance of the application. In this tutorial, I want to teach you some important data structure concepts. First, let's talk about Big O Notation.

What does Big O Notation Mean?

Big O Notation is a special notation that tells you how fast an algorithm is. It’s called Big O Notation because you put a “big O” in front of the number of operations (it sounds like a joke, but it’s true!). We use it to describe the key operations of a data structure.

By “operations” we mean the steps an algorithm performs. These could include iterating over a collection, copying an item or the entire collection, appending an item to a collection, inserting an item at the start or end of a collection, deleting an item, or updating an item in a collection. Big O Notation is a way to measure the performance of an operation based on the input size.

Common Big O Notation

Let’s consider n to be the size of the input collection.

O(1): The time it takes to perform the operation is constant; it does not depend on the input size. Example: getting the length of a Python list.

>>> numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
>>> len(numbers)
15

O(log n): When the size of a collection increases, the time it takes to perform an operation increases logarithmically. Example: Binary search.
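As an illustration (not part of the original example set), here is a minimal sketch of binary search on a sorted Python list; each step halves the remaining range, which is where the logarithm comes from:

# A minimal binary search sketch: halve the search range each step -> O(log n).
def binary_search(sorted_items, target):
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return None

numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
print(binary_search(numbers, 11))  # -> 10 (the index of 11)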

O(n): The time it takes to perform an operation is directly related to the number of items in the collection. Example: the max and min operations on a Python list.

>>> numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
>>> max(numbers)
15

O(n²): Quadratic. The time grows with the square of the input size, typical of slow sorting algorithms. Example: selection sort.
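For reference, a minimal selection sort sketch looks like this; for every position it rescans the rest of the list, which is roughly n × n comparisons:

# A minimal selection sort sketch: an outer pass per position, an inner scan
# over the remaining items -> O(n^2).
def selection_sort(items):
    items = list(items)                      # work on a copy
    for i in range(len(items)):
        smallest = i
        for j in range(i + 1, len(items)):   # scan the unsorted remainder
            if items[j] < items[smallest]:
                smallest = j
        items[i], items[smallest] = items[smallest], items[i]
    return items

print(selection_sort([5, 3, 6, 2, 10]))  # -> [2, 3, 5, 6, 10]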

O(n!): A really slow algorithm. Example: the brute-force solution to the traveling salesperson problem.
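As a rough sketch of that brute-force approach (the city names and distances below are made up purely for illustration), it tries every possible ordering of the cities, and the number of orderings is n!:

from itertools import permutations

# Made-up, symmetric distance table between four cities.
distances = {
    ("A", "B"): 10, ("A", "C"): 15, ("A", "D"): 20,
    ("B", "C"): 35, ("B", "D"): 25, ("C", "D"): 30,
}

def dist(a, b):
    # Look the pair up in either order.
    return distances[(a, b)] if (a, b) in distances else distances[(b, a)]

def shortest_tour(cities):
    # Try every possible ordering of the cities: n! candidate tours.
    best_length, best_tour = None, None
    for tour in permutations(cities):
        length = sum(dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))
        if best_length is None or length < best_length:
            best_length, best_tour = length, tour
    return best_length, best_tour

print(shortest_tour(["A", "B", "C", "D"]))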

Examples for a better understanding:

Looking through a Python list for a particular item is O(n). To better understand this concept, let's take a look at the following example:

>>> def function(n):
...     for i in range(n):
...         print(i)

In the preceding function, the print statement will be executed n (the input size) times. The loop's running time depends on n, so its complexity, expressed in Big O notation, is O(n).

If the function has conditional branches, the correct notation to keep is the highest one, as follows:

>>> def function(n, print_count=False):
...     if print_count:
...         print('count: ', n)
...     else:
...         for i in range(n):
...             print(i)
...

In this example, the function could be O(1) or O(n), depending on the value of the print_count argument. The worst case is O(n), so the whole function's complexity is O(n).

O(1) is fast. O(n²) is slow. O(n!) is very slow.

Big O Establishes a Worst-Case Run Time

Suppose you’re using simple search to look for a person in the phone book. You know that simple search takes O(n) time to run, which means in the worst case, you’ll have to look through every single entry in your phone book.

In this case, you’re looking for Ayoub. This handsome guy is the first entry in your phone book, so you didn’t have to look at every entry; you found him on the first try. Did this algorithm take O(n) time, or did it take O(1) time because you found the person on the first try? Simple search still takes O(n) time. In this case, you found what you were looking for instantly; that’s the best-case scenario. But Big O notation is about the worst-case scenario. So you can say that, in the worst case, you’ll have to look at every entry in the phone book once. That’s O(n) time. It’s a reassurance: you know that simple search will never be slower than O(n) time.

How Does Memory Work?

Imagine you have a big chest of drawers in your home. Each drawer can hold one item, and you want to store two things, so you ask your mother which drawers you can put your items in, and she tells you which ones to use! This is basically how your computer’s memory works. Your computer looks like a giant set of drawers, and each drawer has an address. Whenever you want to store an item in memory, you ask the computer for some space, and it gives you an address where you can store your item. If you want to store multiple items, there are two basic ways to do so: arrays and linked lists.
I’ll talk about arrays and linked lists next, as well as the pros and cons of each. There isn’t one right way to store items for every use case, so it’s important to know the differences.

What Are Linked Lists and Arrays?

Sometimes you need to store a list of elements in memory. Suppose you’re writing an app to manage your todos. You’ll want to store the todos as a list in memory. Should you use an array, or a linked list? Let’s store the todos in an array first, because it’s easier to grasp. Using an array means all your tasks are stored contiguously (right next to each other) in memory.

Now suppose you add three tasks and then want to add a fourth. But the next drawer is taken up by someone else’s stuff! It’s like going to a movie with your friends and finding a place to sit, but then another friend joins you and there’s no seat for them; you all have to move to a new spot where you fit. In this case, you need to ask your computer for a different chunk of memory that can fit four tasks, and then move all your tasks there. If another friend comes by, you’re out of room again and you all have to move a second time! What a pain. Similarly, adding new items to an array can be a big pain. If you’re out of space and need to move to a new spot in memory every time, adding a new item will be really slow. One easy fix is to “hold seats”: even if you have only 3 items in your task list, you can ask the computer for 10 slots, just in case. Then you can add 10 items to your task list without having to move. This is a good workaround, but you should be aware of a couple of downsides:

  • You may not need the extra slots that you asked for, and then that memory will be wasted. You aren’t using it, but no one else can use it either.
  • You may add more than 10 items to your task list and have to move anyway.

So it’s a good workaround, but it’s not a perfect solution; linked lists solve this problem of adding items.
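As an aside, CPython’s built-in list already applies this “hold seats” trick under the hood: it over-allocates so that repeated appends rarely force a move. A rough way to see it (the exact byte counts depend on your interpreter version):

import sys

# Watch the allocated size of a list as items are appended: it grows in
# steps rather than on every single append, because extra slots are reserved.
tasks = []
last_size = 0
for i in range(10):
    tasks.append(f"task {i}")
    size = sys.getsizeof(tasks)
    if size != last_size:
        print(f"{len(tasks)} items -> {size} bytes allocated")
        last_size = size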

Linked Lists

With linked lists, your items can be anywhere in memory. Each item stores the address of the next item in the list, so a bunch of random memory addresses are linked together. It’s like a treasure hunt: you go to the first address, and it says, “The next item can be found at address 123.” So you go to address 123, and it says, “The next item can be found at address 847,” and so on. Adding an item to a linked list is easy: you stick it anywhere in memory and store the address with the previous item. With linked lists, you never have to move your items.

You also avoid another problem. Let’s say you go to a popular movie with five of your friends. The six of you are trying to find a place to sit, but the theater is packed and there aren’t six seats together. Well, sometimes this happens with arrays. Let’s say you’re trying to find 10,000 slots for an array. Your memory has 10,000 slots, but it doesn’t have 10,000 slots together. You can’t get space for your array! A linked list is like saying, “Let’s split up and watch the movie.” If there’s space in memory, you have space for your linked list. If linked lists are so much better at inserts, what are arrays good for?
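Here is a minimal sketch of a linked list in Python (the class names are just illustrative); each node holds a value plus a reference to the next node, so adding at the front never requires moving anything:

# A tiny singly linked list: each node stores a value and the "address"
# (reference) of the next node.
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    def prepend(self, value):
        # O(1): the new node can live anywhere in memory; we only record
        # where the old head was.
        node = Node(value)
        node.next = self.head
        self.head = node

    def items(self):
        # Reading everything means following the links one by one: O(n).
        current = self.head
        while current is not None:
            yield current.value
            current = current.next

todos = LinkedList()
for task in ["sleep", "write code", "walk dog"]:
    todos.prepend(task)
print(list(todos.items()))  # -> ['walk dog', 'write code', 'sleep']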

Arrays

Websites with top-10 lists use a scummy tactic to get more page views. Instead of showing you the list on one page, they put one item on each page and make you click Next to get to the next item in the list. For example, Top 10 Best TV Villains won’t show you the entire list on one page. Instead, you start at #10 (Newman), and you have to click Next on each page to reach #1 (Gustavo Fring). This technique gives the websites 10 whole pages on which to show you ads, but it’s boring to click Next 9 times to get to #1. It would be much better if the whole list was on one page and you could click each person’s name for more info.

Linked lists have a similar problem. Suppose you want to read the last item in a linked list. You can’t just read it, because you don’t know what address it’s at. Instead, you have to go to item #1 to get the address for item #2. Then you have to go to item #2 to get the address for item #3, and so on, until you get to the last item. Linked lists are great if you’re going to read all the items one at a time: you can read one item, follow the address to the next item, and so on. But if you’re going to keep jumping around, linked lists are terrible.

Arrays are different. You know the address for every item in your array. For example, suppose your array contains five items, and you know it starts at address 00. What is the address of item #5? Simple math tells you: it’s 04. Arrays are great if you want to read random elements, because you can look up any element in your array instantly. With a linked list, the elements aren’t next to each other, so you can’t instantly calculate the position of the fifth element in memory; you have to go to the first element to get the address of the second element, then go to the second element to get the address of the third element, and so on until you get to the fifth element.
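A small sketch of that difference in random access: a Python list can jump straight to any index, while a chain of linked nodes has to be walked from the front (the chain below is built from plain dicts purely for illustration):

todos = ["walk dog", "buy milk", "write code", "read", "sleep"]

# Array-style access: the position of item #5 is computed directly -> O(1).
print(todos[4])  # -> 'sleep'

# Linked-list-style access: build a chain of nodes, each knowing only the
# next one, then follow four links to reach the fifth item -> O(n).
node = None
for value in reversed(todos):
    node = {"value": value, "next": node}

current = node
for _ in range(4):
    current = current["next"]
print(current["value"])  # -> 'sleep'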

Summary

This article was about the Big O notation of the key operations of data structures. Big O notation is essentially a way to measure the time complexity of an operation. The article also illustrated linked lists and arrays and how each manages data in memory.