Getting the Source of a Website with Python's Requests Library

by ProPy, September 27th, 2023

The internet is a treasure trove of information, and sometimes, you might want to access the underlying HTML source code of a website for various purposes like data extraction, analysis, or automation. Python, with its rich ecosystem of libraries, makes web scraping a straightforward task.

In this article, we'll explore how to use Python to fetch the source code of a website.

1. Installing Required Libraries

To follow along with this article, you will need:


  • Python 3.6 or higher
  • The requests library

Python 3.6 or higher

Before we dive into web scraping, ensure you have Python installed on your system. You can download the latest version from python.org.

The requests library

The "Requests" library in Python is a popular and widely used library for making HTTP requests to web services, websites, and APIs. It simplifies the process of sending HTTP requests and handling HTTP responses, making it easier for developers to interact with web resources.


You can install the "Requests" library using pip, Python's package manager. Here are the steps:


  1. Open your command prompt or terminal.
  2. Run the following command to install Requests:
pip install requests
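
If you want to confirm that the installation succeeded, one quick check is to print the installed version from Python (the version string will vary depending on your machine):

import requests

# If this import succeeds, Requests is installed and ready to use
print(requests.__version__)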

2. Writing Python Code

Now that we have Requests installed, let's write a simple Python script to retrieve the source code of a website.

import requests

# Define the URL of the website you want to scrape
url = 'https://example.com'

# Send an HTTP GET request to the URL
response = requests.get(url)

# Check if the request was successful (status code 200)
if response.status_code == 200:
    # Print the HTML source code
    print(response.text)
else:
    print('Failed to retrieve the webpage. Status code:', response.status_code)

In this script:


  1. We import the Requests library with import requests.
  2. We define the url variable with the URL of the website we want to scrape. You can replace 'https://example.com' with the URL of your target website.
  3. We use requests.get(url) to send an HTTP GET request to the specified URL and store the response in the response variable.
  4. We check if the request was successful by examining the HTTP status code. A status code of 200 indicates success.
  5. If the request was successful, we print the HTML source code of the website using response.text.
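
In practice, a request can also fail before any response arrives at all, for example due to a DNS error or a timeout. As a sketch of a more defensive version of the same script (the 10-second timeout is an arbitrary value chosen for illustration):

import requests

# Define the URL of the website you want to scrape
url = 'https://example.com'

try:
    # Give up if the server doesn't respond within 10 seconds
    response = requests.get(url, timeout=10)
    # Raise an HTTPError for 4xx/5xx status codes
    response.raise_for_status()
    print(response.text)
except requests.exceptions.RequestException as e:
    # Covers connection errors, timeouts, and bad status codes alike
    print('Failed to retrieve the webpage:', e)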


The following example uses the original script to get the source code of the Google homepage:

import requests

# Define the URL of the website you want to scrape
url = 'https://google.com'

# Send an HTTP GET request to the URL
response = requests.get(url)

# Check if the request was successful (status code 200)
if response.status_code == 200:
    # Print the HTML source code
    print(response.text)
else:
    print('Failed to retrieve the webpage. Status code:', response.status_code)

Output (truncated):

<!DOCTYPE html>
<html itemscope="" itemtype="http://schema.org/WebPage" lang="en">
<head>
  <meta content="text/html; charset=utf-8" http-equiv="Content-Type">
  <meta content="width=device-width, initial-scale=1.0" name="viewport">
  <title>Google</title>
  ...
</head>
<body>
  ...
</body>
</html>
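
Printing the source code is useful for a quick look, but you will often want to keep it for later processing. As a small follow-on sketch (the filename page.html is just an illustrative choice, not part of the original script), you could write response.text to a file:

import requests

url = 'https://example.com'
response = requests.get(url)

if response.status_code == 200:
    # Write the page source to disk, using the detected encoding if any
    with open('page.html', 'w', encoding=response.encoding or 'utf-8') as f:
        f.write(response.text)
    print('Saved the page source to page.html')
else:
    print('Failed to retrieve the webpage. Status code:', response.status_code)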

Conclusion

Web scraping with Python's Requests library is a straightforward and effective way to access the source code of a website. However, it's essential to be aware of website terms of service and legal considerations when scraping websites. Always ensure that your web scraping activities are ethical and comply with the website's policies. With Requests, you have a powerful tool at your disposal to gather data, automate tasks, and explore the vast world of web content. Happy scraping!

