How to Use URLs in Python

Feb 20, 2024 · 2 min read

URLs (Uniform Resource Locators) are everywhere on the web. As a Python developer, you'll likely need to work with URLs to scrape web pages, interact with APIs, download files, and more. Here's a practical introduction to using URLs in Python.

Parsing URLs

Python has a built-in urllib.parse module that makes it easy to dissect and manipulate URLs.

For example, we can break a URL down into its individual components:

from urllib.parse import urlparse

url = 'https://www.example.com/path/to/file.html?key1=val1&key2=val2#SomewhereInTheDocument'

parsed_url = urlparse(url)
print(parsed_url)

This prints out a ParseResult object with the different parts of the URL:

ParseResult(scheme='https', netloc='www.example.com', path='/path/to/file.html', params='', query='key1=val1&key2=val2', fragment='SomewhereInTheDocument')

We can access the individual components as attributes such as scheme, netloc, path, and query, which makes working with URLs in Python straightforward.
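Building on the ParseResult above, the same module's parse_qs function turns the raw query string into a dictionary, which is usually more convenient than splitting it by hand:

```python
from urllib.parse import urlparse, parse_qs

url = 'https://www.example.com/path/to/file.html?key1=val1&key2=val2#SomewhereInTheDocument'
parsed_url = urlparse(url)

# Components are available as attributes on the ParseResult
print(parsed_url.scheme)   # https
print(parsed_url.netloc)   # www.example.com

# parse_qs maps each parameter name to a list of values
# (a list, because the same key can appear more than once)
params = parse_qs(parsed_url.query)
print(params)              # {'key1': ['val1'], 'key2': ['val2']}
```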

Downloading Files from URLs

A common task is to download a file from a URL. We can use the urllib.request module:

import urllib.request

url = 'https://www.example.com/files/report.pdf'
urllib.request.urlretrieve(url, 'report.pdf')

This downloads the file from the URL and saves it locally as report.pdf. Note that urlretrieve belongs to urllib's legacy interface, so for new code you may prefer urllib.request.urlopen or a third-party library such as requests.

We can also customize the filename, display a progress bar, handle redirects, catch errors, and much more.
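As one example of catching errors, here is a sketch of the download wrapped in exception handling (the URL is the same placeholder as above; urllib raises HTTPError for bad status codes and URLError for connection failures):

```python
import urllib.error
import urllib.request

url = 'https://www.example.com/files/report.pdf'  # placeholder URL

try:
    # urlretrieve returns the local path and the response headers
    local_path, headers = urllib.request.urlretrieve(url, 'report.pdf')
    print(f'Saved to {local_path}')
except urllib.error.HTTPError as e:
    # The server responded, but with an error status (404, 500, ...)
    print(f'Server returned {e.code}: {e.reason}')
except urllib.error.URLError as e:
    # The server could not be reached at all (DNS failure, refused, ...)
    print(f'Failed to reach server: {e.reason}')
```

Catching HTTPError before URLError matters: HTTPError is a subclass of URLError, so the more specific handler must come first.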

Interacting with APIs

Many web APIs accept URL query parameters to filter data and return JSON or XML responses. The third-party requests library (installed with pip install requests) simplifies these calls:

import requests

url = 'https://api.example.com/data?date=20200101&limit=50'
response = requests.get(url)
print(response.json())

The requests module makes it very easy to call APIs using URLs and get back structured data.
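Rather than building the query string by hand as above, requests can encode it for you from a dict via the params argument (the endpoint here is the same placeholder as in the snippet above):

```python
import requests

url = 'https://api.example.com/data'  # placeholder endpoint
params = {'date': '20200101', 'limit': 50}

# requests URL-encodes the dict into ?date=20200101&limit=50
response = requests.get(url, params=params, timeout=10)
response.raise_for_status()  # raise an exception on 4xx/5xx status codes
print(response.json())
```

Passing a dict avoids manual escaping of special characters in parameter values, which requests handles automatically.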

In summary, Python has great URL handling capabilities out of the box. Whether you need to parse URLs, download files, call web APIs, or interact with websites, Python has you covered!
