Downloading Files with Python Requests - Tips, Tricks and Code Example

Oct 31, 2023 · 7 min read

Downloading files from the web is a common task in many Python programs. While you can always use bare-bones HTTP clients like the standard library's urllib, developers often prefer Requests for its simplicity and versatility.

In this comprehensive guide, you'll learn how to use Python Requests to download files from the web with ease. I'll cover the key features of Requests, walk through code examples, and share insider tips and tricks I've picked up over the years. By the end, you'll be able to use Requests to download files like a pro!

Let's get started...

Why Use Requests for Downloading Files?

Before we dive in, you might wonder - why use Requests instead of other HTTP clients? Here are some key advantages:

  • Simplicity - Requests provides an elegant and simple API for making HTTP calls in Python. Just import and start making requests with an intuitive syntax.
  • Full-featured - While simple to use, Requests offers advanced capabilities like sessions, custom headers, cookies, authentication, streaming downloads, connection pooling and more.
  • User-friendly - Requests removes much of the boilerplate required when using urllib and other HTTP clients. Focus on your application logic rather than nitty-gritty network details.
  • Actively maintained - With over 13 million downloads per month, Requests has a large community supporting its ongoing development.

In summary, Requests makes downloading files feel almost as easy as using a browser, while exposing the full power of an HTTP client library. Read on to see it in action!

    Getting Started with Requests

    Before downloading anything, you first need to install and import Requests:

    pip install requests
    

    Then import it in your Python script:

    import requests
    

    When working with Requests, it's good practice to use a Session object. This manages things like cookies and connection pooling behind the scenes:

    session = requests.Session()
    

    With that, you're ready to start making requests to download files!

    Making GET Requests to Download Files

    The foundation of downloading files with Requests is making a GET request to a URL and accessing the response.

    Let's walk through a simple example:

    import requests
    
    session = requests.Session()
    
    url = 'https://myfiletosite.com/example.zip'
    
    response = session.get(url)
    

    This makes a GET request to the URL. If it's successful, you get back a Response object containing the file data.
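
    One simple way to confirm the request actually succeeded before saving anything - raise_for_status() raises an exception for 4xx and 5xx responses:
    
    # Stop early if the server returned an error status such as 404 or 500
    response.raise_for_status()
    
    print(f"Status {response.status_code}, {len(response.content)} bytes received")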

    Now let's look at how to actually download and save the file.

    Saving Downloaded Files

    To download a file from a GET request, you access the response content and write it to a local file:

    with open('example.zip', 'wb') as f:
      f.write(response.content)
    

    The key things to notice:

  • Open the file in binary write mode ('wb')
  • Use response.content to access the raw file bytes from the response
  • Write the content to the opened file object

    And that's it - you've downloaded the file! The same pattern works for images, documents, zip archives, videos, or any other downloadable file.
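
    If you don't know a good filename ahead of time, servers often suggest one via the Content-Disposition response header. Here's a minimal sketch that checks for it and falls back to the last path segment of the URL (the parsing is deliberately naive):
    
    # Prefer the server-suggested name, fall back to the URL's last path segment
    disposition = response.headers.get('Content-Disposition', '')
    if 'filename=' in disposition:
      filename = disposition.split('filename=')[-1].strip('"')
    else:
      filename = url.split('/')[-1]
    
    with open(filename, 'wb') as f:
      f.write(response.content)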

    Pretty straightforward, right? Now let's look at some insider tips for downloading files like a pro.

    Pro Tip: Stream Downloads for Large Files

    When downloading large files, you'll want to stream the response body instead of loading it all into memory at once.

    Here's an example streaming a large video file download:

    response = session.get(video_url, stream=True)
    
    with open('python_tutorial.mp4', 'wb') as f:
      for chunk in response.iter_content(chunk_size=1024*1024):
        if chunk:
          f.write(chunk)
    

    Setting stream=True tells Requests to defer downloading the response body; response.iter_content() then yields the data in chunks as it arrives instead of loading the whole file into memory. Here we write it out in 1 MB chunks.

    Streaming keeps memory usage low. Combined with HTTP Range requests, it also makes it possible to resume a download that was interrupted partway through.
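
    Here's a minimal sketch of resuming an interrupted download with a Range request - it assumes the server supports HTTP Range requests and reuses session and video_url from the example above:
    
    import os
    
    filename = 'python_tutorial.mp4'
    
    # Ask the server only for the bytes we don't already have on disk
    resume_from = os.path.getsize(filename) if os.path.exists(filename) else 0
    headers = {'Range': f'bytes={resume_from}-'}
    
    response = session.get(video_url, headers=headers, stream=True)
    
    # 206 Partial Content means the server honored the Range header;
    # anything else means we should rewrite the file from the start
    mode = 'ab' if response.status_code == 206 else 'wb'
    with open(filename, mode) as f:
      for chunk in response.iter_content(chunk_size=1024*1024):
        f.write(chunk)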

    Pro Tip: Speed Up Downloads with Async Requests

    When downloading multiple files, you can speed things up by fetching them concurrently. Requests itself is synchronous, so the example below uses the aiohttp library together with Python's asyncio module to make non-blocking requests in parallel.

    Here's an example with three file downloads:

    import asyncio
    import aiohttp
    
    async def download_file(session, url):
      async with session.get(url) as response:
        # Derive a local filename from the URL and save the downloaded bytes
        filename = url.split('/')[-1]
        with open(filename, 'wb') as f:
          f.write(await response.read())
    
    async def main(urls):
      # One shared client session; gather runs the downloads concurrently
      async with aiohttp.ClientSession() as session:
        await asyncio.gather(*[download_file(session, url) for url in urls])
    
    # Replace these placeholders with the actual file URLs
    urls = ['file1.zip', 'file2.zip', 'file3.zip']
    
    asyncio.run(main(urls))
    

    By running the downloads asynchronously, you can achieve parallelism and higher throughput when fetching multiple files.

    Real-World Example: Scraping an Image Gallery

    Now let's look at a real-world example of downloading all images from a gallery page:

    import requests
    from bs4 import BeautifulSoup
    
    session = requests.Session()
    
    page = session.get('http://example.com/gallery')
    soup = BeautifulSoup(page.text, 'html.parser')
    
    img_tags = soup.find_all('img')
    
    urls = [img['src'] for img in img_tags]
    
    for url in urls:
      # Parse the filename from the URL
      filename = url.split('/')[-1]
    
      response = session.get(url)
    
      with open(filename, 'wb') as f:
        f.write(response.content)
    

    Here we first scrape the page to find all <img> tags, then extract the src URLs, and finally download each image, saving it with the filename parsed from the URL.

    This demonstrates how Requests can power file downloads in web scraping and data collection projects.
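
    One caveat: src attributes are often relative (for example /images/photo1.jpg), and session.get() needs absolute URLs. Here's a minimal sketch using urllib.parse.urljoin to resolve them against the gallery page URL before downloading:
    
    from urllib.parse import urljoin
    
    base_url = 'http://example.com/gallery'
    
    # Resolve relative src values like '/images/photo1.jpg' against the page URL
    urls = [urljoin(base_url, img['src']) for img in img_tags]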

    Authenticating to Download Protected Files

    Some files you want to download may be behind authentication. Requests makes it easy to log in and access protected resources.

    For basic auth, just provide a tuple of username/password when making the request:

    response = session.get(url, auth=('user', 'password123'))
    

    For token-based authentication, such as an OAuth bearer token, you can include the token in the request headers:

    token = 'abc123token'
    
    headers = {'Authorization': f'Bearer {token}'}
    
    response = session.get(url, headers=headers)
    

    Requests can accommodate just about any authentication scheme - plug in the credentials or tokens when creating the request, or define a custom auth class for anything more exotic.
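
    Custom schemes are handled by subclassing requests.auth.AuthBase. Here's a minimal sketch that attaches a token to every request - the X-API-Token header name is just a made-up example, and session and url are reused from earlier:
    
    from requests.auth import AuthBase
    
    class TokenAuth(AuthBase):
      def __init__(self, token):
        self.token = token
    
      def __call__(self, request):
        # Requests calls this with the prepared request just before sending it
        request.headers['X-API-Token'] = self.token
        return request
    
    response = session.get(url, auth=TokenAuth('abc123token'))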

    Managing State with Sessions

    I briefly mentioned Sessions earlier, but why are they useful?

    Mainly because they offer stateful persistence between requests. This includes things like:

  • Cookie handling - storing cookies from responses
  • Connection pooling - reusing connections to the same host
  • Cached TLS sessions - avoiding expensive TLS handshakes

    Sessions also let you set defaults like headers, authentication, and query parameters once and have them applied automatically to every request made through the session.

    For most programs, using a Session provides efficiency and convenience benefits without much extra work.
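
    For example, here's a short sketch of configuring session-wide defaults once - the header value and credentials are just placeholders:
    
    session = requests.Session()
    
    # These defaults are merged into every request made through this session
    session.headers.update({'User-Agent': 'my-downloader/1.0'})
    session.auth = ('user', 'password123')
    
    # Cookies set by earlier responses are re-sent automatically on later requests
    response = session.get('https://myfiletosite.com/example.zip')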

    Debugging Requests Problems

    Despite its simplicity, you may occasionally run into issues using Requests:

  • Connection errors
  • HTTP errors like 403 Forbidden
  • Incorrect redirects
  • Changing APIs

    Luckily, Requests provides powerful debugging tools to help identify and resolve problems.

    If a request fails, you can check response.status_code to see the HTTP status. response.reason gives the status text. And response.headers shows the full headers with additional context.

    Network-level errors raise requests.exceptions.RequestException (or one of its subclasses, such as ConnectionError or Timeout) - you can catch it and inspect the exception, including its response attribute when a response was received.
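
    A minimal pattern for handling these errors around a download (reusing session and url from earlier):
    
    try:
      response = session.get(url, timeout=30)
      response.raise_for_status()
    except requests.exceptions.HTTPError as e:
      print(f"Server returned {e.response.status_code}: {e.response.reason}")
    except requests.exceptions.RequestException as e:
      print(f"Request failed: {e}")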

    Finally, for full wire-level visibility you can enable debug logging for the underlying urllib3 library, which logs each connection, request, and response status as it happens.
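
    Here's a short sketch of turning that logging on - it uses only the standard library's logging and http.client modules:
    
    import logging
    import http.client
    
    # Print raw request and response headers from the underlying HTTP connection
    http.client.HTTPConnection.debuglevel = 1
    
    # Surface urllib3's DEBUG-level messages (connections, requests, status codes)
    logging.basicConfig(level=logging.DEBUG)
    
    response = session.get(url)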

    With judicious debugging, you can diagnose most issues that crop up when downloading files.

    Choosing the Right Requests Approach

    Hopefully by now you have a solid grasp of downloading files with Requests!

    As we wrap up, I want to share guidance on how to choose the right Requests approach:

  • For small, one-off downloads, the basic API is perfect
  • When downloading giant files, enable streaming
  • If downloading multiple files in parallel, use asynchronous requests
  • Handle redirects and authentication as needed for your use case
  • Leverage sessions for performance and reliability
  • Debug carefully - Requests provides the tools!

    Adopting these best practices will ensure you use Requests most effectively.

    Key Takeaways

    Here are the key things to remember:

  • Use GET requests to retrieve files from URLs with Requests
  • Access response.content and write binary data to download files
  • Enable streaming for efficient large file downloads
  • Make requests asynchronous to speed up multiple file downloads
  • Utilize sessions for performance, cookies and TLS connection pooling
  • Debug meticulously with status codes and logging to fix problems

    That wraps up this comprehensive guide on downloading files with Python Requests!

    For more techniques, be sure to check out the official Requests documentation.

    Happy downloading!
