Setting Cookies in aiohttp Requests

Mar 3, 2024 · 2 min read

When making requests with the Python aiohttp library, you may want to set cookies to handle sessions, authorization, or preferences. Cookies let the server store small pieces of data on the client and read them back later to recognize the client or maintain state.

To set a cookie in an aiohttp request, you create a cookie jar and attach it to the client session. Here is an example:

import asyncio
import aiohttp

async def main():
    # unsafe=True lets the jar accept cookies from IP-address hosts
    cookie_jar = aiohttp.CookieJar(unsafe=True)
    async with aiohttp.ClientSession(cookie_jar=cookie_jar) as session:
        session.cookie_jar.update_cookies({'user': 'john'})
        async with session.get('http://example.com') as response:
            # the 'user' cookie was sent in the request's Cookie header;
            # response.cookies only holds cookies the *server* sets, if any
            print(response.status)

asyncio.run(main())

The key steps are:

  1. Create a CookieJar object; unsafe=True lets the jar accept cookies from IP-address hosts, which the default jar rejects
  2. When creating the ClientSession, pass the cookie jar to attach it
  3. Update the cookie jar directly using update_cookies() and pass a dict of key/values
  4. The cookies in the jar are sent automatically with each request; note that response.cookies contains only cookies the server sets in its response, not the ones you sent

Some tips for working with cookies:

  • Check the server's documentation for the exact cookie names it expects
  • Cookies without an expiry are session cookies and vanish with the jar; set an expiration (or persist the jar with CookieJar.save()/load()) if they need to survive restarts
  • Be careful not to clear the wrong cookies accidentally — CookieJar.clear() wipes the entire jar
  • Treat authentication cookies as credentials: prefer them for public API access rather than admin sites, and send them only over HTTPS
  • Here is an example fetching a page that requires a cookie-based login (reusing the session and jar from the earlier example):

    session.cookie_jar.update_cookies({'sessionid': 'abcdef12345'})
    async with session.get('http://example.com/private') as resp:
        if resp.status == 200:
            print("Login succeeded!")
        else:
            print("Cookie authentication failed")

This allows easy automation and scripting for sites that use cookie-based sessions.

Setting cookies appropriately is important for testing sites that rely on them for user tracking or access control. aiohttp handles cookies seamlessly, so you can focus on supplying the right values for your use case.
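Cookies can also be supplied for a single request through the cookies= argument of the request methods, without storing them in the session's jar. Here is a self-contained sketch that stands up a throwaway local aiohttp server to echo back the Cookie header it receives, so it runs without a real site (the echo handler, port choice, and cookie value are illustrative):

```python
import asyncio

import aiohttp
from aiohttp import web

async def echo_cookie(request):
    # Echo back the raw Cookie header so we can see what the client sent
    return web.Response(text=request.headers.get("Cookie", ""))

async def main():
    # Throwaway local server standing in for a real site
    app = web.Application()
    app.router.add_get("/", echo_cookie)
    runner = web.AppRunner(app)
    await runner.setup()
    site = web.TCPSite(runner, "127.0.0.1", 0)  # port 0: pick any free port
    await site.start()
    host, port = runner.addresses[0][:2]

    try:
        async with aiohttp.ClientSession() as session:
            # cookies= sends these cookies with this request only,
            # without adding them to the session's cookie jar
            async with session.get(f"http://{host}:{port}/",
                                   cookies={"sessionid": "abcdef12345"}) as resp:
                return await resp.text()
    finally:
        await runner.cleanup()

echoed = asyncio.run(main())
print(echoed)  # sessionid=abcdef12345
```

This per-request form is handy when different requests in the same session need different credentials.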
