Making the Most of asyncio's run_until_complete()

Mar 25, 2024 · 3 min read

The event loop's run_until_complete() method is very useful for running asyncio code, but it has some nuances worth understanding to use it effectively.

What run_until_complete Does

The run_until_complete() method of an event loop takes a coroutine (or any future), runs it until it completes, and returns the result. Note that it is a loop method, not a module-level function. For example:

import asyncio

async def my_coro():
    return "result"

loop = asyncio.new_event_loop()
result = loop.run_until_complete(my_coro())
loop.close()
print(result)  # Prints "result"

This lets you drive a coroutine to completion from synchronous code while keeping full control of the event loop's lifecycle.
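If you do not need to manage the loop yourself, asyncio.run() (Python 3.7+) wraps the same pattern in one call. A minimal sketch:

```python
import asyncio

async def my_coro():
    return "result"

# asyncio.run() creates a fresh event loop, runs the coroutine
# to completion, closes the loop, and returns the result.
result = asyncio.run(my_coro())
print(result)  # Prints "result"
```

For most scripts this is the simpler choice; run_until_complete() matters when you want to reuse one loop across several calls.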

Common Pitfalls

However, there are two common pitfalls when using run_until_complete():

1. Calling blocking code

run_until_complete() runs the event loop in the calling thread. So if your coroutine contains blocking code like file I/O or CPU-intensive work, it will block the event loop and prevent other asynchronous tasks from making progress:

# Blocks the event loop
loop.run_until_complete(do_blocking_io())

To avoid this, use asyncio.to_thread() (Python 3.9+) or loop.run_in_executor() so the blocking code runs in a thread pool instead.
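Here is a minimal sketch of the to_thread() approach; do_blocking_io() is a stand-in for whatever blocking call you actually have:

```python
import asyncio
import time

def do_blocking_io():
    # Stand-in for real blocking work (file I/O, a requests call, etc.)
    time.sleep(0.1)
    return "io done"

async def main():
    # asyncio.to_thread() (Python 3.9+) runs the blocking function in a
    # worker thread, so the event loop stays free for other tasks.
    result = await asyncio.to_thread(do_blocking_io)
    print(result)
    return result

result = asyncio.run(main())
```

On older Pythons, `await loop.run_in_executor(None, do_blocking_io)` achieves the same thing.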

2. Forgetting to await coroutines

Any tasks you spawn need to be awaited; otherwise run_until_complete() returns as soon as the main coroutine finishes, leaving them pending:

async def main():
    asyncio.create_task(some_coro())  # Never awaited

loop = asyncio.new_event_loop()
loop.run_until_complete(main())  # Returns without awaiting some_coro

Always await launched tasks, or use asyncio.gather() or asyncio.wait() to wait on them explicitly.
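The fixed version is a one-line change: keep a reference to the task and await it before main() returns. A sketch, with some_coro() as a placeholder:

```python
import asyncio

async def some_coro():
    await asyncio.sleep(0.1)
    return "some_coro finished"

async def main():
    task = asyncio.create_task(some_coro())
    # Awaiting the task guarantees it completes before main() returns,
    # so run_until_complete()/asyncio.run() will not exit early.
    result = await task
    print(result)
    return result

outcome = asyncio.run(main())
```

With several background tasks, `await asyncio.gather(*tasks)` does the same in one call.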

Tips for Effective Usage

Here are some tips:

  • Prefer asyncio.run() for short-running scripts; use run_until_complete() when you need to manage the event loop yourself, such as reusing one loop across multiple calls.
  • Make sure to await all spawned tasks before exiting.
  • Use asyncio.create_task() rather than a bare some_coro() call to spawn background tasks (a bare call only creates a coroutine object; it never runs).
  • Catch and handle exceptions from coroutines.
  • Use return rather than sys.exit() to exit, so finally blocks run.
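On the exception-handling tip: awaiting a task re-raises any exception the coroutine raised, which is the natural place to catch it. A minimal sketch with a hypothetical might_fail() coroutine:

```python
import asyncio

async def might_fail():
    raise ValueError("boom")

async def main():
    task = asyncio.create_task(might_fail())
    try:
        # Awaiting the task re-raises the coroutine's exception here,
        # where we can handle it instead of losing it at shutdown as a
        # "Task exception was never retrieved" warning.
        await task
    except ValueError as exc:
        print(f"Handled: {exc}")
        return str(exc)

handled = asyncio.run(main())
```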
Practical Example

Here is an example fetching two web pages "correctly":

import asyncio

async def fetch(url):
    print(f"Fetching {url}")
    # Pretend we do I/O here
    await asyncio.sleep(2)
    return f"{url} result"

async def main():
    print("Starting")
    task1 = asyncio.create_task(fetch("url1"))
    task2 = asyncio.create_task(fetch("url2"))

    results = await asyncio.gather(task1, task2)
    print(results)

    print("Done")

if __name__ == "__main__":
    asyncio.run(main())

This runs both fetches concurrently, waits for them to complete, and exits cleanly.

The key points are:

  • Coroutines launched with create_task()
  • Tasks awaited with await asyncio.gather()
  • asyncio.run() used so loop creation and cleanup are handled for you
  • Clean exit by returning from main()

Hopefully this gives you a better understanding of how to use run_until_complete() effectively.
