
maerteijn/django-sync-or-async


Django sync or async, that's the question

Test the performance and concurrency processing of a Django view calling an "external" API with the following servers:

  • uWSGI (WSGI)
  • uWSGI with Gevent (WSGI)
  • Gunicorn (gthread) (WSGI)
  • Gunicorn with Gevent (WSGI)
  • Uvicorn (ASGI)

View calling an "external" API

We are testing a Django view which will call an "external" API several times.

We'll use the httpx package for this, as it provides both a sync and async API.

The "external" API, which runs locally with uvicorn and uses asyncio.sleep to simulate latency, selects a random country from a predefined list:

```python
import asyncio
import json
import random

from django.http import JsonResponse

exampledata = json.load(open(f"{current_dir}/assets/countries.json"))
API_ENDPOINT = "http://localhost:5000"


async def api(request, ms=300):
    await asyncio.sleep(delay=ms / 1000)
    return JsonResponse(random.choice(exampledata))
```

Overview

We will test the performance implications with the following configurations:


                         ┌─────────────────────────────┐
    ┌────────────────────┤ uwsgi-2-threads (:8000)     │
    │                    │ (1 process, 2 threads)      │
    │                    └─────────────────────────────┘
    │                    ┌─────────────────────────────┐
    │  ┌─────────────────┤ uwsgi-100-threads (:8001)   │
    │  │                 │ (1 process, 100 threads)    │
    │  │                 └─────────────────────────────┘
    │  │                 ┌─────────────────────────────┐
    ▼  ▼                 │ uwsgi-gevent (:8002)        │
 ┌───────────┐   ┌───────┤ (1 process, 100 "workers")  │
 │API (:5000)│◄──┘       └─────────────────────────────┘
 │ (uvicorn) │◄──┐       ┌─────────────────────────────┐
 └───────────┘   └───────┤ gunicorn-100-threads (:8003)│
    ▲   ▲                │ (1 process, 100 threads)    │
    │   │                └─────────────────────────────┘
    │   │                ┌─────────────────────────────┐
    │   └────────────────┤ gunicorn-gevent (:8004)     │
    │                    │(1 process, 100 "workers")   │
    │                    └─────────────────────────────┘
    │                    ┌─────────────────────────────┐
    └────────────────────┤ uvicorn (:8005)             │
                         │ (1 process)                 │
                         └─────────────────────────────┘

Concurrency

The view which will be benchmarked calls our "really slow" external API three times, so we can also verify that the calls are executed in parallel instead of sequentially. With the default `ms=300`, the three calls take roughly 300ms, 600ms and 150ms: executed in parallel, the view should respond in about 600ms (the slowest call) rather than the ~1050ms the calls would take back to back. To achieve this we use a ThreadPoolExecutor for the sync view:

```python
from concurrent.futures import ThreadPoolExecutor

import httpx
from django.shortcuts import render


def sync_view(request, ms=300):
    api_urls = (
        f"{API_ENDPOINT}/{ms}/",
        f"{API_ENDPOINT}/{ms*2}/",
        f"{API_ENDPOINT}/{int(ms/2)}/",
    )
    with timeit() as t:  # timeit() is a small timing context manager from this repo
        client = httpx.Client()
        with ThreadPoolExecutor() as executor:
            futures = executor.map(lambda url: client.get(url), api_urls)
            country = next(futures).json()
    return render(
        request,
        "django_sync_or_async/index.html",
        dict(country=country, time=t),
    )
```
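As a standalone sketch (outside Django, with a hypothetical `fake_api_call` standing in for the HTTP request), the ThreadPoolExecutor pattern makes the total wall time track the slowest call rather than the sum of all calls:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for client.get(): just sleep for the requested delay.
def fake_api_call(ms):
    time.sleep(ms / 1000)
    return ms

delays = (300, 600, 150)  # same pattern as the view with ms=300

start = time.monotonic()
with ThreadPoolExecutor() as executor:
    results = list(executor.map(fake_api_call, delays))
elapsed = time.monotonic() - start

# results == [300, 600, 150]; elapsed is ~0.6s (slowest call), not ~1.05s (sum)
```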

Note

The standard ThreadPoolExecutor with actual system threads is used for the uwsgi-2-threads, uwsgi-100-threads and gunicorn-100-threads configurations. With gevent, the threading primitives are monkey-patched to be cooperative, so the ThreadPoolExecutor spawns greenlets instead of OS threads.
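The gevent behaviour can be sketched in isolation (assuming gevent is installed); after monkey-patching, the same executor pattern runs on greenlets:

```python
from gevent import monkey
monkey.patch_all()  # patch threading/time before anything else imports them

import time
from concurrent.futures import ThreadPoolExecutor

def call(ms):
    time.sleep(ms / 1000)  # patched sleep yields to the gevent hub
    return ms

start = time.monotonic()
with ThreadPoolExecutor() as executor:
    # The "threads" are now greenlets, so the three calls overlap cooperatively
    results = list(executor.map(call, (300, 600, 150)))
elapsed = time.monotonic() - start
# elapsed is ~0.6s, just as in the real-threads case
```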

For the uvicorn version (ASGI), the parallel calls are implemented with asyncio.gather:

```python
import asyncio

import httpx
from django.shortcuts import render


async def async_view(request, ms=300):
    api_urls = (
        f"{API_ENDPOINT}/{ms}/",
        f"{API_ENDPOINT}/{ms*2}/",
        f"{API_ENDPOINT}/{int(ms/2)}/",
    )
    with timeit() as t:  # timeit() is a small timing context manager from this repo
        client = httpx.AsyncClient()
        results = await asyncio.gather(*[client.get(url) for url in api_urls])
        await client.aclose()
        country = results[0].json()
    return render(
        request,
        "django_sync_or_async/index.html",
        dict(country=country, time=t),
    )
```
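The same parallelism can be shown in a standalone asyncio sketch (again with a hypothetical `fake_api_call` replacing the HTTP request):

```python
import asyncio
import time

async def fake_api_call(ms):
    await asyncio.sleep(ms / 1000)  # stand-in for client.get()
    return ms

async def main():
    start = time.monotonic()
    # gather schedules all three coroutines concurrently on the event loop
    results = await asyncio.gather(*(fake_api_call(ms) for ms in (300, 600, 150)))
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
# results == [300, 600, 150]; elapsed is ~0.6s, not the ~1.05s sum
```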

Installation

Requirements

  • Python 3.12 (minimum)
  • virtualenv (recommended)

Install the packages

First create a virtualenv in your preferred way, then install all packages with:

make install

Running the services

Make sure you are allowed to have many file descriptors open:

ulimit -n 32768

Now run the supervisor daemon which will start all services:

$ supervisord

This will start the API and all the different WSGI / ASGI services. Press ctrl+c to stop it.

Run the benchmarks

You can run the benchmarks for each server individually by selecting the relevant port number, so the results can be compared afterwards:

For uwsgi-2-threads:

make locust HOST=http://localhost:8000

This will start a locust interface, accessible via http://localhost:8089

For uwsgi-100-threads:

make locust HOST=http://localhost:8001

Etcetera, see the port numbers in the overview.

uwsgitop

You can see detailed information during the benchmarks for the uWSGI processes using uwsgitop:

uwsgitop http://localhost:3030  # <-- for the 1 process 2 threads variant
uwsgitop http://localhost:3031  # <-- for the 1 process 100 threads variant
uwsgitop http://localhost:3032  # <-- for the 1 process gevent variant
