How to profile Django's bottlenecks for scaling?


I am using Django with Tastypie for a REST API.

For profiling, I am using django-silk and below is a summary of requests:

[django-silk request summary screenshot]

How do I profile the complete flow? The time spent outside database queries is (382 - 147) ms on average. How do I find the bottleneck and optimize/scale? I did use @silk_profile() on the get_object_list method of this resource, but even that method doesn't seem to be the bottleneck.
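For reference, this is roughly how I attached the decorator (the resource and model names below are placeholders, not my real ones); the same decorator can wrap other resource methods to see where the remaining time goes:

```
from silk.profiling.profiler import silk_profile
from tastypie.resources import ModelResource

from myapp.models import MyModel  # placeholder app/model


class MyModelResource(ModelResource):
    class Meta:
        queryset = MyModel.objects.all()
        resource_name = 'mymodel'

    @silk_profile(name='MyModelResource.get_object_list')
    def get_object_list(self, request):
        # Timed by django-silk; shows up under this name in the profiling view.
        return super(MyModelResource, self).get_object_list(request)
```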

I used caching to decrease response time, but it didn't help much. What are the other options?
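(For context, by caching I mean something like Django's low-level cache API; the key name, timeout, and the build_payload helper below are purely illustrative.)

```
from django.core.cache import cache


def build_payload(request):
    # Stand-in for whatever expensive work the endpoint really does.
    return {'status': 'ok'}


def cached_payload(request):
    # Illustrative key and 60-second timeout; real code must also invalidate
    # the key whenever the underlying data changes.
    key = 'api:expensive-payload'
    payload = cache.get(key)
    if payload is None:
        payload = build_payload(request)
        cache.set(key, payload, 60)
    return payload
```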

When load testing with loader.io, the peak the server can handle is 1000 requests per 30 seconds, which seems very low. Other than caching (which I already tried), what might help?


There are 2 answers

Tommaso Barbugli

Here's a bunch of suggestions:

  1. bring the number of queries per request down, at least to below 5 (34 per request is really bad); see the ORM sketch after this list
  2. install django-debug-toolbar and look at where the time is spent
  3. run gunicorn or uWSGI behind a reverse proxy (nginx)
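
For point 1, the usual first step with tastypie is to attach select_related() / prefetch_related() to the resource's queryset, so related rows are fetched in a couple of queries instead of one per object. The model and field names below are invented for illustration:

```
from tastypie.resources import ModelResource

from myapp.models import Order  # invented model with FK "customer" and M2M "items"


class OrderResource(ModelResource):
    class Meta:
        # select_related follows the FK in the same SQL query;
        # prefetch_related batches the M2M into one extra query.
        queryset = Order.objects.select_related('customer').prefetch_related('items')
        resource_name = 'order'
```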
dbf
  • You have too many queries. Even if each query is relatively fast, every one costs a round trip to the database, and if you use external cache storage (for example, Redis) connecting to it also takes some time.
  • To investigate the slow parts of the code you have two options:

    • Use a profiler. Profiling on a local PC may not tell you much if the system is distributed across several machines.
    • Add tracing points to your code that record a message and the current time (something like https://gist.github.com/dbf256/0f1d5d7d2c9aa70bce89). Deploy the patched code, run your load-testing tool against it, and check the logs; a simplified sketch of such a tracing point follows below.
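
A minimal version of such a tracing point (my own simplified variant, not the exact code from the gist):

```
import logging
import time
from contextlib import contextmanager

logger = logging.getLogger('tracing')


@contextmanager
def trace(label):
    # Log how long the wrapped block took, tagged with a label you can grep for.
    start = time.time()
    try:
        yield
    finally:
        logger.info('%s took %.1f ms', label, (time.time() - start) * 1000.0)


# Usage inside the code under load test:
# with trace('get_object_list'):
#     objects = super(MyModelResource, self).get_object_list(request)
```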