Django-Filebrowser (Mezzanine) fails to load large Amazon S3 directories on nginx production server


I'm having some issues with Django-Filebrowser and, from what I can tell, they seem to be related to nginx. The primary issue is that Django-Filebrowser fails to load directories containing large numbers of Amazon S3 files in the Mezzanine admin. I have multiple directories with 400+ large audio files (several hundred MB each) hosted on S3, and when I attempt to load one of them in the Mezzanine admin media library, my server returns an nginx 502 (Bad Gateway) error. I didn't have any issues with this until the directories started getting bigger.

It's probably worth noting a few things:

  1. This project is built on the Mezzanine CMS which uses a modified django-filebrowser package
  2. I only use Amazon S3 to serve the media files for the project, all static files are served locally through nginx.
  3. All django-filebrowser functionality works correctly in directories that will actually load.
  4. I created a test directory with 1000 small files and django-filebrowser loads correctly.
  5. For the nginx settings listed below (proxy_buffer_size, proxy_connect_timeout, etc.), I've tested multiple values, multiple times, and I can never get the pages to load consistently.
  6. I've tried adding an additional location block in my nginx conf for "admin/media-library/" with increased timeouts and other settings, but nginx still did not load these large directories correctly.

I believe my primary issue (large S3 directories not loading in the admin) is an nginx issue, since I have no trouble loading these directories in a local environment without nginx. My nginx error log shows the following:

2014/11/24 15:53:25 [error] 30816#0: *1 upstream prematurely closed connection while reading response header from upstream, client: xx.xxx.xxx.xxx, server: server, request: "GET /admin/media-library/browse/ HTTP/1.1", upstream: "http://127.0.0.1:8001/admin/media-library/browse/", host: "server name", referrer: "https://example/admin/"
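From what I've read, this message usually means the application server behind nginx (the process on 127.0.0.1:8001) closed or was killed before it finished responding, not that nginx itself timed out, so raising nginx's proxy_* timeouts alone may not help. If that upstream is a Gunicorn worker (an assumption; I haven't shown the WSGI server config), its default 30-second worker timeout would kill exactly these slow directory-listing requests. A minimal sketch of the relevant knob:

```python
# gunicorn.conf.py -- minimal sketch, ASSUMING the upstream on
# 127.0.0.1:8001 is Gunicorn (not confirmed above). Gunicorn's default
# worker timeout is 30s; a worker killed mid-request produces this exact
# "upstream prematurely closed connection" error in nginx, regardless of
# how high the nginx proxy_* timeouts are set.
bind = "127.0.0.1:8001"
workers = 3       # hypothetical worker count
timeout = 300     # allow slow S3 directory listings up to 5 minutes
```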

I've researched that error, which led me to add these lines to my nginx conf file:

proxy_buffer_size       128k;
proxy_buffers 100       128k;
proxy_busy_buffers_size 256k;
proxy_connect_timeout   75s;
proxy_read_timeout      75s;
client_max_body_size    9999M;
keepalive_timeout       60s;

Despite trying multiple nginx timeout configurations, I'm still stuck exactly where I started: my production server will not load large directories from Amazon S3 through django-filebrowser.
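One thing that may matter more than the nginx numbers: when rendering a directory, django-filebrowser asks the storage backend for each file's size and modification date, and over S3 that can mean one HTTP round-trip per file, so 400 files can easily exceed any worker timeout. Caching the expensive listing would at least limit the cost to the first request. A rough, self-contained sketch of the idea (`slow_s3_listdir` is a hypothetical stand-in for the real storage calls, not part of filebrowser's API):

```python
import time

def slow_s3_listdir(path):
    # Hypothetical stand-in for the real S3 listdir plus per-file
    # metadata calls, which is where the time goes on large directories.
    time.sleep(0.01)
    return ['file%03d.mp3' % i for i in range(400)]

_cache = {}

def cached_listdir(path, ttl=300):
    """Return a cached listing, refreshing it at most every `ttl` seconds."""
    entry = _cache.get(path)
    if entry and time.time() - entry[0] < ttl:
        return entry[1]
    listing = slow_s3_listdir(path)
    _cache[path] = (time.time(), listing)
    return listing
```

With this pattern only the first admin request pays the S3 round-trips; repeat visits within the TTL are served from memory.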

Here are some other lines from settings/conf files that are relevant.

settings.py

DEFAULT_FILE_STORAGE = 's3utils.S3MediaStorage'
AWS_S3_SECURE_URLS = True     # use https instead of http
AWS_QUERYSTRING_AUTH = False     # don't add complex authentication-related query parameters for requests
AWS_PRELOAD_METADATA = True
AWS_S3_ACCESS_KEY_ID = 'key'     # enter your access key id
AWS_S3_SECRET_ACCESS_KEY = 'secret key' # enter your secret access key
AWS_STORAGE_BUCKET_NAME = 'bucket'
AWS_S3_CUSTOM_DOMAIN = 's3.amazonaws.com/bucket'
S3_URL = 'https://s3.amazonaws.com/bucket/'
MEDIA_URL = S3_URL + 'media/'
MEDIA_ROOT = 'media/uploads/'
FILEBROWSER_DIRECTORY = 'uploads'
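s3utils.py isn't shown above; the S3MediaStorage it names is typically a thin subclass of django-storages' S3 backend along these lines (an assumption, since the actual file may differ). AWS_PRELOAD_METADATA = True is the important setting for this problem: it makes django-storages fetch the bucket listing once up front and answer size()/modified_time() from that cache, instead of issuing one request per file.

```python
# s3utils.py -- hypothetical sketch of the storage class referenced by
# DEFAULT_FILE_STORAGE; the real file is not shown in the question.
from storages.backends.s3boto import S3BotoStorage

class S3MediaStorage(S3BotoStorage):
    # S3BotoStorage already honours AWS_PRELOAD_METADATA from settings;
    # a subclass like this mainly pins down where media keys live.
    location = 'media'
    preload_metadata = True   # mirrors AWS_PRELOAD_METADATA above
```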

/etc/nginx/sites-enabled/production.conf

    upstream name {
        server 127.0.0.1:8001;
    }

    server {
        listen 80;
        server_name www.example.com;
        rewrite ^(.*) http://example.com$1 permanent;
    }

    server {

        listen 80;
        listen 443 default ssl;
        server_name example.com;
        client_max_body_size 999M;
        keepalive_timeout    60;

        ssl on;
        ssl_certificate      /etc/nginx/ssl/cert.crt;
        ssl_certificate_key  /etc/nginx/ssl/key.key;
        ssl_session_cache    shared:SSL:10m;
        ssl_session_timeout  10m;
        ssl_ciphers RC4:HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        location / {
            proxy_redirect      off;
            proxy_set_header    Host                    $host;
            proxy_set_header    X-Real-IP               $remote_addr;
            proxy_set_header    X-Forwarded-For         $proxy_add_x_forwarded_for;
            proxy_set_header    X-Forwarded-Protocol    $scheme;
            proxy_pass          http://name;
            add_header          X-Frame-Options         "SAMEORIGIN";
            proxy_buffer_size       128k;
            proxy_buffers 100       128k;
            proxy_busy_buffers_size 256k;
            proxy_connect_timeout   75s;
            proxy_read_timeout      75s;
            client_max_body_size    9999M;
            keepalive_timeout       60s;
        }

        location /static/ {
            root            /path/to/static;
        }

        location /robots.txt {
            root            /path/to/robots;
            access_log      off;
            log_not_found   off;
        }

        location /favicon.ico {
            root            /path/to/favicon;
            access_log      off;
            log_not_found   off;
        }
    }

Is this even an nginx issue? If so, does anyone have any suggestions for resolving this error? If not, what am I missing that would cause timeouts only on these large directories?

Is there a better way to approach this problem than my current setup?

Any help would be greatly appreciated.

Thanks
