Connection reset by peer when using s3, boto, django-storage for static files


I'm trying to switch to Amazon S3 to host the static files for our Django project. I am using django, boto, django-storage and django-compressor. When I run collectstatic on my dev server, I get the error

socket.error: [Errno 104] Connection reset by peer 

The total size of my static files is 74 MB, which doesn't seem too large. Has anyone seen this before, or does anyone have any debugging tips?

Here is the full trace.

Traceback (most recent call last):
  File "./manage.py", line 10, in <module>
    execute_from_command_line(sys.argv)
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 443, in execute_from_command_line
    utility.execute()
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 382, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 196, in run_from_argv
    self.execute(*args, **options.__dict__)
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 232, in execute
    output = self.handle(*args, **options)
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 371, in handle
    return self.handle_noargs(**options)
  File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 163, in handle_noargs
    collected = self.collect()
  File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 113, in collect
    handler(path, prefixed_path, storage)
  File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 303, in copy_file
    self.storage.save(prefixed_path, source_file)
  File "/usr/local/lib/python2.7/dist-packages/django/core/files/storage.py", line 45, in save
    name = self._save(name, content)
  File "/usr/local/lib/python2.7/dist-packages/storages/backends/s3boto.py", line 392, in _save
    self._save_content(key, content, headers=headers)
  File "/usr/local/lib/python2.7/dist-packages/storages/backends/s3boto.py", line 403, in _save_content
    rewind=True, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 1222, in set_contents_from_file
    chunked_transfer=chunked_transfer, size=size)
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 714, in send_file
    chunked_transfer=chunked_transfer, size=size)
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 890, in _send_file_internal
    query_args=query_args
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 547, in make_request
    retry_handler=retry_handler
  File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 966, in make_request
    retry_handler=retry_handler)
  File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 927, in _mexe
    raise e
socket.error: [Errno 104] Connection reset by peer

UPDATE: I still don't know how to debug this error, but it later just stopped happening, which makes me think it may have had something to do with S3 itself.
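
For reference, the failing call can be reproduced outside of collectstatic with a plain boto upload and boto's wire logging turned on; below is a minimal sketch, assuming boto can pick up the same AWS keys as settings.py (the bucket name and file paths are placeholders):

    # Sketch: reproduce a single upload outside Django with boto's debug logging on.
    # The bucket name and file paths are placeholders.
    import boto
    from boto.s3.key import Key

    boto.set_stream_logger('boto')      # dump boto's HTTP traffic to stderr
    conn = boto.connect_s3()            # reads AWS keys from the environment/boto config
    bucket = conn.get_bucket('my-static-bucket')
    key = Key(bucket, 'css/base.css')
    key.set_contents_from_filename('static/css/base.css')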

There are 4 answers

Luca Gibelli

tl;dr

If your bucket is not in the default region, you need to tell boto which region to connect to, e.g. if your bucket is in us-west-2 you need to add the following line to settings.py:

 AWS_S3_HOST = 's3-us-west-2.amazonaws.com'

Long explanation:

It's not a permission problem and you should not set your bucket permissions to 'Authenticated users'.

This problem happens if you create your bucket in a region which is not the default one (in my case I was using us-west-2).

If you don't use the default region and you don't tell boto in which region your bucket resides, boto will connect to the default region and S3 will reply with a 307 redirect to the region where the bucket belongs.

Unfortunately, due to this bug in boto:

https://github.com/boto/boto/issues/2207

if the 307 reply arrives before boto has finished uploading the file, boto won't see the redirect and will keep uploading to the default region. Eventually S3 closes the socket, resulting in a 'Connection reset by peer'.

It's a race condition that depends on the size of the object being uploaded and the speed of your internet connection, which explains why it seems to happen at random.
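
For what it's worth, you can ask S3 which region a bucket actually lives in and derive the matching AWS_S3_HOST from that; here is a rough sketch with boto (the bucket name is a placeholder, and the credentials need permission to call GetBucketLocation):

    # Sketch: look up the bucket's region and build the matching AWS_S3_HOST value.
    # The bucket name is a placeholder.
    import boto

    conn = boto.connect_s3()
    location = conn.get_bucket('my-static-bucket').get_location()
    # get_location() returns '' for US Standard (us-east-1), otherwise the region name
    host = 's3.amazonaws.com' if not location else 's3-%s.amazonaws.com' % location
    print(host)   # e.g. s3-us-west-2.amazonaws.com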

There are two possible reasons why the OP stopped seeing the error after some time:

- he later created a new bucket in the default region and the problem went away by itself. 
- he started uploading only small files, which are fast enough to be fully uploaded by the time S3 replies with the 307.
Brendan W

I just had this issue trying to set up a second S3 bucket to use for testing/devel and what helped was deploying an older version of the application.

I have no clue why that would help, but for those of you reading this way after the fact (like me, a couple hours ago), it's worth trying to deploy a different application version.

t_io

You have to set your bucket permissions to Authenticated Users List + Upload/Delete, or you can create a specific user in the IAM section of AWS and set up access rights only for that specific user.

This helped me some time ago: Setup S3 for Django
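
For reference, once that IAM user exists, the relevant django-storages settings would look roughly like this (all values below are placeholders, not a definitive configuration):

    # settings.py -- sketch only; all values are placeholders
    AWS_ACCESS_KEY_ID = 'AKIA...'                 # keys of the dedicated IAM user
    AWS_SECRET_ACCESS_KEY = '...'
    AWS_STORAGE_BUCKET_NAME = 'my-static-bucket'
    STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'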

pitaside

This issue sometimes occurs when you create a new bucket for the first time; you may have to wait some minutes or hours before you start uploading. I don't know why S3 behaves like that. To see this for yourself, create a new bucket and point your Django storage at it: you will get the connection reset by peer error when you try to upload anything from your Django project, but wait a couple of minutes or hours, try again, and it will work.
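
If you want to check when the new bucket has settled down instead of guessing, a small polling script like the sketch below can help (the bucket name is a placeholder; it just retries a tiny upload until it succeeds):

    # Sketch: poll a newly created bucket until a small test upload succeeds.
    # The bucket name is a placeholder.
    import time
    import boto
    from boto.s3.key import Key

    conn = boto.connect_s3()
    while True:
        try:
            bucket = conn.get_bucket('my-new-static-bucket')
            Key(bucket, '_probe').set_contents_from_string('ok')
            print('bucket is accepting uploads')
            break
        except Exception as exc:  # e.g. socket.error or S3ResponseError while the bucket settles
            print('not ready yet: %s' % exc)
            time.sleep(60)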