s3fs: AWS Message: Access Denied Ubuntu 11.10


I installed s3fs as described here: http://code.google.com/p/s3fs/wiki/InstallationNotes

Then I created an IAM user bucket_user

and put its accessKeyId:secretAccessKey in /etc/passwd-s3fs.

Then in S3 I created a bucket super_bucket

and set its policy:

{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AddCanned",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::234234234234:user/bucket_user"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::super_bucket/*"
        }
    ]
}

Then on my server I ran:

/usr/bin/s3fs super_bucket /mnt/s3/

and received this answer:

s3fs: CURLE_HTTP_RETURNED_ERROR

s3fs: HTTP Error Code: 403

s3fs: AWS Error Code: AccessDenied

s3fs: AWS Message: Access Denied

Version of s3fs being used (s3fs --version): 1.61

Version of fuse being used (pkg-config --modversion fuse): 2.8.4

System information (uname -a): Linux Ubuntu-1110-oneiric-64-minimal 3.0.0-14-server #23-Ubuntu SMP Mon Nov 21 20:49:05 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux

Distro (cat /etc/issue): Ubuntu 11.10 \n \l

s3fs syslog messages (grep s3fs /var/log/syslog): empty

So I started from the beginning.

On the server:

nano ~/.passwd-s3fs

(paste accessKeyId:secretAccessKey)

chmod 600 ~/.passwd-s3fs

In the bucket policy:

{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::super_bucket/*",
                "arn:aws:s3:::super_bucket"
            ]
        }
    ]
}

Then I clicked "Save" and ran:

/usr/bin/s3fs super_bucket /mnt/s3/

and again received:

s3fs: AWS Message: Access Denied


There are 3 answers

Answer by fullpipe (accepted):

And no one said that I needed to set a User Policy in AWS IAM.
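
For reference, a minimal sketch of such a user policy attached to bucket_user in IAM (the actions here are illustrative, not the exact policy used; they follow Steffen Opel's answer below and the bucket access from the question):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowListAllMyBuckets",
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Sid": "AllowBucketAndObjectAccess",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::super_bucket",
                "arn:aws:s3:::super_bucket/*"
            ]
        }
    ]
}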

Answer by Steffen Opel:

Update

Analysis

Apparently s3fs has issues with IAM support up to and including the most recent stable version 1.61 you are using; please review the IAM user permissions issue for details, specifically comment 4:

Evidently there is a call to [ListAllMyBuckets()] that is required to determine if the bucket requested exists before attempting to mount.

Now, ListAllMyBuckets() is an operation on the service rather than on a bucket or an object, which are the only entities your Resource statement currently targets; thus calling ListAllMyBuckets() is effectively denied by your current policy.

Solution

As also outlined in comment 4, you must add an additional policy fragment to address this requirement for your version of s3fs:

"Statement": [
    {
        "Effect": "Allow",
        "Action": "s3:ListAllMyBuckets",
        "Resource": "arn:aws:s3:::*"
    }
]

Alternatively, you could build s3fs version 1.61 from source after applying the patch provided in comment 9, which supposedly addresses the issue (I haven't tested the patch myself, though). Obviously a later version might include a fix for this as well; see comment 11 ff.

Good luck!


Given the intended functionality (i.e. mount a bucket as a local file system, read/write), s3fs presumably requires access to the bucket itself as well, not only to the objects contained therein, which are handled separately. Try replacing your Resource statement with the following:

"Resource": [
    "arn:aws:s3:::super_bucket",
    "arn:aws:s3:::super_bucket/*"
]

The first resource targets the bucket itself, while the latter targets the objects contained therein.
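
Merged into the bucket policy from the question, the statement would then read roughly as follows (same Sid and Principal as before; note that this alone still does not cover ListAllMyBuckets, which is what the update above addresses):

{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AddCanned",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::234234234234:user/bucket_user"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::super_bucket",
                "arn:aws:s3:::super_bucket/*"
            ]
        }
    ]
}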

Answer by rotarydial:

I was able to get this working by specifying ListBucket permissions on the bucket itself, and Put/Get/DeleteObject permissions on the bucket contents. I was following this CloudAcademy guide by way of the shell script in this repo that attempts to package it nicely. This is the working policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::super_bucket"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::super_bucket/*"
            ]
        }
    ]
}

Prior to defining it this way, I had a bunch of permissions on the bucket only, not on its contents; while I was able to log in to my FTP instance, when I attempted put test.txt I got a 553 Could not create file. message.
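
For contrast, that earlier, bucket-only setup would have looked roughly like this (a hypothetical sketch, since the original actions aren't shown; the key point is that the Resource names only the bucket and not super_bucket/*, so object-level calls such as s3:PutObject are denied):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::super_bucket"
            ]
        }
    ]
}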

I saw this in the logs during the failed put attempts. It is debug output from this command:

sudo /usr/local/bin/s3fs super_bucket \
-o use_cache=/tmp,iam_role="super_ftp_user",allow_other /home/super_ftp_user/ftp/files \
-o dbglevel=info -f \
-o curldbg \
-o url="https://s3-us-east-1.amazonaws.com" \
-o nonempty

Output:

[CURL DBG] * Connection #8 to host super_bucket.s3-us-east-1.amazonaws.com left intact
[INF]       curl.cpp:RequestPerform(2267): HTTP response code 200
[INF]     s3fs.cpp:create_file_object(918): [path=/test.txt][mode=100644]
[INF]       curl.cpp:PutRequest(3127): [tpath=/test.txt]
[INF]       curl.cpp:PutRequest(3145): create zero byte file object.
[INF]       curl_util.cpp:prepare_url(250): URL is https://s3-us-east-1.amazonaws.com/super_bucket/test.txt
[INF]       curl_util.cpp:prepare_url(283): URL changed is https://super_bucket.s3-us-east-1.amazonaws.com/test.txt
[INF]       curl.cpp:PutRequest(3225): uploading... [path=/test.txt][fd=-1][size=0]
[INF]       curl.cpp:insertV4Headers(2598): computing signature [PUT] [/test.txt] [] []
[INF]       curl_util.cpp:url_to_host(327): url is https://s3-us-east-1.amazonaws.com
[CURL DBG] * Found bundle for host super_bucket.s3-us-east-1.amazonaws.com: 0x7fa0d00d3c60 [can pipeline]
[CURL DBG] * Re-using existing connection! (#8) with host super_bucket.s3-us-east-1.amazonaws.com
[CURL DBG] * Connected to super_bucket.s3-us-east-1.amazonaws.com (123.456.789.012) port 443 (#8)
[CURL DBG] > PUT /test.txt HTTP/1.1
[CURL DBG] > Host: super_bucket.s3-us-east-1.amazonaws.com
[CURL DBG] > User-Agent: s3fs/1.88 (commit hash ***; OpenSSL)
[CURL DBG] > Accept: */*
[CURL DBG] > Authorization: xxxxxxx
[CURL DBG] > Content-Type: application/octet-stream
...
[CURL DBG] > Content-Length: 0
[CURL DBG] >
[CURL DBG] < HTTP/1.1 403 Forbidden
[CURL DBG] < x-amz-request-id: 1234567890
[CURL DBG] < x-amz-id-2: ******
[CURL DBG] < Content-Type: application/xml
[CURL DBG] < Transfer-Encoding: chunked
[CURL DBG] < Date: Tue, 19 Jan 2021 04:30:15 GMT
[CURL DBG] < Server: AmazonS3
[CURL DBG] * HTTP error before end of send, keep sending
[CURL DBG] <
[CURL DBG] * Connection #8 to host super_bucket.s3-us-east-1.amazonaws.com left intact
[ERR] curl.cpp:RequestPerform(2287): HTTP response code 403, returning EPERM. Body Text: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>1234567890</RequestId><HostId>***</HostId></Error>
[INF]       cache.cpp:DelStat(578): delete stat cache entry[path=/test.txt]