I tried to create a zip file in a directory mounted with goofys, but it failed with these error messages:
$ su - foo-user
$ zip hoge.zip hoge
updating: hoge
zip I/O error: Operation not supported
zip error: Input file read failure (was zipping hoge)
Are there any clues to solving this problem?
What I tried
Making a zip file in another directory and then copying it into the mount point succeeds, so it does not look like a permission/authorization issue.
$ zip /tmp/hoge.zip hoge
adding: hoge (stored 0%)
$ ll /tmp/hoge.zip
-rw-rw-r-- 1 foo-user foo-user 163 Apr 4 17:52 hoge.zip
$ cp /tmp/hoge.zip (path of the mount-point)
$ ll
total 5
-rw-r--r-- 1 foo-user foo-user 5 Mar 26 10:56 hoge
-rw-r--r-- 1 foo-user foo-user 163 Apr 4 17:48 hoge.zip
System configuration
- OS: Amazon Linux (EC2)
- Goofys version: 0.19.0-use
The permission of the mount point:
drwxr-xr-x 2 foo-user foo-user 4096 Apr 4 17:48 s3
The permission of the input file:
-rw-r--r-- 1 foo-user foo-user 5 Mar 26 10:56 hoge
Setting of /etc/fstab:
(path of goofys installed)/goofys#(s3-bucket-name) (path of the mount point) fuse _netdev,allow_other,--file-mode=0644,--uid=502,--gid=502 0 0
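For reference, the fstab entry above should be roughly equivalent to mounting manually like this (bucket name and paths are placeholders; flag names are from goofys's documented options):

```
# Manual mount equivalent of the fstab entry (sketch, not verified on 0.19.0)
goofys --file-mode=0644 --uid=502 --gid=502 -o allow_other (s3-bucket-name) (path of the mount point)
```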
Uid/gid of foo-user:
$ id
uid=502(foo-user) gid=502(foo-user) groups=502(foo-user)
S3 is not a filesystem. Goofys tries (admirably) to bridge the gap between a filesystem and an object store, but there is an insurmountable impedance mismatch that requires compromises or limitations. Goofys has chosen the path of optimum performance:
Zip file creation uses random writes, which would explain why using the -b option resolves the issue: by creating its temporary file elsewhere and then copying the finished archive into place, zip avoids random writes to the bucket. Random writes to S3 can only be accomplished by dramatically deferring writes or by repeatedly overwriting the object with each random write, which wouldn't perform well and could sacrifice reliability, durability, or consistency.
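Concretely, the workaround looks like this. A minimal sketch, assuming zip is installed; MOUNT stands in for the goofys mount point and is simulated here with a plain temp directory so the commands run anywhere:

```shell
# Work around goofys's lack of random-write support by telling zip to
# build its temporary archive on a local filesystem (-b /tmp) and only
# copy the finished file into the mount point.
MOUNT="$(mktemp -d)"            # substitute your real goofys mount point
echo "hello" > "$MOUNT/hoge"
cd "$MOUNT"
zip -b /tmp hoge.zip hoge       # -b DIR: put the temporary zip file in DIR
ls -l hoge.zip
```

Only the final sequential copy of the completed archive touches the mount, which is the access pattern goofys handles well.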