Recently I tried to improve our upload and download experience, and I found that s3manager.Uploader is great at speeding up uploads of large objects by parallelizing the part uploads. The code below works well; we can make full use of our bandwidth (far better than a plain PutObject).
uploader := s3manager.NewUploaderWithClient(s.Client, func(u *s3manager.Uploader) {
u.Concurrency = 128
})
So I tried the same approach to improve download speed, but no matter what concurrency I set, s3manager.Downloader can't accelerate the download: the speed always stays below 15 MB/s.
By the way, the plain GetObject API works very well with large objects: a single GetObject download makes full use of our bandwidth, reaching the limit of about 1250 MB/s.
So I ran some benchmarks; the image above is the output of go tool pprof, and this is the WriteAtBuffer.WriteAt implementation it points at:
func (b *WriteAtBuffer) WriteAt(p []byte, pos int64) (n int, err error) {
	pLen := len(p)
	expLen := pos + int64(pLen)
	// Every concurrent part write takes the same mutex for the whole copy.
	b.m.Lock()
	defer b.m.Unlock()
	if int64(len(b.buf)) < expLen {
		if int64(cap(b.buf)) < expLen {
			if b.GrowthCoeff < 1 {
				b.GrowthCoeff = 1
			}
			// Grow the backing buffer and copy the existing contents over.
			newBuf := make([]byte, expLen, int64(b.GrowthCoeff*float64(expLen)))
			copy(newBuf, b.buf)
			b.buf = newBuf
		}
		b.buf = b.buf[:expLen]
	}
	copy(b.buf[pos:], p)
	return pLen, nil
}
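One workaround I considered (my own sketch, not an SDK API): since the object's Content-Length is known up front and the downloader writes each part to a disjoint byte range, a WriterAt over a pre-allocated buffer never needs to grow the slice and can skip the mutex entirely:

```go
package main

import (
	"fmt"
	"sync"
)

// preallocBuffer is an io.WriterAt over a buffer sized up front to the
// object's Content-Length. The downloader's part writes target disjoint
// ranges, so the copies never overlap and no lock is needed.
type preallocBuffer struct {
	buf []byte
}

func (b *preallocBuffer) WriteAt(p []byte, pos int64) (int, error) {
	if pos+int64(len(p)) > int64(len(b.buf)) {
		return 0, fmt.Errorf("write at offset %d out of range", pos)
	}
	return copy(b.buf[pos:], p), nil
}

func main() {
	// Simulate 8 concurrent part writers filling disjoint 4-byte ranges.
	b := &preallocBuffer{buf: make([]byte, 32)}
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func(part int) {
			defer wg.Done()
			chunk := []byte{byte(part), byte(part), byte(part), byte(part)}
			b.WriteAt(chunk, int64(part*4))
		}(i)
	}
	wg.Wait()
	fmt.Println(b.buf[4]) // part 1's range holds byte value 1
}
```

Downloading straight to an *os.File would have a similar effect, since its WriteAt issues positioned writes without a shared in-memory buffer.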
Here are my questions: why is s3manager.Downloader so slow? Is contention on the mutex in WriteAt serializing the concurrent part writes? And what is the recommended way to use s3manager.Downloader?