Carrierwave + S3 Storage + Counter Cache Taking too Long

I have a simple app that receives POSTed images via an API and uploads them to S3 via CarrierWave. My Photos table has a counter_cache as well.
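For context, the setup looks roughly like this (class and association names here are illustrative, not copied from my app):

```ruby
# app/uploaders/image_uploader.rb
class ImageUploader < CarrierWave::Uploader::Base
  storage :fog  # CarrierWave's S3 storage goes through the fog gem
end

# app/models/photo.rb
class Photo < ActiveRecord::Base
  belongs_to :album, counter_cache: true  # maintains albums.photos_count
  mount_uploader :image, ImageUploader
end
```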

About 80% of my transactions are HUGE, 60 seconds or more, and over 90% of that time is spent uploading the image to S3 and updating the counter cache.

Does anybody have a clue why the upload takes so long and why the counter cache queries are so slow?

Screenshots attached: New Relic report, transaction trace, SQL trace.

Just added some photos on http://carrierwave-s3-upload-test.herokuapp.com

Behavior was similar.

Just removed counter_cache from my code and did some more uploading... Odd behavior again.


EDIT 1

Logs from the last batch upload, with EXCON_DEBUG set to true: https://gist.github.com/rafaelcgo/561f516a85823e30fbad


EDIT 2

My logs weren't showing any EXCON debug output, which is how I realized I was using fog 1.3.1. I updated to fog 1.19.0 (which pulls in a newer version of the excon gem) and everything works nicely now.
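For reference, the fix boils down to a one-line Gemfile bump plus a `bundle update fog` (the exact constraint below is just an example):

```ruby
# Gemfile -- move off the fog 1.3.x series
gem 'fog', '~> 1.19.0'
```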

Tip: if you need to debug external connections, use a newer version of excon and set the environment variable EXCON_DEBUG=true to get verbose output like this: https://gist.github.com/geemus/8097874
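If you'd rather flip it on from code than from the environment (say, in a Rails initializer), a sketch using excon's connection defaults should have the same effect; the env var is the documented route:

```ruby
require 'excon'

# Same effect as EXCON_DEBUG=true, enabled programmatically
Excon.defaults[:debug_request]  = true  # log outgoing request details
Excon.defaults[:debug_response] = true  # log incoming response details
```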


EDIT 3

I updated the fog gem and now it's sweet. I don't know why the old versions of fog and excon had this odd performance.

1 Answer

Answered by Taavo:

Three clues, but not the whole story:

  1. CarrierWave transfers the file to S3 inside your database transaction. Because counter_cache's update also happens inside that transaction, it's possible your benchmarking thinks the counter update is taking forever, when really it's the file transfer that's taking forever (see the sketch after this list for one way to move the transfer out of the request).

  2. Last I checked it wasn't even possible for a Heroku application dyno to sustain a connection as long as you're seeing. You should be seeing H12 or H15 errors in your logs if you've got synchronous uploads going past about 30 seconds. More on Heroku timeouts here.

  3. Have you tried updating fog? 1.3.1 is a year and a half old, and they've probably fixed a bug or two since then.
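On point 1, one common escape hatch is to push the S3 transfer into a background job, for example with the carrierwave_backgrounder gem. A sketch, assuming a queue backend (Sidekiq, Delayed::Job, ...) and an `image_tmp` string column on photos for the temporarily stored file:

```ruby
# app/uploaders/image_uploader.rb
class ImageUploader < CarrierWave::Uploader::Base
  include ::CarrierWave::Backgrounder::Delay
  storage :fog
end

# app/models/photo.rb
class Photo < ActiveRecord::Base
  mount_uploader :image, ImageUploader
  store_in_background :image  # enqueue the S3 upload instead of blocking the request
end
```

That way the request only saves the record, and a worker performs the slow transfer outside the transaction, so the counter cache update stops being blamed for it.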

Past that, the only thing that comes to mind is that you're uploading a file of epic scale. I've been disappointed in both the latency and throughput I've been able to achieve from Heroku to S3, so that could also be involved.

Obligatory: You aren't letting users upload directly to your dyno, are you?
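If the answer is yes, the usual cure is a presigned URL, so the bytes go from the client straight to S3 and never touch the dyno. A minimal sketch with fog (bucket name and key are made up):

```ruby
require 'fog'
require 'securerandom'

storage = Fog::Storage.new(
  provider:              'AWS',
  aws_access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
  aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
)

expires = Time.now.to_i + 600  # URL valid for 10 minutes
url = storage.put_object_url('my-bucket',
                             "uploads/#{SecureRandom.uuid}.jpg",
                             expires)
# Hand `url` to the client; it PUTs the file there directly.
```

The client uploads to that URL and then posts just the object key back to your app, so the dyno handles a tiny request instead of the whole file.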