Amazon S3 - ColdFusion's fileExists breaks when file was deleted by s3cmd


I'm running a site on ColdFusion 9 that stores cached information on Amazon S3.

The ColdFusion app builds the files and puts them into Amazon S3. Every N hours, the cache gets flushed with a bash script that executes s3cmd del, because it's much more efficient than ColdFusion's fileDelete or directoryDelete.

However, after the file has been deleted by s3cmd, ColdFusion's fileExists() still reports it as an existing file, even though fileRead() can no longer retrieve its contents.

For the ColdFusion app, I provide the S3 credentials in Application.cfc, and they are the same keys used by s3cmd, so I don't think it's a permissions issue.
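For reference, the Application.cfc wiring looks roughly like this; CF9's built-in S3 support lets the file functions resolve s3:// paths once these keys are set (the key values below are placeholders):

```
component {
    this.name = "myApp";

    // ColdFusion 9 native S3 integration: applies to all s3:// paths
    this.s3.accessKeyId  = "YOUR_ACCESS_KEY";   // placeholder
    this.s3.awsSecretKey = "YOUR_SECRET_KEY";   // placeholder
}
```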

Let's run through the process:

// Create an S3 directory with 3 files
fileWrite( myBucket & 'rabbits/bugs-bunny.txt', 'Hi there, I am Bugs Bunny' );
fileWrite( myBucket & 'rabbits/peter-rabbit.txt', 'Hi there, I am Peter Rabbit' );
fileWrite( myBucket & 'rabbits/roger-rabbit.txt', 'Hi there, I am Roger Rabbit' );

 

writeDump( var = directoryList( myBucket & 'rabbits/', true, 'name' ), label = 'Contents of the rabbits/ folder on S3' );


 

// Delete one of the files with ColdFusion's fileDelete
fileDelete( myBucket & 'rabbits/roger-rabbit.txt' );

 

writeDump( var = directoryList( myBucket & 'rabbits/', true, 'name' ), label = 'Contents of the rabbits/ folder on S3' );


 

// Now, let's delete a file using the command line:
[~]$ s3cmd del s3://myBucket/rabbits/peter-rabbit.txt
File s3://myBucket/rabbits/peter-rabbit.txt deleted

 

writeDump( var = directoryList( myBucket & 'rabbits/', true, 'name' ), label = 'Contents of the rabbits/ folder on S3' );


 

// So far, so good!
// BUT!... ColdFusion still thinks that peter-rabbit.txt exists, even
// though it cannot display its contents

writeOutput( 'Does bugs-bunny.txt exist?: ' & fileExists(myBucket & 'rabbits/bugs-bunny.txt') );
writeOutput( 'Then show me the content of bugs-bunny.txt: ' & fileRead(myBucket & 'rabbits/bugs-bunny.txt') );

writeOutput( 'Does peter-rabbit.txt exist?: ' & fileExists(myBucket & 'rabbits/peter-rabbit.txt') );
writeOutput( 'Then show me the content of peter-rabbit.txt: ' & fileRead(myBucket & 'rabbits/peter-rabbit.txt') );
// Error on fileRead(peter-rabbit.txt) !!!



There are 2 answers

Xevi Pujol (accepted answer):

I agree with the comment by @MarkAKruger that the problem here is latency.

Given that ColdFusion can't reliably tell whether a file exists, but DOES consistently read up-to-date contents when they are available (and consistently fails to read them when they are not), I've come up with this solution:

string function cacheFileRead(
    required string cacheFileName
){
    var strContent = '';

    try {
        strContent = fileRead( ARGUMENTS.cacheFileName );
    } catch ( any e ) {
        // A deleted or unreadable file is treated as an empty cache entry
        strContent = '';
    }

    return strContent;
}
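Callers then treat an empty string as a cache miss instead of trusting fileExists(). A minimal usage sketch (the rebuild step is left as a comment, since it depends on the app):

```
var cached = cacheFileRead( myBucket & 'rabbits/peter-rabbit.txt' );
if ( len( cached ) ) {
    // Cache hit: use the content
    writeOutput( cached );
} else {
    // Cache miss (file deleted by s3cmd or unreadable): rebuild and re-store it
}
```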
Mark A Kruger:

This answer assumes latency is your problem as I have asserted in the comments above.

I think I would keep track of when s3cmd runs. If you are running it via cfexecute, store a timestamp in the application scope, a file, or a DB table. Then, when checking for a file: if the command has run within the last N minutes (you'll have to experiment to find a value that makes sense), recache automatically. Once N minutes have passed, you can rely on your existing checks as reliable again.
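A minimal sketch of that timestamp gate, assuming N = 15 minutes and a hypothetical rebuildCacheFile() helper:

```
// Record the flush time whenever s3cmd is launched via cfexecute
application.lastS3Flush = now();

// Later, when deciding whether fileExists() can be trusted:
var minutesSinceFlush = dateDiff( 'n', application.lastS3Flush, now() );
if ( minutesSinceFlush < 15 ) {
    // Inside the latency window: don't trust fileExists(); rebuild unconditionally
    rebuildCacheFile( cachePath );   // hypothetical helper
} else {
    // Outside the window: the normal existence check is reliable again
    if ( !fileExists( cachePath ) ) {
        rebuildCacheFile( cachePath );
    }
}
```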

If you are not running s3cmd from cfexecute, try creating a script that updates the timestamp in the application scope, then add a curl command to your s3cmd script that hits that CF script, keeping the two processes in sync.

Your other option is to call fileExists() constantly (not a good idea; it is very expensive), or to keep track of what is and isn't cached some other way that can be updated in real time, such as a DB table. You would then clear the table from your s3cmd script (perhaps using the mysql command line).
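The DB-backed index could be sketched like this; the table name cache_index and datasource myDSN are made up for illustration:

```
<!--- After writing a cache file to S3, record it in the index --->
<cfquery datasource="myDSN">
    INSERT INTO cache_index ( cache_path, created_at )
    VALUES ( <cfqueryparam value="#cachePath#" cfsqltype="cf_sql_varchar">, NOW() )
</cfquery>

<!--- Existence check: trust the index instead of fileExists() --->
<cfquery name="qCached" datasource="myDSN">
    SELECT 1 FROM cache_index
    WHERE cache_path = <cfqueryparam value="#cachePath#" cfsqltype="cf_sql_varchar">
</cfquery>
```

The s3cmd flush script would then run something like `mysql -e "TRUNCATE cache_index"` right after the delete, so the index never claims a file that S3 no longer has.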

I may think of something else for you. That's all I have for now. :)