SilverStripe behind load balancer


I've got an instance of SilverStripe running on two servers behind an AWS load balancer. To share session information I'm running an ElastiCache Redis server, and I'm setting my PHP session store as follows:

ini_set('session.save_handler', 'redis');
ini_set('session.save_path', 'tcp://127.0.0.1:6379');
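(If the 127.0.0.1 above is a stand-in: with ElastiCache the save_path would normally point at the cluster endpoint rather than the local host, otherwise each server keeps its own session store. The hostname below is only a placeholder.)

ini_set('session.save_handler', 'redis');
ini_set('session.save_path', 'tcp://my-sessions.xxxxxx.0001.use1.cache.amazonaws.com:6379');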

After I've signed into the admin section of the CMS I can jump between servers and it remembers me. However, when switching between sections in the CMS, the main panel (loaded via an AJAX call) doesn't render. From what I can tell, whichever server receives the second request doesn't realise you already have the CMS admin loaded, and its response headers tell the browser to load a new version of the JS dependencies, which confuses the admin so it doesn't load.

Reading the docs, SilverStripe uses Zend_Cache for some extra information. I figured that if I loaded the admin interface and then deleted the cache directory, it would replicate the problem. It doesn't.

I then tried to use this module to change the storage engine that Zend_Cache uses. I added:

SS_Cache::add_backend(
    'primary_redis', 
    'Redis',
    array(
        'servers' => array(
            'host' => 'localhost', 
            'port' => 6379, 
            'persistent' => true, 
            'weight' => 1, 
            'timeout' => 5,
            'retry_interval' => 15, 
            'status' => true, 
            'failure_callback' => null
        )
    )
);
SS_Cache::pick_backend('primary_redis', 'any', 10);

to my mysite/_config.php, and it is storing some CMS information in Redis (for example the key CMSMain_SiteTreeHints9b258b19199db9f9ed8264009b6c351b). However, this still doesn't fix the problem of switching between servers in the load-balanced environment.

Where else could SilverStripe be storing cache data? Have I implemented the module correctly?


There are 2 answers

Answered by djfg (BEST ANSWER)

The default admin interface (assuming you're using 3.x) uses a JavaScript library called jquery.ondemand - this tracks files that have already been included (a rather ancient precursor to the likes of require.js, only without the AMD, and with CSS support).

To this end, the likelihood of this having anything to do with the CMS itself is minimal, considering that the web is by nature stateless and that the method you're using to save state is shared across your servers (both database and session data).

What is not shared across the individual instances in your HA cluster is the physical files. The cause here is likely (though not certain) to be the mtime stamp on the end of the URIs supplied to ondemand - originally intended to avoid browser-caching issues when the theme is altered (by a developer or otherwise automatically).

The response headers, as you've no doubt inspected, always include (no matter which endpoint is chosen by HAProxy, nginx, ELB, or whatever) X-Include-CSS and X-Include-JS, of which an example looks like:

X-Include-JS:/framework/thirdparty/jquery/jquery.js?m=1481487203,/framework/javascript/jquery-ondemand/jquery.ondemand.js?m=1481487186,/framework/admin/javascript/lib.js?m=1481487181[...]

These are sent with each request, so ondemand can inspect them and see what is already included and what still needs to be added.

(Incidentally, the size of these headers is what causes nginx header buffer issues, resulting in 502s in a 'default' setup.)
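For illustration, the ?m= value on each entry is just the file's modification time; roughly speaking (a simplified sketch, not the framework's exact code):

// Each included file gets a cache-busting suffix derived from its mtime on disk
$path = FRAMEWORK_DIR . '/thirdparty/jquery/jquery.js';
$url  = '/' . $path . '?m=' . filemtime(BASE_PATH . '/' . $path);
// e.g. /framework/thirdparty/jquery/jquery.js?m=1481487203
// If the same file carries a different mtime on each instance, the URLs differ
// and ondemand on the client treats them as includes it hasn't seen yet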

So, what do?

The static files should keep the same mtime between balanced instances if you are deploying static code - but this is something to check. Generated files, on the other hand (such as those from Requirements::combine_files), will need to be synced on (re)generation between all instances, as with all of /assets for your site, in which case the mtime should persist. Zend_Cache is quite unlikely to have any effect here, although APC may be a factor. Of course, the first thing to check in any case is whether my premise holds true - e.g. run the header responses from both back-ends through a diff tool.
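As a rough sketch of that last check in PHP (the instance addresses, Host header and session cookie below are all placeholders - hit each back-end directly rather than via the ELB):

// Fetch the same admin URL from each back-end and compare the X-Include-* headers
function fetch_include_headers($ip) {
    $ch = curl_init('http://' . $ip . '/admin/pages/');
    curl_setopt_array($ch, array(
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HEADER         => true,
        CURLOPT_HTTPHEADER     => array('Host: www.example.com'),
        CURLOPT_COOKIE         => 'PHPSESSID=<your shared session id>',
    ));
    $response = curl_exec($ch);
    curl_close($ch);
    // Keep only the headers ondemand cares about
    preg_match_all('/^X-Include-(?:JS|CSS):.*$/mi', (string) $response, $matches);
    return implode("\n", $matches[0]);
}

$a = fetch_include_headers('10.0.1.10');
$b = fetch_include_headers('10.0.1.11');
echo ($a === $b) ? "X-Include headers match\n" : "Headers differ:\n$a\n---\n$b\n";

If the two outputs differ only in the ?m= values, the mtimes are out of sync between the instances.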

Answered by Rudiger

To help those who might come across this and need a solution that hooks into the CMS, here is what I did:

class SyncRequirements_Backend extends Requirements_Backend implements Flushable {

    protected static $flush = false;

    public static function flush() {
        static::$flush = true;
    }

    public function process_combined_files() {
        // You can write your own, or copy process_combined_files() from framework/view/Requirements.php
        // and do the required syncing (e.g. rsync) at the appropriate spot, such as after a successful write
    }
}

Add Requirements::set_backend(new SyncRequirements_Backend()); to your _config.php (mine is in a separate extension, but mysite will work too).

The issue with this solution is that if the core Requirements_Backend updates, you'll be running an older version of the code; however, it is very unlikely to break anything - you've just implemented your own Requirements backend that uses the same code. You could just call the parent instead of doing it all yourself, but I couldn't find a way to run the sync only on file write; it would run every time a combined file was requested.
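For reference, a minimal sketch of that simpler parent-calling variant of process_combined_files() (the rsync user, host and remote path are placeholders, and as noted it syncs on every combined-files pass rather than only on an actual write):

public function process_combined_files() {
    // Let the framework generate/refresh the combined files as normal
    parent::process_combined_files();

    // Then push the generated files to the other instance(s).
    // 'assets/_combinedfiles' is the default combined-files folder in 3.x;
    // adjust it if you have configured a different folder.
    exec(sprintf(
        'rsync -az %s/ deploy@other-instance:%s/',
        escapeshellarg(BASE_PATH . '/assets/_combinedfiles'),
        escapeshellarg('/var/www/site/assets/_combinedfiles')
    ));
}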