Temporary tile cache for MapServer


I searched Google and Stack Overflow to see if anyone had a solution to my problem, but couldn't find anyone with the same issue.

So, currently I'm running a Debian machine with MapServer installed on it. The server also runs a webserver for displaying map data in the browser. Map generation is dynamic: based on the layer definitions in a database, I build a mapfile in PHP, and the map rendered from that generated mapfile is shown to the user. The data is defined partly in the database and partly in SHP files (both combined in a single mapfile).

It is fully dynamic. By that I mean the user can enable/disable any of the layers, or click inside a polygon (select some features on the map) and have the selection colored (a new mapfile is generated based on the selection and the tiles are re-generated).

Executing all of that, from selecting an area to coloring the selected features, sometimes takes too long for a good user experience.

As a solution I'd like to use some kind of temporary tile cache that works per user and whose contents can be deleted whenever the user selects items on the map or enables/disables one of the layers.

P.S. I have already applied all the optimizations suggested in the MapServer documentation.

Thanks for any help.

Answer by Hal Mueller:

It sounds to me like your problem is not going to be helped by server-side caching. If all of the tiles depend on user selections, then you're going to be generating a bunch of new tiles every time there's an interaction.

I've been using MapCache to solve a similar problem, where I render a tileset in response to a user query. But I've broken my tiles up into multiple logical layers, and I do the compositing on the browser side. This lets me cache the tiles for the various queries server-side, and it sped up performance immensely. I did seed the cache down to zoom level 12, and I needed to use the BerkeleyDB cache type to keep from running out of inodes.
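For reference, here is a minimal sketch of what that setup looks like in mapcache.xml. The layer name, paths, and WMS URL are hypothetical stand-ins for your own:

    <!-- One cached tileset per logical layer. The BerkeleyDB backend stores
         all tiles in a few database files instead of one file per tile,
         which is what avoids the inode exhaustion mentioned above. -->
    <cache name="bdb_cache" type="bdb">
       <base>/var/cache/mapcache</base>
    </cache>

    <source name="roads_src" type="wms">
       <http><url>http://localhost/cgi-bin/mapserv?map=/maps/roads.map</url></http>
       <getmap>
          <params>
             <LAYERS>roads</LAYERS>
             <FORMAT>image/png</FORMAT>
          </params>
       </getmap>
    </source>

    <tileset name="roads">
       <source>roads_src</source>
       <cache>bdb_cache</cache>
       <grid>GoogleMapsCompatible</grid>
       <format>PNG</format>
    </tileset>

Seeding down to zoom level 12 is then a one-liner with the mapcache_seed utility:

    mapcache_seed -c /etc/mapcache/mapcache.xml -t roads -z 0,12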

I'm using Leaflet.js for the browser-side rendering, but you should also consider OpenLayers.
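Browser-side, the compositing amounts to stacking one tile layer per cached tileset. A minimal Leaflet sketch, assuming the hypothetical MapCache URL and tileset names from above (MapCache's TMS endpoint flips the Y axis, hence tms: true):

    // Each logical layer is its own MapCache tileset, served as transparent
    // PNGs and stacked in the browser instead of composited on the server.
    var base = L.tileLayer(
      'http://example.com/mapcache/tms/1.0.0/base@GoogleMapsCompatible/{z}/{x}/{y}.png',
      { tms: true });
    var roads = L.tileLayer(
      'http://example.com/mapcache/tms/1.0.0/roads@GoogleMapsCompatible/{z}/{x}/{y}.png',
      { tms: true, opacity: 1.0 });

    var map = L.map('map', { layers: [base, roads] }).setView([46.05, 14.5], 12);

    // The layers control gives the user the enable/disable toggles; toggling
    // a layer only fetches that layer's tiles, the rest stay cached.
    L.control.layers({ 'Base': base }, { 'Roads': roads }).addTo(map);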


After looking at the source code, I have some other ideas.

It looks like you're drawing each layer the same way each time. Is that right? That is, the style and predicate of a particular layer never change: every user who has a layer selected sees the same image for it. But the combination of layers you show does change, based on an OpenLayers control? If that's the case, you don't need per-user caching on the server. Instead, cache per layer, and let the user's browser handle the client-side caching.

A quick technique for finding slow layers is to turn them all off, then re-enable them one by one to find the culprit. Invoke MapServer from the command line and time the runs; that gives you better precision than running it through your webserver.
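The shp2img utility that ships with MapServer is handy for this. A sketch, with hypothetical paths and layer names:

    # Time a full draw of the mapfile outside the webserver.
    time shp2img -m /maps/site.map -o /tmp/all.png

    # Then draw one layer at a time (-l) to isolate the slow one;
    # -all_debug 5 prints per-layer render times to stderr.
    time shp2img -m /maps/site.map -o /tmp/roads.png -l roads -all_debug 5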

You mentioned you're serving the images in Web Mercator (EPSG:3857) while the layers are in Gauss-Krüger (EPSG:3912). Reprojecting vectors on the fly is expensive; reprojecting rasters on the fly is very expensive. If you can, reproject the data ahead of time and store it in 3857 (add an additional geometry column).
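For the vector side, that is a one-time ST_Transform pass in PostGIS. A sketch with hypothetical table and column names:

    -- Store a pre-projected copy of the geometry so MapServer can read
    -- EPSG:3857 directly instead of reprojecting EPSG:3912 on every draw.
    ALTER TABLE parcels ADD COLUMN geom_3857 geometry(MultiPolygon, 3857);
    UPDATE parcels SET geom_3857 = ST_Transform(geom, 3857);
    CREATE INDEX parcels_geom_3857_idx ON parcels USING GIST (geom_3857);

For the rasters, gdalwarp -t_srs EPSG:3857 can pre-warp the files once instead of on every request.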

I don't know what a DOF file is (maybe a Digital Obstacle File?). Perhaps preload the DOF file into PostGIS too; that would eliminate the two pieces you think are problematic.
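If the DOF data turns out to be vector (and the same applies to your SHP files), shp2pgsql can load and reproject it in one pass. A hypothetical invocation:

    # -s from:to reprojects on load, -I creates a spatial (GiST) index;
    # the file, table, and database names are placeholders.
    shp2pgsql -s 3912:3857 -I dof.shp public.dof | psql -d gisdb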

Take a look at the SQL queries that PostGIS is performing, and make sure they are using indexes.
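EXPLAIN ANALYZE on the kind of bounding-box query MapServer issues will show whether an index is used; the table name and coordinates below are hypothetical:

    -- A sequential scan here means the spatial index is missing or unused.
    EXPLAIN ANALYZE
    SELECT geom
    FROM parcels
    WHERE geom && ST_MakeEnvelope(460000, 95000, 470000, 105000, 3912);

    -- The && bounding-box operator can only use a GiST index:
    CREATE INDEX parcels_geom_idx ON parcels USING GIST (geom);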

In any case, these individual layers should go into MapCache, in my opinion. Here is a video of a September 2014 talk by the MapCache project leader.