Some websites, for example http://www.idealo.co.uk, seem to serve only static HTML, although their content is dynamic.
For example, if I navigate through a certain category, I get a link to a static HTML page:
http://www.idealo.co.uk/cat/5666/electric-guitars.html
Now if I apply a custom filter, I again get a link to something that looks like static HTML:
http://www.idealo.co.uk/cat/5666F456496-735760-1502100/electric-guitars.html
How is this achieved? Are there any frameworks out there that help to "pre-generate" all possible dynamic pages, in such a way that whenever a new input is given, the page already exists (i.e. the static HTML is already available)?
Background: we run a small search engine for real estate offers. Offers are updated by our scraper once a day (the content is static throughout the day). The content is searchable on a Ruby on Rails website.
As the traffic increases, performance is becoming an issue. I'm wondering if there is any framework or tool that could batch-generate all our searches so that we could serve static HTML.
Their site isn't static. They're using URL rewriting (e.g. mod_rewrite) to translate the pretty input URLs into requests that a script can satisfy. For example, a category URL like /cat/5666/electric-guitars.html might be rewritten internally into a query-string request handled by a script, with 5666 passed as the category ID.
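A minimal sketch of how such a rule could look, assuming an Apache server; the script name category.php and the parameter names are illustrative guesses, not anything idealo.co.uk actually exposes:

```apache
# Hypothetical .htaccess rule: capture the numeric category ID (and the
# optional F...-style filter code) from the pretty URL and hand both to a
# script. The human-readable slug before .html is never used.
RewriteEngine On
RewriteRule ^cat/([0-9]+)(F[0-9-]+)?/[^/]+\.html$ category.php?id=$1&filter=$2 [L,QSA]
```

With a rule like this, /cat/5666/electric-guitars.html and /cat/5666/foo.html both resolve to category.php?id=5666, which is exactly why the trick below works.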
A quick trick to test this is to go to /cat/5666/foo.html: if the same category page loads, the slug is decorative and only the numeric ID is actually used.
The use of .html here is probably meant to hide what kind of scripting is used on the site, as a weak security-through-obscurity measure.

In response to your problem: no, there's no (easy) way to pre-generate all possible results as static HTML files. You're looking at potentially billions of filter permutations. If you're having performance issues, look into performance profiling, caching, query optimisation, etc.
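For the once-a-day update pattern you describe, caching is usually the biggest win. A minimal sketch of low-level caching in Rails, assuming a SearchesController and an Offer model with a search class method (both names are illustrative, not from your codebase):

```ruby
# Cache each distinct search result for a day; since the scraper only
# updates offers once a day, stale entries simply age out on their own.
class SearchesController < ApplicationController
  def show
    @results = Rails.cache.fetch(search_cache_key, expires_in: 24.hours) do
      Offer.search(params[:query],
                   city: params[:city],
                   min_price: params[:min_price],
                   max_price: params[:max_price])
    end
  end

  private

  # One cache entry per query/filter combination actually requested,
  # so only searches people run get materialised; there is no need to
  # pre-generate billions of permutations up front.
  def search_cache_key
    ["search", params[:query], params[:city],
     params[:min_price], params[:max_price]].join("/")
  end
end
```

This gives you the effect you were after (the expensive work happens once per distinct search per day) without having to enumerate every possible page in advance.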