I'm loading the contents of certain external pages to monitor mentions of particular topics. I use file_get_contents for this, and it works fine most of the time.
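For reference, this is roughly what the fetch looks like; the URL and keyword are simplified placeholders:

```php
<?php
// Simplified sketch of the fetch I'm doing; the URL and keyword are placeholders.
$url = 'https://example.com/some-page';

// Stream context so the request carries a user agent and a timeout.
$context = stream_context_create([
    'http' => [
        'method'  => 'GET',
        'header'  => "User-Agent: Mozilla/5.0 (compatible; mention-monitor)\r\n",
        'timeout' => 10,
    ],
]);

$html = file_get_contents($url, false, $context);

if ($html !== false && stripos($html, 'some topic') !== false) {
    // The real monitoring logic is more involved; this is just the basic idea.
    echo "Mention found on $url\n";
}
```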
However, this fails for one particular URL. If I switch to cURL, I can see what is loaded, and it appears that the site uses JavaScript to load the actual content remotely after the initial page load. Is there a way to work around this and only capture the contents after that remote content has loaded?
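For comparison, the cURL version I tried looks roughly like this (again with a placeholder URL). It does return markup, but only the initial document; the text I actually want to monitor is injected later by JavaScript, so it never shows up in the response:

```php
<?php
// Roughly the cURL equivalent of the fetch above; the URL is a placeholder.
$url = 'https://example.com/some-page';

$ch = curl_init($url);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,  // return the body instead of echoing it
    CURLOPT_FOLLOWLOCATION => true,  // follow any redirects
    CURLOPT_USERAGENT      => 'Mozilla/5.0 (compatible; mention-monitor)',
    CURLOPT_TIMEOUT        => 10,
]);

$html = curl_exec($ch);

if ($html === false) {
    echo 'cURL error: ' . curl_error($ch) . "\n";
} else {
    // This prints the initial HTML: mostly placeholders and <script> tags,
    // not the remote content that JavaScript loads afterwards.
    echo substr($html, 0, 500), "\n";
}

curl_close($ch);
```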
Here's an example URL whose contents I cannot capture with either file_get_contents or cURL:
I suspect, but have not tried, that it might be possible to get this working with something like PhantomJS. However, I would prefer a PHP-based solution, and PhantomJS appears to have been abandoned some five years ago.