PHP Parsing with simple_html_dom, please check


I made a simple parser that saves all images per page using simple_html_dom and a GetImage class, but I had to nest one loop inside another in order to go page by page, and I think something in my code is not optimized: it is very slow and always times out or exceeds the memory limit. Could someone have a quick look at the code and point out anything obviously wrong that I did?

Here is the code, with the library includes omitted...

$pageNumbers = array(); //Array to hold number of pages to parse

$url = 'http://sitename/category/'; //target url
$html = file_get_html($url);


// Detect the paginator links and push each page number into the array so we know how many pages to parse
foreach($html->find('td.nav .str') as $pn){
    array_push($pageNumbers, $pn->innertext);               
}

// Initialize the GetImage class
$image = new GetImage;
$image->save_to = $pfolder.'/'; // Folder to save into; the value comes from the POST request.

// Loop over the page numbers and parse all images on each page.
foreach($pageNumbers as $ppp){

    $target_url = 'http://sitename.com/category/'.$ppp; // Build the URL of the page to parse.
    $target_html = file_get_html($target_url); // Load the page HTML so we can find the images on it.

    // Inner loop: find and save each image on the page.
    foreach($target_html->find('img.clipart') as $element) {
        $image->source = url_to_absolute($target_url, $element->src);
        $get = $image->download('curl'); // download via cURL
        echo 'saved '.url_to_absolute($target_url, $element->src).'<br />';
    }

}

Thank you.


There are 2 answers

ZoFreX

You are doing quite a lot here; I'm not surprised the script times out. You download multiple web pages, parse them, find images in them, and then download those images... how many pages, and how many images per page? Unless we're talking about very small numbers, this is to be expected.

I'm not sure what your question really is, given that, but I'm assuming it's "how do I make this work?". You have a few options; it really depends on what this is for. If it's a one-off hack to scrape some sites, ramp up the memory and time limits, maybe chunk up the work a little, and next time write it in something more suitable ;)
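
For the quick-fix route, a minimal sketch (the limit values and the $_GET['offset'] parameter are illustrative assumptions, not part of the original code):

// Raise the limits for this long-running request.
ini_set('memory_limit', '512M'); // more room for the DOM objects
set_time_limit(0);               // remove the max_execution_time cap

// Process the pages in chunks instead of all at once, e.g. ten per run.
$offset    = isset($_GET['offset']) ? (int) $_GET['offset'] : 0;
$chunkSize = 10;

foreach (array_slice($pageNumbers, $offset, $chunkSize) as $ppp) {
    // ... same per-page parsing and image download as in the question ...
}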

If this is something that happens server-side, it should probably happen asynchronously to user interaction - i.e. rather than the user requesting some page which has to do all this before returning, the work should happen in the background. It wouldn't even have to be PHP; you could have a script running in any language that gets passed things to scrape and scrapes them.
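
As a rough illustration of that idea (the file names and the scrape_category() helper are assumptions for this sketch): the user-facing request only records the job, and a command-line worker picks it up later.

// enqueue.php - called from the user-facing request; it only records the job
file_put_contents('scrape_queue.txt', $url . PHP_EOL, FILE_APPEND | LOCK_EX);

// worker.php - run from the command line, e.g. by cron: php worker.php
$jobs = file('scrape_queue.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
foreach ($jobs as $jobUrl) {
    scrape_category($jobUrl); // hypothetical function wrapping the question's loops
}
file_put_contents('scrape_queue.txt', ''); // clear the queue once everything is done

No web-server execution time limit applies on the CLI, so the worker can take as long as it needs.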

akeane

I suggest making a function to do the actual simple_html_dom processing. I usually use the following 'template'... note the 'clean up memory' section. Apparently there is a memory leak in PHP 5, or at least I read that somewhere.

function scraping_page($iUrl)
{
    // create HTML DOM
    $html = file_get_html($iUrl);

    // get the image elements
    $aObj = $html->find('img');

    // do something with the element objects

    // clean up memory (prevent memory leaks in PHP 5)
    $html->clear();  // **** very important ****
    unset($html);    // **** very important ****

    return;  // could also return something: an array, a string, whatever
}
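
For example, if the template is given the question's selector and made to return the image src values, the outer loop from the question could use it like this (an untested sketch; img.clipart, GetImage, and url_to_absolute are taken from the question):

function scrape_image_sources($iUrl)
{
    // create HTML DOM
    $html = file_get_html($iUrl);

    // collect the src attribute of each matching image
    $sources = array();
    foreach ($html->find('img.clipart') as $element) {
        $sources[] = $element->src;
    }

    // clean up memory before the next page is loaded
    $html->clear();
    unset($html);

    return $sources;
}

foreach ($pageNumbers as $ppp) {
    $target_url = 'http://sitename.com/category/'.$ppp;

    foreach (scrape_image_sources($target_url) as $src) {
        $image->source = url_to_absolute($target_url, $src);
        $image->download('curl');
    }
}

This way only one DOM tree is held in memory at a time.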

Hope that helps.