Can Goutte/Guzzle be forced into UTF-8 mode?

I'm scraping from a UTF-8 site, using Goutte, which internally uses Guzzle. The site declares a meta tag of UTF-8, thus:

<meta http-equiv="Content-Type" content="text/html; charset=utf-8">

However, the content type header is thus:

Content-Type: text/html

and not:

Content-Type: text/html; charset=utf-8

Thus, when I scrape, Goutte does not spot that it is UTF-8, and grabs data incorrectly. The remote site is not under my control, so I can't fix the problem there! Here's a set of scripts to replicate the problem. First, the scraper:

<?php

require_once realpath(__DIR__ . '/..') . '/vendor/goutte/goutte.phar';

$url = 'http://crawler-tests.local/utf-8.php';
use Goutte\Client;

$client = new Client();
$crawler = $client->request('get', $url);
$text = $crawler->text();
echo 'Whole page: ' . $text . "\n";

Now a test page to be placed on a web server:

<?php
// Correct
#header('Content-Type: text/html; charset=utf-8');

// Incorrect
header('Content-Type: text/html');
?>  
<!DOCTYPE html>
<html>
    <head>
        <title>UTF-8 test</title>
        <meta charset="utf-8" />
    </head>
    <body>
        <p>When the Content-Type header is incomplete, the pound sign breaks:

        £15,216</p>
    </body>
</html>

Here's the output of the Goutte test:

Whole page: UTF-8 test When the Content-Type header is incomplete, the pound sign breaks: Â£15,216

As you can see from the comments in the last script, properly declaring the character set in the header fixes things. I've hunted around in Goutte to see if there is anything that looks like it would force the character set, but to no avail. Any ideas?

There are 4 answers:

Answer from Peter (accepted):

The issue is actually with symfony/browser-kit and symfony/dom-crawler. BrowserKit's Client does not examine the HTML meta tags to determine the charset; it looks only at the Content-Type header. When the response body is handed over to the DomCrawler, it is therefore treated as the default charset, ISO-8859-1. That decision should be revisited once the meta tags have been examined, and the DomDocument rebuilt, but that never happens.

The easy workaround is to wrap $crawler->text() with utf8_decode():

$text = utf8_decode($crawler->text());

This works if the input is UTF-8. I suppose something similar can be achieved with iconv() for other encodings. However, you have to remember to do that every time you call text().
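
For instance, the iconv() equivalent would look something like this (my sketch, assuming the same situation of UTF-8 misread as ISO-8859-1; note that utf8_decode() itself is deprecated as of PHP 8.2):

// utf8_decode() converts UTF-8 back to ISO-8859-1, which undoes the double
// encoding introduced by the crawler's wrong default. iconv() expresses the
// same conversion without being tied to a single codec:
$text = iconv('UTF-8', 'ISO-8859-1', $crawler->text());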

A more generic approach is to make the DomCrawler believe that it deals with UTF-8. To that end I've come up with a Guzzle plugin that overwrites (or adds) the charset in the Content-Type response header. You can find it at https://gist.github.com/pschultz/6554265. Usage is like this:

<?php

use Goutte\Client;


$plugin = new ForceCharsetPlugin();
$plugin->setForcedCharset('utf-8');

$client = new Client();
$client->getClient()->addSubscriber($plugin);
$crawler = $client->request('get', $url);

echo $crawler->text();
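
For reference, the plugin in the linked gist boils down to roughly the following sketch. This is not the exact gist code; the event name (request.complete) and the header calls are my assumptions based on the Guzzle 3 event-subscriber API that Goutte used at the time:

<?php

use Guzzle\Common\Event;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;

class ForceCharsetPlugin implements EventSubscriberInterface
{
    private $forcedCharset;

    public static function getSubscribedEvents()
    {
        // Rewrite the response as soon as the request has completed.
        return array('request.complete' => 'onRequestComplete');
    }

    public function setForcedCharset($charset)
    {
        $this->forcedCharset = $charset;
    }

    public function onRequestComplete(Event $event)
    {
        $response = $event['response'];

        // Drop any charset the server sent, then append the forced one.
        $contentType = preg_replace(
            '/;\s*charset=[^;]+/i',
            '',
            (string) $response->getHeader('Content-Type')
        );
        $response->setHeader(
            'Content-Type',
            $contentType . '; charset=' . $this->forcedCharset
        );
    }
}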

Answer from geek-merlin:

Guzzle is faithful to what it receives, so the best way is to do the conversion yourself, like this:

// $client = \Drupal::httpClient();
$client = new \GuzzleHttp\Client();
$response = $client->get($remoteUrl);
if ($response->getStatusCode() !== 200) {
  return NULL;
}
$originalBody = $response->getBody()->getContents();

// Pull the charset parameter, if any, out of the Content-Type header.
$contentTypeHeader = $response->getHeader('content-type');
$originalEncoding = \GuzzleHttp\Psr7\Header::parse($contentTypeHeader)[0]['charset'] ?? NULL;

// Convert to UTF-8 only when the server actually declared a charset.
$body = !$originalEncoding ? $originalBody :
  mb_convert_encoding($originalBody, 'UTF-8', $originalEncoding);

Of course if the response lies about its encoding, you're lost until you work around or fix that.
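
For that case, one possible mitigation (my sketch, not part of this answer) is to fall back to sniffing the <meta> tag when the header carries no charset, before giving up:

// A sketch of a fallback detector (an assumption, not from the answer above):
// prefer the header charset, then a <meta> declaration, then the ISO-8859-1
// default that DomCrawler itself falls back to.
function detectCharset($body, $headerCharset) {
  if ($headerCharset) {
    return $headerCharset;
  }
  if (preg_match('/<meta[^>]+charset=["\']?([\w-]+)/i', $body, $matches)) {
    return $matches[1];
  }
  return 'ISO-8859-1';
}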

Answer from mushroom:

Crawler tries to detect the charset from the <meta charset> tag, but frequently that tag is missing, and Crawler then falls back to its default charset (ISO-8859-1); that is the source of the problem described in this thread.

When we pass content to the Crawler through its constructor, we lose the Content-Type header, which usually contains the charset.

Here's how we can handle it:

use Symfony\Component\DomCrawler\Crawler;

$crawler = new Crawler();
$crawler->addContent(
    $response->getBody()->getContents(),
    $response->getHeaderLine('Content-Type')
);

With this solution we use the correct charset from the server response, we are not bound to any single charset, and we no longer need to decode every string received from the Crawler (with utf8_decode() or anything else).

Answer from halfer:

I seem to have been hitting two bugs here, one of which was identified by Peter's answer. The other was the way in which I was separately using the Symfony Crawler class to explore HTML snippets.

I was doing this (to parse the HTML for a table row):

$subCrawler = new Crawler($rowHtml);

Adding HTML via the constructor, however, does not appear to offer a way to specify the character set, and I assume ISO-8859-1 is again the default.

Simply using addHtmlContent gets it right; the second parameter specifies the character set, and it defaults to UTF-8 if it is not specified.

$subCrawler = new Crawler();
$subCrawler->addHtmlContent($rowHtml); // charset defaults to UTF-8
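
And if the snippet were in some other encoding, the charset can be passed explicitly (a hypothetical example; 'ISO-8859-1' stands in for whatever encoding the source actually uses):

$subCrawler = new Crawler();
$subCrawler->addHtmlContent($rowHtml, 'ISO-8859-1');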