Expat-based XML parsing script not working on Linux, works on Windows


I'm writing a set of tools in Python to extract data from some XML files that are generated by a traffic simulation package. As the resulting files can be quite big, I use xml.parsers.expat to parse them.

The issue is, when I run my scripts at work on a Windows XP machine they work perfectly, but at home, on Ubuntu 10.10, the very same file gives me the following error:
ExpatError: not well-formed (invalid token): line 1, column 0

The file was originally encoded in UTF-8 but the encoding declared in the XML declaration was ascii, so I tried changing it to utf-8 (or UTF8, or utf8) without success. As the BOM was absent, I tried writing one, still without success. I also tried replacing the Windows line breaks (CR/LF) with Unix ones (LF), again without any success.

Also, the Python version at work is 2.7.1 while on my Ubuntu box it's 2.6.6, but I don't think my issue is related to that: I upgraded my work computer's Python from 2.6 to 2.7 a few weeks ago without trouble.

As I'm not an expert here, I'm running out of ideas. Any hints?

Edit: After further investigation (I have a headache now; I hate Unicode-related trouble), it looks like the issue was solved by properly setting the environment variables LANG, LC_ALL and LANGUAGE to (in my case) "fr_FR.utf-8". I don't understand why they weren't set correctly in the first place, nor why it works now...
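For reference, here is the quick check I used to see what those variables were set to (a minimal sketch; locale.getpreferredencoding() reports the encoding Python derives from the locale):

import locale
import os

# Print the locale-related environment variables that turned out to matter
for var in ("LANG", "LC_ALL", "LANGUAGE"):
    print("%s=%s" % (var, os.environ.get(var)))

# The encoding Python derives from the current locale settings
print(locale.getpreferredencoding())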

Thanks for the hand, guys!


There are 2 answers

John Machin (Best Answer)

Excerpts from the documentation:

xml.parsers.expat.XML_ERROR_INVALID_TOKEN
Raised when an input byte could not properly be assigned to a character; for example, a NUL byte (value 0) in a UTF-8 input stream.

ExpatError.lineno
Line number on which the error was detected. The first line is numbered 1.

ExpatError.offset
Character offset into the line where the error occurred. The first column is numbered 0.

The above tends to indicate that you have a problem with the very first byte in your file.
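For what it's worth, you can reproduce that exact message by feeding expat a NUL byte as the very first character (a minimal sketch, not your actual data):

import xml.parsers.expat

parser = xml.parsers.expat.ParserCreate()
try:
    parser.Parse(b"\x00<root/>", True)  # NUL as the very first byte
except xml.parsers.expat.ExpatError as e:
    print(e)  # not well-formed (invalid token): line 1, column 0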

Start with the original file, the one that worked on Windows. Edit your question to show the results of doing this:

python -c "print repr(open('win_ok_file.xml', 'rb').read(200))"

which will show unambiguously what the first 200 bytes of your file contain.

Also show us a cut-down version of your code: one that you have checked gets past the initial error on Windows but reproduces the problem on Linux.
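Something like this minimal skeleton would do (a sketch only; the file name and handler are placeholders):

import xml.parsers.expat

def start_element(name, attrs):
    print(name)  # enough to prove we got past the first token

parser = xml.parsers.expat.ParserCreate()
parser.StartElementHandler = start_element

# Binary mode, so no platform-dependent newline translation is involved
with open("myfile.xml", "rb") as f:
    parser.ParseFile(f)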

Some assertions, for what they are worth:

  • "The file was originally encoded in utf-8 and the encoding declared in the tag was ascii" ... If the encoding in the XML declaration is "ascii" but there are non-ASCII characters in the file, complying parsers should raise an exception. Are you sure of what you report?

  • The default encoding for XML documents is UTF-8. In other words, if the encoding is not mentioned in the XML declaration, or there is no XML declaration at all, the parser is required to decode using UTF-8.

  • Putting a UTF-8 BOM at the start is more likely to hinder than help (a quick way to check whether one is present is sketched after this list).

  • The XML standard requires parsers to accept CR as a valid byte in an XML document and then immediately pretend it didn't exist (except maybe in an element with xml:space="preserve"). Changing CR LF to LF is not a good idea.
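Here is the BOM check promised above (a minimal sketch; the file name is a placeholder, and it only reads the first few bytes without changing anything):

import codecs

with open("myfile.xml", "rb") as f:
    head = f.read(4)

if head.startswith(codecs.BOM_UTF8):
    print("UTF-8 BOM present")
elif head.startswith(codecs.BOM_UTF16_LE) or head.startswith(codecs.BOM_UTF16_BE):
    print("UTF-16 BOM present")
else:
    print("no BOM; first bytes: %r" % head)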

And some questions: How many bytes are in a "quite big" file? Have you considered using iterparse() from xml.etree.cElementTree or lxml?
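If memory is the concern, iterparse() lets you process elements as they stream past and discard them as you go; a minimal sketch, where the "vehicle" tag and the process() function are made-up placeholders for whatever your files actually contain:

import xml.etree.cElementTree as ET

for event, elem in ET.iterparse("myfile.xml"):  # fires on each end tag
    if elem.tag == "vehicle":   # hypothetical element of interest
        process(elem)           # placeholder for your own extraction code
        elem.clear()            # free the subtree we have finished with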

Arthurmed

I had the same problem. Instead of trying to parse the file directly like this:

document = xmltodict.parse("myfile.xml") # Parse the read document string

I parsed it indirectly, by first opening the XML document as a file object, like this:

document_file = open("myfile.xml", "r") # Open a file in read-only mode
original_doc = document_file.read() # read the file object
document = xmltodict.parse(original_doc) # Parse the read document string

and it worked.
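A likely explanation (my reading of the xmltodict documentation, not something the answer states): xmltodict.parse() expects the XML content itself, as a string or a file-like object, not a file name, so the first version tries to parse the literal string "myfile.xml" as XML. That also means you can hand it the file object directly:

import xmltodict

with open("myfile.xml", "rb") as document_file:
    document = xmltodict.parse(document_file)  # parse straight from the file object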