I'm trying to parse an HTML page using lxml in Python.

The HTML has this structure:

   <h5>Title</h5>
   <p>Some text <b>with</b> <i>other tags</i>.</p>
   <p>More text.</p>
   <p>More text[2].</p>

   <h5>Title[2]</h5>
   <p>Description.</p>

   <h5>Title[3]</h5>
   <p>Description[1].</p>
   <p>Description[2].</p>

   ***
   and so on...
   ***

I need to parse this HTML into the following JSON:

   [
       {
           "title": "Title",
           "text": "Some text with other tags.\nMore text.\nMore text[2]."
       },
       {
           "title": "Title[2]",
           "text": "Description."
       },
       {
           "title": "Title[3]",
           "text": "Description[1].\nDescription[2]"
       }
   ]
I can read all h5 tags with titles and write them into JSON using this code:

array = []
for title in tree.xpath('//h5/text()'):
    data = {
        "title": title,
        "text": ""
    }
    array.append(data)

with io.open('data.json', 'w', encoding='utf8') as outfile:
    str_ = json.dumps(array,
                      indent=4, sort_keys=True,
                      separators=(',', ' : '), ensure_ascii=False)
    outfile.write(str_)

The problem is, I don't know how to read the contents of all the paragraphs between the <h5> headings and put them into the text JSON field.

2 Answers

Tomalak

To get all text "between" two elements, for example between two headings, there is no other way than this:

  • walk the entire tree (we'll use .iterwalk() because we must make a distinction between start and end of elements)
  • create a data item for each heading one encounters (let's call it the current_heading)
  • collect into a list all the individual text bits of any other element that comes by
  • every time a new heading is encountered, store the data collected so far and begin a new data item
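To see the start/end distinction concretely, here is a minimal, self-contained demo (the tiny document is made up purely for illustration):

```python
from lxml import etree as ET

# a tiny document, just to show the order of iterwalk events
tree = ET.fromstring('<div><h5>T</h5><p>x</p></div>')

# each element fires a 'start' before its children and an 'end' after them
events = [(event, elem.tag)
          for event, elem in ET.iterwalk(tree, events=('start', 'end'))]
```

This ordering is what lets the algorithm tell "text inside a heading" apart from "text after a heading".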

Every element in an ElementTree can have a .text and a .tail:

<b>This will be the .text</b> and this will be the .tail

We must collect both, otherwise text will be missing from the output.
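The distinction can be checked quickly in a Python shell:

```python
from lxml import html

# the <b> element from the example above
p = html.fromstring('<p><b>This will be the .text</b> and this will be the .tail</p>')
b = p.find('b')
# b.text is the text inside <b>; b.tail is the text after </b>,
# up to the next sibling element
```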

The following keeps track of where we are in the HTML tree using a stack, so .text and .tail of nested elements are collected in the right order.

import lxml.etree as ET

collected_text = []
data = []
stack = []
current_heading = {
    'title': '',
    'text': []
}
html_headings = ['h1', 'h2', 'h3', 'h4', 'h5', 'h6']

def normalize(strings):
    return ''.join(strings)

for event, elem in ET.iterwalk(tree, events=('start', 'end')):
    # when an element starts, collect its .text
    if event == 'start':
        stack.append(elem)

        if elem.tag in html_headings:
            # reset any collected text, b/c now we're starting to collect
            # the heading's text. There might be nested elements in it.
            collected_text = []

        if elem.text:
            collected_text.append(elem.text)

    # ...and when it ends, collect its .tail
    elif event == 'end' and elem == stack[-1]:
        stack.pop()

        # headings mark the border between data items
        if elem.tag in html_headings:
            # normalize text in the previous data item
            current_heading['text'] = normalize(current_heading['text'])

            # start new data item
            current_heading = {
                'title': normalize(collected_text),
                'text': []
            }
            data.append(current_heading)

            # reset any collected text, b/c now we're starting to collect
            # the text after the heading
            collected_text = []

        if elem.tail:
            collected_text.append(elem.tail)

        current_heading['text'] = collected_text

# normalize text in final data item
current_heading['text'] = normalize(current_heading['text'])

When I run this against your sample HTML, I get this output (JSON formatted):

[
    {
        "text" : "\n   Some text with other tags.\n   More text.\n   More text[2].\n\n   ",
        "title" : "Title"
    },
    {
        "text" : "\n   Description.\n\n   ",
        "title" : "Title[2]"
    },
    {
        "text" : "\n   Description[1].\n   Description[2].\n\n   ***\n   and so on...\n   ***\n",
        "title" : "Title[3]"
    }
]

My normalize() function is very simple and retains all the newlines and other whitespace that is part of the HTML source code. Write a more sophisticated function if you want a nicer result.
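For example, a whitespace-collapsing variant (one possible refinement, not part of the original answer) could look like this:

```python
import re

def normalize(strings):
    # join the collected fragments, collapse whitespace runs into single
    # spaces, and trim leading/trailing whitespace
    return re.sub(r'\s+', ' ', ''.join(strings)).strip()
```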

pguardiario

There's a simpler way to do this: just keep track of the position of the next h5 and make sure you select only the p elements with a lower position:

data = []

for h5 in doc.xpath('//h5'):
    more_h5s = h5.xpath('./following-sibling::h5')
    position = int(more_h5s[0].xpath('count(preceding-sibling::*)')) if len(more_h5s) > 0 else 999
    ps = h5.xpath('./following-sibling::p[position()<' + str(position) + ']')
    data.append({
        "title": h5.text,
        "text": "\n".join(map(lambda p: p.text_content(), ps))
    })

It might even be simpler still to just "follow" the following-sibling::* axis until the sibling is no longer a p.
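A sketch of that idea, using lxml's itersiblings() to walk forward from each heading and stop at the first non-p sibling (the sample fragment here is invented to match the question's structure):

```python
from lxml import html

doc = html.fromstring("""
<div>
  <h5>Title</h5>
  <p>Some text <b>with</b> <i>other tags</i>.</p>
  <p>More text.</p>
  <h5>Title[2]</h5>
  <p>Description.</p>
</div>
""")

data = []
for h5 in doc.xpath('//h5'):
    texts = []
    for sibling in h5.itersiblings():
        if sibling.tag != 'p':
            break  # stop at the first non-<p> sibling, e.g. the next heading
        texts.append(sibling.text_content())
    data.append({'title': h5.text, 'text': '\n'.join(texts)})
```

This avoids the XPath position arithmetic entirely, at the cost of an explicit Python loop.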