I am new to the world of HTML scraping and am having difficulty pulling in paragraphs under particular headings, using rvest in R.
I want to scrape info from multiple sites that all have a relatively similar set-up. They all have the same headings, but the number of paragraphs under a heading can change. I was able to scrape specific paragraphs under a heading with the following code:
    library(rvest)

    unitCode <- data.frame(unit = c('SLE010', 'SLE115', 'MAA103'))

    # build one URL per unit code
    html <- sapply(unitCode, function(x) paste("http://www.deakin.edu.au/current-students/courses/unit.php?unit=",
                                               x,
                                               "&return_to=%2Fcurrent-students%2Fcourses%2Fcourse.php%3Fcourse%3DS323%26version%3D3",
                                               sep = ''))

    # html[3] is the MAA103 URL; p[3] is the first paragraph
    # under the Assessment heading on that page
    assessment <- html[3] %>%
      html() %>%
      html_nodes(xpath = '//*[@id="main"]/div/div/p[3]') %>%
      html_text()
The 'xpath' argument pulls in the first paragraph under the Assessment heading. Some of the pages have multiple paragraphs under that heading, which I can get if I change the xpath to target them explicitly, e.g. p[4] or p[5]. Unfortunately I want to iterate this process over hundreds of pages, so changing the xpath each time isn't practical, and I don't even know how many paragraphs there will be on each page.
Given the uncertainty around the set-up of the pages, I think pulling in every <p> after the heading I am interested in is the best option.
I was wondering if there is a way to scrape all <p> after <h3>Assessment</h3> using rvest or some other R scraping package?
I expanded this out only for demo purposes; you should be able to apply it to your original code. It's really not a good idea to overwrite names in namespaces you end up using (your html object masks rvest's html()). Also note that I'm using the latest (github/devtools) version of rvest, which uses xml2 and deprecates html() in favour of read_html(). The key is the XPath expression

    //h3[contains(., 'Assessment')]/following-sibling::p

which selects every <p> sibling that follows the Assessment heading.
You can probably use that <p style="margin-top: 2em;"> as a marker to stop at, too. You should check out xml2's as_list() to help.
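If the page structure is regular, a sibling-axis variant can also serve as the stop; a minimal sketch, assuming the <h3> headings and their <p> paragraphs share the same parent element (it reuses unit_url() and the libraries from the sketch above):

    # keep only <p> nodes whose *nearest* preceding <h3> sibling is the
    # Assessment heading -- the selection then ends at the next <h3>
    # automatically, with no hard-coded paragraph indices
    pg <- read_html(unit_url("SLE010"))
    assessment <- pg %>%
      html_nodes(xpath = "//p[preceding-sibling::h3[1][contains(., 'Assessment')]]") %>%
      html_text()

    # xml2::as_list() converts the parsed page into nested R lists, which
    # makes the heading/paragraph layout easy to inspect before writing XPath
    str(as_list(pg), max.level = 3)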