Web scraping
Learning goals
After this lesson, you should be able to:
- Use CSS Selectors and the Selector Gadget tool to locate data of interest within a webpage
- Use the `html_elements()` and `html_text()` functions within the `rvest` package to scrape data from a webpage using CSS selectors
You can download a template Quarto file to start from here. Put this file in a folder called data_acquisition
within a folder for this course.
Web scraping
We have talked about how to acquire data from APIs. Whenever an API is available for your project, you should default to getting data from the API. Sometimes an API will not be available, and web scraping is another means of getting data.
Web scraping describes the use of code to extract information displayed on a web page. In R, the `rvest` package offers tools for scraping. (`rvest` is meant to sound like "harvest".)
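If `rvest` is not already on your machine, it can be installed and loaded like any other CRAN package (standard setup, not specific to this lesson):

```r
# Install once, then load in each session
install.packages("rvest")
library(rvest)
```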
Additional readings:
Scraping ethics
robots.txt
`robots.txt` is a file that some websites publish to clarify what can and cannot be scraped, along with other constraints on scraping. When a website publishes this file, we need to comply with the information in it for ethical and legal reasons.
We will look through the information in this tutorial and apply it to the NIH `robots.txt` file.
From our investigation of the NIH `robots.txt`, we learn:
- `User-agent: *`: anyone is allowed to scrape
- `Crawl-delay: 2`: we need to wait 2 seconds between each page scraped
- No `Visit-time` entry: there are no restrictions on the time of day that scraping is allowed
- No `Request-rate` entry: there are no restrictions on simultaneous requests
- No mention of `?page=`, `news-events`, `news-releases`, or `https://science.education.nih.gov/` in the `Disallow` sections. (This is what we want to scrape today.)
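One way to view a `robots.txt` file without leaving R is to read it as plain text; a minimal sketch using base R's `readLines()`:

```r
# Print the NIH robots.txt rules to the console
robots <- readLines("https://www.nih.gov/robots.txt")
cat(robots, sep = "\n")
```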
Further considerations
The article Ethics in Web Scraping describes some good principles to ensure that we value the labor that website owners invested in providing data and that we create good from the information we do scrape.
HTML structure
HTML (hypertext markup language) is the formatting language used to create webpages. Let’s look at the core parts of HTML from the rvest vignette.
Finding CSS Selectors
In order to gather information from a webpage, we must learn the language used to identify patterns of specific information. For example, on the NIH News Releases page, we can see that the data is represented in a consistent pattern of image + title + abstract.
We will identify data in a web page using a pattern matching language called CSS Selectors that can refer to specific patterns in HTML, the language used to write web pages.
For example:
- Selecting by tag:
  - `"a"` selects all hyperlinks in a webpage ("a" represents "anchor" links in HTML)
  - `"p"` selects all paragraph elements
- Selecting by ID and class:
  - `".description"` selects all elements with `class` equal to "description".
    - The `.` at the beginning is what signifies `class` selection.
    - This is one of the most common CSS selectors for scraping because, in HTML, the `class` attribute is extremely commonly used to format webpage elements. (Any number of HTML elements can have the same `class`, which is not true for the `id` attribute.)
  - `"#mainTitle"` selects the SINGLE element with `id` equal to "mainTitle".
    - The `#` at the beginning is what signifies `id` selection.

For example, in the following HTML, the selector `".description"` would match the second and fourth paragraph elements:

```html
<p class="title">Title of resource 1</p>
<p class="description">Description of resource 1</p>
<p class="title">Title of resource 2</p>
<p class="description">Description of resource 2</p>
```
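To experiment with these selectors without scraping a live site, we can build a tiny in-memory page with rvest's `minimal_html()`. This sketch combines the example HTML above with a hypothetical heading carrying the `mainTitle` id from the list:

```r
library(rvest)

page <- minimal_html('
  <h1 id="mainTitle">Resources</h1>
  <p class="title">Title of resource 1</p>
  <p class="description">Description of resource 1</p>
  <p class="title">Title of resource 2</p>
  <p class="description">Description of resource 2</p>
')

html_elements(page, "p") %>% html_text()             # all four paragraphs
html_elements(page, ".description") %>% html_text()  # the two descriptions
html_elements(page, "#mainTitle") %>% html_text()    # the single heading
```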
Warning: Websites change often! So if you are going to scrape a lot of data, it is probably worthwhile to save and date a copy of the website. Otherwise, you may return after some time and your scraping code will include all of the wrong CSS selectors.
Although you can learn how to use CSS Selectors by hand, we will use a shortcut by installing the Selector Gadget tool.
- There is a version available for Chrome: add it to Chrome via the Chrome Web Store.
- Make sure to pin the extension to the menu bar. (Click the 3 dots > Extensions > Manage extensions. Click the “Details” button under SelectorGadget and toggle the “Pin to toolbar” option.)
- There is also a version that can be saved as a bookmark in the browser–see here.
Let’s watch the Selector Gadget tutorial video before proceeding.
Head over to the NIH News Releases page. Click the Selector Gadget extension icon or bookmark button. As you mouse over the webpage, different parts will be highlighted in orange. Click on the title of the first news release. You’ll notice that the Selector Gadget information in the lower right describes what you clicked on.
Scroll through the page to verify that only the information you intend (the article titles) is selected. The selector panel shows the CSS selector (`.teaser-title`) and the number of matches for that CSS selector (10). (You may have to be careful with your clicking: there are two overlapping boxes, and clicking on the link of the title can lead to the CSS selector of "a".)
Exercise: Repeat the process above to find the correct selectors for the following fields. Make sure that each matches 10 results:
- The publication date
- The article abstract paragraph (which will also include the publication date)
Retrieving Data Using rvest and CSS Selectors
Now that we have identified CSS selectors for the information we need, let's fetch the data using the `rvest` package.
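First, we download and parse the page with `read_html()`:

```r
library(rvest)

nih <- read_html("https://www.nih.gov/news-events/news-releases")
```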
Once the webpage is loaded, we can retrieve data using the CSS selectors we specified earlier. The following code retrieves the article titles:
```r
# Retrieve and inspect the article titles
article_titles <- nih %>%
  html_elements(".teaser-title") %>%
  html_text()

head(article_titles)
```

```
[1] "Short-term incentives for exercise can lead to sustained increases in activity"
[2] "Scientists discover potential treatment approaches for polycystic kidney disease"
[3] "Drug shows promise for slowing progression of rare, painful genetic disease"
[4] "Irregular sleep and late bedtimes associated with worse grades for high school students"
[5] "NIH selects Dr. Kathleen Neuzil as director of the Fogarty International Center and NIH associate director for international research"
[6] "Analysis of social media language using AI models predicts depression severity for white Americans, but not Black Americans"
```
Exercise: Our goal is to get article titles, publication dates, and abstract text for news releases across several pages of results. Before doing any coding, plan your approach. What functions will you write? What arguments will they have? How will you use your functions? Consult with your peers and compare plans.
Write a few observations about your comfort/confidence in this planning exercise. As you proceed through the implementation of this plan in the next steps, make notes about any places you struggled, were uncertain, or benefited from peer input.
Exercise: Carry out your plan to get the article title, publication date, and abstract text for the first 5 pages of news releases in a single data frame. You will need to write at least one function, and you will need iteration; use both a `for` loop and appropriate `map_()` functions from `purrr`. Notes:
- Mouse over the page buttons at the very bottom of the news home page to see what the URLs look like.
- The abstract should not have the publication date; use `stringr` and regular expressions to remove it. (See the sketch after this list.)
- Include `Sys.sleep(2)` in your function to respect the `Crawl-delay: 2` in the NIH `robots.txt` file.
- Recall that `bind_rows()` from `dplyr` takes a list of data frames and stacks them on top of each other.
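As a hint for the date-removal note, here is a minimal `stringr` sketch on a made-up abstract string (the "date — text" dateline format is assumed from the NIH page):

```r
library(stringr)

# Hypothetical abstract with a leading publication-date dateline
abstract <- "January 3, 2024 — Adults who received daily rewards became more active."
trimws(str_remove(abstract, "^.+—"))  # drop everything through the em dash
```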
Solution
```r
# Helper function to reduce html_elements() %>% html_text() code duplication
get_text_from_page <- function(page, css_selector) {
  page %>%
    html_elements(css_selector) %>%
    html_text()
}

# Scrape the titles, dates, and abstracts from one page of results
scrape_page <- function(url) {
  Sys.sleep(2)  # Respect the Crawl-delay: 2 in the NIH robots.txt
  page <- read_html(url)
  article_titles <- get_text_from_page(page, ".teaser-title")
  article_dates <- get_text_from_page(page, ".date-display-single")
  article_abstracts <- get_text_from_page(page, ".teaser-description")
  # Remove the leading publication date (everything through the em dash)
  article_abstracts <- str_remove(article_abstracts, "^.+—") %>% trimws()
  tibble(
    title = article_titles,
    date = article_dates,
    abstract = article_abstracts
  )
}
```
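Before iterating over all five pages, it is worth testing the function on a single page, e.g.:

```r
# Should return a 10-row tibble with title, date, and abstract columns
scrape_page("https://www.nih.gov/news-events/news-releases")
```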
Using a for loop:

```r
pages <- vector("list", length = 5)

for (i in 1:5) {
  base_url <- "https://www.nih.gov/news-events/news-releases"
  if (i == 1) {
    url <- base_url
  } else {
    url <- str_c(base_url, "?page=", i - 1)
  }
  pages[[i]] <- scrape_page(url)
}

df_articles <- bind_rows(pages)
head(df_articles)
```
Using `purrr::map()`:

```r
# Create a character vector of URLs for the first 5 pages
base_url <- "https://www.nih.gov/news-events/news-releases"
urls_all_pages <- c(base_url, str_c(base_url, "?page=", 1:4))

pages2 <- purrr::map(urls_all_pages, scrape_page)
df_articles2 <- bind_rows(pages2)
head(df_articles2)
```
Example 2: NIH STEM Teaching Resources
Let’s look at a more complex example with the NIH STEM Teaching Resources webpage.
- Using Selector Gadget to select the resource titles ends up being tricky because we can only get one resource title at a time.
- In Chrome, you can right click part of a web page and click “Inspect”. This opens up Chrome’s Developer Tools. Mousing over the HTML in the top right panel highlights the corresponding part of the web page.
- For non-Chrome browsers, use the Help menu to search for Developer Tools.
The underlying HTML used to create a web page is also called the page source code or page source. From the source we learn that the resource titles are `<h4>` headings that have the class `resource-title`. We can infer that `.resource-title` would be the CSS selector for the resource titles.
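Putting this together, a minimal sketch of scraping the resource titles (assuming the page keeps the structure described above and its robots.txt rules still allow scraping):

```r
library(rvest)

stem <- read_html("https://science.education.nih.gov/")
resource_titles <- stem %>%
  html_elements(".resource-title") %>%
  html_text()
head(resource_titles)
```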