Say I have a list of counties with varying degrees of spelling errors or other issues that differentiate them from the 2010 FIPS dataset (code to create the fips dataframe is below), but the states in which the misspelled counties reside are entered correctly. Here's a sample of 21 random observations from my full dataset:
tomatch <- structure(list(county = c("Beauregard", "De Soto", "Dekalb", "Webster",
"Saint Joseph", "West Feliciana", "Ketchikan Gateway", "Evangeline",
"Richmond City", "Saint Mary", "Saint Louis City", "Mclean",
"Union", "Bienville", "Covington City", "Martinsville City",
"Claiborne", "King And Queen", "Mclean", "Mcminn", "Prince Georges"
), state = c("LA", "LA", "GA", "LA", "IN", "LA", "AK", "LA", "VA",
"LA", "MO", "KY", "LA", "LA", "VA", "VA", "LA", "VA", "ND", "TN",
"MD")), .Names = c("county", "state"), class = c("tbl_df", "data.frame"
), row.names = c(NA, -21L))
county state
1 Beauregard LA
2 De Soto LA
3 Dekalb GA
4 Webster LA
5 Saint Joseph IN
6 West Feliciana LA
7 Ketchikan Gateway AK
8 Evangeline LA
9 Richmond City VA
10 Saint Mary LA
11 Saint Louis City MO
12 Mclean KY
13 Union LA
14 Bienville LA
15 Covington City VA
16 Martinsville City VA
17 Claiborne LA
18 King And Queen VA
19 Mclean ND
20 Mcminn TN
21 Prince Georges MD
I've used adist to create a fuzzy string matching algorithm that matches around 80% of my counties to the county names in fips. However, it sometimes matches two counties with similar spellings but from different states (e.g., "Webster, LA" gets matched to "Webster, GA" rather than "Webster Parish, LA").
# Distance matrix: one row per county in tomatch, one column per FIPS county
distance <- adist(tomatch$county,
                  fips$countyname,
                  partial = TRUE)

# Smallest distance found for each county in tomatch
min.name <- apply(distance, 1, min)

matchedcounties <- NULL
for (i in 1:nrow(distance)) {
  # Index of the first FIPS county achieving that minimum distance
  s2.i <- match(min.name[i], distance[i, ])
  s1.i <- i
  matchedcounties <- rbind(data.frame(s2.i = s2.i,
                                      s1.i = s1.i,
                                      s1name = tomatch[s1.i, ]$county,
                                      s2name = fips[s2.i, ]$countyname,
                                      adist = min.name[i]),
                           matchedcounties)
}
Therefore, I want to restrict the fuzzy string matching of each county to the correctly spelled versions with a matching state. My current algorithm builds one big matrix of standard Levenshtein distances between both sources and then selects the value with the minimum distance. To solve my problem, I'm guessing I'd need to create a function that could be applied to each 'state' group by ddply, but I'm confused as to how I should indicate that the group value in the ddply function should match another dataframe. A dplyr solution, or a solution using any other package, would be appreciated as well.
Code to create FIPS dataset:
# download the Census Bureau's national county FIPS file
download.file('http://www2.census.gov/geo/docs/reference/codes/files/national_county.txt',
              './nationalfips.txt')
fips <- read.csv('./nationalfips.txt',
                 stringsAsFactors = FALSE, colClasses = 'character', header = FALSE)
names(fips) <- c('state', 'statefips', 'countyfips', 'countyname', 'classfips')
# remove 'County' from countyname
fips$countyname <- sub('County', '', fips$countyname, fixed = TRUE)
fips$countyname <- stringr::str_trim(fips$countyname)
Here's a way with dplyr. One thing I noticed is that a lot of counties are written as 'St.' rather than 'Saint' in the FIPS dataset; cleaning that up first should improve the results. After that, I join the tomatch data.frame with the FIPS names by state, so that only in-state matches are allowed:
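A sketch of those two steps (the candidates name is my own):

library(dplyr)

# FIPS writes 'St.' where tomatch writes 'Saint'; align the spellings first
tomatch$county <- gsub('Saint', 'St.', tomatch$county, fixed = TRUE)

# Join by state so each county is only compared to candidates from its own state
candidates <- tomatch %>%
  left_join(fips, by = 'state')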
Then, group this data.frame by county and calculate the distance with adist:
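For example (grouping by state as well keeps identically named counties in different states apart; dist is my own column name):

matched <- candidates %>%
  group_by(county, state) %>%
  mutate(dist = diag(adist(county, countyname, partial = TRUE)))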
Note that I took the diagonal of the resulting matrix: adist returns an n x m matrix, with n representing the x vector and m representing the y vector (it calculates all combinations of the two). Optionally, you could add the agrep result:
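Something like this, using base R's agrepl for a logical approximate-match flag, applied element-wise with mapply (agrep_match is my own column name):

matched <- matched %>%
  mutate(agrep_match = mapply(agrepl, county, countyname))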
Then filter as you did before, taking the minimum distance:
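For instance:

matched <- matched %>%
  filter(dist == min(dist))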
Note that this could result in more than one row returned for each of the input rows in tomatch, since ties on the minimum distance are all kept. Alternatively, do it all in one run (I usually change code to this format once I'm confident it's doing what it's supposed to do):
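A sketch of the combined pipeline, under the same assumptions as above:

result <- tomatch %>%
  mutate(county = gsub('Saint', 'St.', county, fixed = TRUE)) %>%
  left_join(fips, by = 'state') %>%
  group_by(county, state) %>%
  mutate(dist = diag(adist(county, countyname, partial = TRUE))) %>%
  filter(dist == min(dist)) %>%
  select(county, state, countyname, countyfips, dist)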
The result in both cases is the same: each county in tomatch is matched to the closest FIPS county name within its own state.