Fuzzy-match and extract words repeated across turns in conversation


I'm working with speech data from conversational speaking turns and want to extract words that are repeated across turns. The task I'm grappling with is extracting words that are inexactly repeated.

Data:

X <- data.frame(
  speaker = c("A","B","A","B"),
  speech = c("i'm gonna take a look you okay with that", 
             "sure looks good we can take a look you go first",
             "okay last time I looked was different i think that is it yeah",
             "yes you're right i think that's it"), stringsAsFactors = F
)

I have a for loop that successfully extracts exact repetitions:

# initialize vectors:
pattern1 <- c()
extracted1 <- c()

# run `for` loop:
library(stringr)
for(i in 2:nrow(X)){
  # define each `speech` element as a pattern for the next `speech` element:
  pattern1[i-1] <- paste0("\\b(", paste0(unlist(str_split(X$speech[i-1], " ")), collapse = "|"), ")\\b")
  # extract all matched words:
  extracted1[i] <- str_extract_all(X$speech[i], pattern1[i-1])
}
# result:
extracted1
[[1]]
NULL

[[2]]
[1] "take" "a"    "look" "you" 

[[3]]
character(0)

[[4]]
[1] "i"     "think" "that"  "it"

However, I also want to extract inexact repetitions. For example, looks in row #2 is an inexact repetition of look in row #1, looked in row #3 fuzzily repeats looks in row #2, and yes in row #4 is an approximate match of yeah in row #3. I've recently come across agrep, which is used for approximate matching, but I don't know how to use it here or whether it's the right way to go at all. Any help is greatly appreciated.
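
From reading ?agrep, my understanding is that it does approximate substring matching within a maximum edit distance, e.g. (this is just me exploring the function, not part of my pipeline):

# agrep returns the indices of elements that approximately contain the pattern:
agrep("look", c("looks", "looked", "different"), max.distance = 1)
[1] 1 2
agrep("yeah", c("yes", "you're", "right"), max.distance = 2)
[1] 1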

Note that the actual data comprises thousands of speaking turns with highly unpredictable content, so it isn't possible to define a list of all possible variants beforehand.

Accepted answer by JBGruber:

I think this can be done really well with a tidy approach. The part you have already solved can be reproduced (probably much faster) using tidytext:

library(tidytext)
library(tidyverse)
# transform text into a tidy format
x_tidy <- X %>% 
  mutate(id = row_number()) %>% 
  unnest_tokens(output = "word", input = "speech")
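
# for reference: the tidy format has one row per token
# (unnest_tokens lowercases and keeps in-word apostrophes by default)
head(x_tidy, 4)
#>     speaker id  word
#> 1         A  1   i'm
#> 1.1       A  1 gonna
#> 1.2       A  1  take
#> 1.3       A  1     a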

# join the data.frame with itself, shifted by one id
x_tidy %>% 
  mutate(id_last = id - 1) %>% 
  semi_join(x_tidy, by = c("id_last" = "id", "word" = "word"))
#>     speaker id  word id_last
#> 2.5       B  2  take       1
#> 2.6       B  2     a       1
#> 2.7       B  2  look       1
#> 2.8       B  2   you       1
#> 4.3       B  4     i       3
#> 4.4       B  4 think       3
#> 4.6       B  4    it       3

But of course what you want to do is a bit more complex. The example words you highlight are not exactly the same but have a Levenshtein distance of up to 2:

adist(c("look", "looks", "looked"))
#>      [,1] [,2] [,3]
#> [1,]    0    1    2
#> [2,]    1    0    2
#> [3,]    2    2    0
adist(c("yes", "yeah"))
#>      [,1] [,2]
#> [1,]    0    2
#> [2,]    2    0

There is a great package for this, fuzzyjoin, which follows the same tidyverse logic. Unfortunately, the by argument of the respective function does not seem to be able to handle two columns (or it applies the fuzzy matching to both columns, so ids 0 and 2 are treated as the same?), so this does not work:

x_tidy %>% 
  mutate(id_last = id - 1) %>% 
  fuzzyjoin::stringdist_semi_join(x_tidy, by = c("word" = "word", "id_last" = "id"), max_dist = 2)

However, using a loop we can implement the missing functionality anyway:

library(fuzzyjoin)
# for each turn, fuzzy-match its words against the words of the previous turn
map_df(unique(x_tidy$id), function(i) {
  current <- x_tidy %>% 
    filter(id == i)
  last <- x_tidy %>% 
    filter(id == i - 1)
  
  # keep rows of `current` whose word is within edit distance 2
  # of at least one word in `last`
  current %>%
    fuzzyjoin::stringdist_semi_join(last, by = "word", max_dist = 2)
})
#>      speaker id   word
#> 2.1        B  2  looks
#> 2.2        B  2   good
#> 2.3        B  2     we
#> 2.4        B  2    can
#> 2.5        B  2   take
#> 2.6        B  2      a
#> 2.7        B  2   look
#> 2.8        B  2    you
#> 2.9        B  2     go
#> 3.2        A  3   time
#> 3.3        A  3      i
#> 3.4        A  3 looked
#> 3.5        A  3    was
#> 3.7        A  3      i
#> 3.10       A  3     is
#> 3.11       A  3     it
#> 4          B  4    yes
#> 4.3        B  4      i
#> 4.4        B  4  think
#> 4.5        B  4 that's
#> 4.6        B  4     it

Created on 2021-04-22 by the reprex package (v2.0.0)
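
A max_dist of 2 is quite permissive for short words, though; good, for example, gets through because it is within two edits of look from the previous turn:

adist(c("look", "good"))
#>      [,1] [,2]
#> [1,]    0    2
#> [2,]    2    0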

I'm not sure how ideal the distance is in your case and whether you consider the results correct. Alternatively, you can try stemming or lemmatization before matching, which might work better (see the sketch below). I also wrote a new function for the package implementing a stringsim_join version, which takes the length of the words you are trying to match into account. But the PR hasn't been approved yet.
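
For instance, a stemmed version of the exact join above could look like this (just a sketch, assuming the SnowballC package; its wordStem maps look, looks, and looked all to the same stem):

library(SnowballC)
# stem each token, then run the exact self-join on stems instead of raw words
x_stemmed <- x_tidy %>% 
  mutate(stem = wordStem(word))

x_stemmed %>% 
  mutate(id_last = id - 1) %>% 
  semi_join(x_stemmed, by = c("id_last" = "id", "stem" = "stem"))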