I am doing some computationally demanding operations in R, so I am looking for the most efficient way to do them. My question is:
- Why is creating a data.frame seemingly faster than creating a matrix? In my understanding, the general consensus is that matrices are faster than data.frames when all of the data is of the same type, which is not the case here.
library(dplyr)
library(igraph)
library(bench)
set.seed(123)
edgelist <- data.frame(
  node1 = sample(1:2000, 11000, replace = T),
  node2 = sample(1:2000, 11000, replace = T),
  weight = runif(11000, min = 0, max = 5)
)
g <- graph_from_data_frame(edgelist, directed = F)
#Data.frame
dat <- function() {
  dm <- distances(g, weight = E(g)$weight)
  UTIndex <- which(upper.tri(dm), arr.ind = T)
  df1 <- data.frame(
    verticeA = as.numeric(rownames(dm)[UTIndex[, 1]]),
    verticeB = as.numeric(colnames(dm)[UTIndex[, 2]]),
    path_length = as.numeric(dm[UTIndex])
  )
}
#Matrix
mat <- function() {
  dm <- distances(g, weight = E(g)$weight)
  UTIndex <- which(upper.tri(dm), arr.ind = T)
  df1 <- cbind(
    verticeA = as.numeric(rownames(dm)[UTIndex[, 1]]),
    verticeB = as.numeric(colnames(dm)[UTIndex[, 2]]),
    path_length = as.numeric(dm[UTIndex])
  )
}
####
results <- bench::mark(
  dat = dat(),
  mat = mat(),
  check = F
)
t1 <- system.time({
  df1 <- dat()
})
rm(df1)
t2 <- system.time({
  df1 <- mat()
})
rm(df1)
Here is the output of t1, t2, and results:
> results
# A tibble: 2 × 13
  expression      min   median `itr/sec` mem_alloc `gc/sec` n_itr  n_gc total_time
  <bch:expr> <bch:tm> <bch:tm>     <dbl> <bch:byt>    <dbl> <int> <dbl>   <bch:tm>
1 dat           2.89s    2.89s     0.346     269MB     1.39     1     4      2.89s
2 mat           2.83s    2.83s     0.353     315MB     1.41     1     4      2.83s
# ℹ 4 more variables: result <list>, memory <list>, time <list>, gc <list>
> t1
   user  system elapsed
   2.78    0.04    2.81
> t2
   user  system elapsed
   3.12    0.08    3.21
When benchmarking, you are not sufficiently isolating the element you are actually investigating (the difference between the data.frame() and cbind() calls); the vast majority of the computation in either test is everything other than data.frame() and cbind(), and you are running relatively few iterations, which means you can mistake random variation for a significant difference.
In the following I factor out the common parts, which are irrelevant to the benchmark, and retain only the relevant parts; I then go further and open a discussion of R memory management.
Let's start with the code rewrite:
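One way to do this, sketched below, is to compute the distance matrix, the upper-triangle index, and the three column vectors once, and then time only the two constructor calls. The setup mirrors the question's code; treat the exact rewrite as illustrative rather than the only way to isolate the calls.

library(igraph)
library(bench)

set.seed(123)
edgelist <- data.frame(
  node1 = sample(1:2000, 11000, replace = TRUE),
  node2 = sample(1:2000, 11000, replace = TRUE),
  weight = runif(11000, min = 0, max = 5)
)
g <- graph_from_data_frame(edgelist, directed = FALSE)

# Do the expensive, shared work once, outside the benchmark
dm      <- distances(g, weights = E(g)$weight)
UTIndex <- which(upper.tri(dm), arr.ind = TRUE)
verticeA    <- as.numeric(rownames(dm)[UTIndex[, 1]])
verticeB    <- as.numeric(colnames(dm)[UTIndex[, 2]])
path_length <- as.numeric(dm[UTIndex])

# Time only the construction of the final object
bench::mark(
  dat = data.frame(verticeA, verticeB, path_length),
  mat = cbind(verticeA, verticeB, path_length),
  check = FALSE
)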
We can see from this that the data.frame() call itself is an order of magnitude faster than cbind(). I think the reason is memory layout: for data.frame(), R simply needs to build a list and associate the existing vectors with names; R is copy-on-write and otherwise works by reference, so constructing the data.frame is essentially a trivial change to metadata. cbind(), by contrast, creates a matrix, which is essentially a single contiguous vector, and therefore must copy all of the data and lay it out in that vector.
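If you want to check whether the column data is actually shared or copied, one way is to compare object addresses with the lobstr package. This is a toy sketch on fresh vectors, and the exact copying behaviour can vary between R versions:

library(lobstr)  # provides obj_addr()

x <- runif(1e6)
y <- runif(1e6)

d <- data.frame(a = x, b = y)
m <- cbind(a = x, b = y)

obj_addr(x)    # address of x's underlying data
obj_addr(d$a)  # if this matches obj_addr(x), the data.frame column is
               # referencing x's memory, i.e. nothing was copied
obj_addr(m)    # the matrix is a single new vector, so this is always a fresh
               # allocation into which x and y have been copied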
I also add a variation where, after the initial pure data.frame() and cbind() calls, we radically alter the objects (setting every entry to 1). Now, in both cases, R actually has to write to memory, both are slowed down, and data.frame performs worse.
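A minimal sketch of that variation, reusing the precomputed vectors from the rewrite above; here `[] <- 1` is used to overwrite every entry, which is one way to force the writes (the exact mutation used is an illustrative choice):

bench::mark(
  dat_write = {
    d <- data.frame(verticeA, verticeB, path_length)
    d[] <- 1    # overwrite every entry; each column must now be materialised
    d
  },
  mat_write = {
    m <- cbind(verticeA, verticeB, path_length)
    m[] <- 1    # overwrite every entry of the matrix's single backing vector
    m
  },
  check = FALSE
)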