DuckDB + dplyr (R)

Use a familiar R frontend

This is a direct copy of the resources from Grant McDermott, so none of the credit is mine. It is archived here purely for reference.

Load libraries

library(duckdb)
library(dplyr, warn.conflicts = FALSE)
library(tidyr, warn.conflicts = FALSE)
Create a database connection
For the d(b)plyr workflow, the connection step is very similar to the pure SQL approach. The only difference is that, after instantiating the database connection, we need to register our parquet dataset as a table in our connection via the dplyr::tbl() function. Note that we also assign it to an object (here: nyc) that can be referenced from R.
## Instantiate the in-memory DuckDB connection
con = dbConnect(duckdb(), shutdown = TRUE)

## Register our parquet dataset as a table in our connection (and assign it
## to an object that R understands)
# nyc = tbl(con, "nyc-taxi/**/*.parquet") # works, but safer to use the read_parquet func
nyc = tbl(con, "read_parquet('nyc-taxi/**/*.parquet', hive_partitioning = true)")
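As a quick sanity check (my addition, not in the original notes), we can peek at a few rows to confirm that the lazy table is registered correctly, without reading the full dataset into memory.

## Sketch only: preview the first few rows of the registered parquet dataset
nyc |>
  head(5) |>
  collect()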
First example
This next command runs instantly because all computation is deferred (i.e., lazy eval). In other words, it is just a query object.
q1 = nyc |>
  summarize(
    mean_tip = mean(tip_amount),
    .by = passenger_count
  )
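As an aside (not in the original notes), we can confirm that q1 really is just a lazy query object rather than a materialised data frame:

## Should report a lazy tbl class, not "data.frame"
class(q1)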
.by versus group_by

In case you weren't aware: summarize(..., .by = x) is a shorthand (and non-persistent) version of group_by(x) |> summarize(...). More details here.
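For illustration (again, my addition rather than the original notes), the persistent group_by() version of q1 would look like this:

## Equivalent query using an explicit (persistent) grouping
q1_alt = nyc |>
  group_by(passenger_count) |>
  summarize(mean_tip = mean(tip_amount))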
We can see what DuckDB’s query tree looks like by asking it to explain the plan
explain(q1)
Similarly, we can show the SQL translation that will be implemented on the backend with show_query().
show_query(q1)
Note that printing the query object actually does enforce some computation. OTOH it’s still just a preview of the data (we haven’t pulled everything into R’s memory).
q1
To actually pull all of the result data into R, we must call collect() on the query object.
tic = Sys.time()
dat1 = collect(q1)
toc = Sys.time()

dat1
toc - tic
Aggregation
Here’s our earlier filtering example with multiple grouping + aggregation variables…
q2 = nyc |>
  filter(month <= 3) |>
  summarize(
    across(c(tip_amount, fare_amount), mean),
    .by = c(month, passenger_count)
  )
q2
Aside: note that the optimised query includes hash groupings and projection (basically: fancy column subsetting, which is a surprisingly effective strategy in query optimization).
explain(q2)
And our high-dimensional aggregation example. We'll create a query object for this first, since I'll reuse it again shortly.
q3 = nyc |>
  group_by(passenger_count, trip_distance) |>
  summarize(
    across(c(tip_amount, fare_amount), mean),
  )

collect(q3)
Pivot (reshape)
# library(tidyr) ## already loaded
q3 |>
  pivot_longer(tip_amount:fare_amount) |>
  collect()
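One more illustration of my own (not in the original notes): the reverse reshape with pivot_wider() also works on the lazy table, although dbplyr first has to run a small query to determine the new column names.

## Sketch only: pivot the long result back to wide format
q3 |>
  pivot_longer(tip_amount:fare_amount) |>
  pivot_wider(names_from = name, values_from = value) |>
  collect()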
Joins (merges)
mean_tips  = nyc |> summarise(mean_tips  = mean(tip_amount), .by = month)
mean_fares = nyc |> summarise(mean_fares = mean(fare_amount), .by = month)
Again, these commands complete instantly because all computation has been deferred until absolutely necessary (i.e., lazy eval).
left_join(
  mean_fares,
  mean_tips
  ) |>
  collect()
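A small addition of my own (not in the original notes): making the join key explicit guards against accidentally joining on other shared columns.

## Sketch only: same join, but with the key spelled out
left_join(
  mean_fares,
  mean_tips,
  by = "month"
  ) |>
  collect()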
Windowing
If you recall from the native SQL API, we sampled 1 percent of the data before creating decile bins, to reduce the computational burden of sorting the entire table. Unfortunately, this approach doesn't work as well for the dplyr frontend, because the underlying SQL translation uses a generic sampling approach (rather than DuckDB's optimised USING SAMPLE statement).
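One possible workaround (my own sketch, not from the original notes) is to take the sample with a raw SQL snippet, so that DuckDB can use its native USING SAMPLE clause, and then continue in dplyr. The 1 percent sample and the ntile() binning on fare_amount below are illustrative assumptions rather than the original windowing example.

## Sketch only: sample natively in SQL, then window + aggregate via dplyr
nyc_sampled = tbl(con, sql(
  "SELECT * FROM read_parquet('nyc-taxi/**/*.parquet', hive_partitioning = true) USING SAMPLE 1%"
))

nyc_sampled |>
  mutate(decile = ntile(fare_amount, 10)) |>   # decile bins via a window function
  summarize(mean_tip = mean(tip_amount), .by = decile) |>
  arrange(decile) |>
  collect()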
Close connection
dbDisconnect(con)
Footnotes
“Similar” might be a better description than “integrated”, since DuckDB does not use the Arrow memory model. But they are both columnar-orientated (among other things) and so the end result is pretty seamless integration.