Reputation: 227
I'm creating an RMarkdown file with this general structure:
The script runs just fine for my analysis, but I do get an error output underneath the SQL chunks dedicated to merging tables and exporting to CSV. Also, when I try to knit to PDF, it halts on those chunks and outputs a similar error. Below is a simplified sample of my work and the errors it produces. Please help me figure out how best to handle this error so I can knit the file. After doing some research, I'm still not quite sure what it means.
Chunk that merges my tables and gets everything ready to export:
```{sql, connection = db}
DROP TABLE IF EXISTS food_table;
WITH subquery AS (
  SELECT
    menu.recipes,
    menu.menu_price AS price
  FROM ingredients
  INNER JOIN menu ON ingredients.id = menu.recipeid
  UNION
  SELECT
    substitute AS recipes,
    portion * unitp AS price
  FROM menu
)
SELECT
  recipes.id,
  ingredients.id
INTO food_table
FROM menu
```
Chunk that exports into CSV for later use:
```{sql, connection = db}
COPY food_table TO '/Users/Name/Desktop/Project/food_table.csv' DELIMITER ',' CSV HEADER
```
Error output when running either of these chunks:

```
Error in postgresqlFetch(res, n, ...) :
  RS-DBI driver: (resultSet does not correspond to a SELECT statement)
Failed to execute SQL chunk
```
Error output when knitting to PDF:

```
Error in postgresqlFetch(res, n, ...) :
  RS-DBI driver: (resultSet does not correspond to a SELECT statement)
CALLS: ... -> fetch -> fetch -> postgresqlFetch -> .Call
Execution halted
```
Upvotes: 1
Views: 1659
Reputation: 2345
The knitr SQL engine currently looks for the keywords `INSERT`, `UPDATE`, `DELETE` or `CREATE` at the start of the query to decide whether it should be run in the background, and otherwise defaults to treating it as a `SELECT` query fetched with `DBI::dbFetch`. In other words, judging from your error, it expects your query to produce a result set (i.e. return data) and fails when it doesn't.
For instance, if you were to read `food_table` directly from the database then the chunk would work. Replace your second chunk with:
```{sql read_food_table, connection = db, output.var = "food_table"}
SELECT * FROM food_table
```
and the object `food_table` is now available as an R data object.
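Alternatively, you can bypass the SQL engine's fetch step entirely by running statements that return no rows from an ordinary `r` chunk. A minimal sketch, assuming your existing `db` connection and a DBI-compatible driver (with the older RPostgreSQL package your mileage may vary, as noted below):

```r
library(DBI)

# dbExecute() sends the statement and returns the number of affected rows,
# without ever attempting to fetch a result set -- so it cannot trigger the
# "resultSet does not correspond to a SELECT statement" error.
dbExecute(db, "
  COPY food_table TO '/Users/Name/Desktop/Project/food_table.csv'
  DELIMITER ',' CSV HEADER
")
```

This keeps the export step inside the Rmd without any change to the SQL itself.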
Depending on the package you're using, `DBI::dbGetQuery` may work with your queries, which you can force knitr to call instead by setting the `max.print` chunk option:
```{sql, connection = db, max.print = 0}
```
Here is an alternative workflow if you want to run and display queries that don't return data:
`MySQLCode.sql`:
```sql
-- ---- create_food_table
DROP TABLE IF EXISTS food_table;
WITH subquery AS (
  SELECT
    menu.recipes,
    menu.menu_price AS price
  FROM ingredients
  INNER JOIN menu ON ingredients.id = menu.recipeid
  UNION
  SELECT
    substitute AS recipes,
    portion * unitp AS price
  FROM menu
)
SELECT
  recipes.id,
  ingredients.id
INTO food_table
FROM menu

-- ---- export_food_table
COPY food_table TO '/Users/Name/Desktop/Project/food_table.csv' DELIMITER ',' CSV HEADER
```
`RNotebook.Rmd`:
Load the SQL chunks from an external file invisibly.
```{r read_sql, include=FALSE, eval=TRUE, cache=FALSE}
knitr::read_chunk("MySQLCode.sql")
```
Create a DB connection using the appropriate library.
```{r create_con}
library(...)
con <- dbConnect(...)
```
For each SQL chunk, display (but do not evaluate) the chunk with the `sql` engine, then run (but do not show) it with the `r` engine:
```{sql create_food_table, eval=FALSE}
```
```{r execute_create_food_table, echo=FALSE}
# knit_code$get() returns the chunk as a character vector of lines,
# so collapse it into a single statement string first
dbExecute(con, paste(knitr:::knit_code$get("create_food_table"), collapse = "\n"))
```
```{sql export_food_table, eval=FALSE}
```
```{r execute_export_food_table, echo=FALSE}
dbExecute(con, paste(knitr:::knit_code$get("export_food_table"), collapse = "\n"))
```
Naming the code chunks, breaking them up, and using `knitr::read_chunk` means they do not have to be rewritten for each chunk in which they're used.
It's up to you whether to split the code into chunks or just `dbExecute` the entire file at once. You may not need to rely on an external file at all.
Make sure you install the latest version of knitr (>= 1.18), as support for SQL code chunks was added quite recently.
Upvotes: 3