jeffrey

Reputation: 3354

How can I scale my R Shiny app for bigger data inputs?

I am making an R Shiny app that takes in user-uploaded CSV files and uses ggplot2 to graph them.

My app works well for small CSV inputs (I'm talking up to 20 rows/columns). I'm trying to make it useful for visualising files in the 2 MB+ range.

As it stands, however, my graphs are useless for analysing bigger data. I will post some of my code and link to the relevant CSV files so you can reproduce the problem.

Here is an example dataset: http://seanlahman.com/baseball-archive/statistics/ (pick anything from the Version 5.9.1 comma-delimited version).

Try graphing 'YearID' for X and 'playerID' for Y in Appearances.csv and you will see what I mean.

ui.R

library(shiny)

dataset <- list('Upload a file'=c(1))  # placeholder choices shown until a file is uploaded

shinyUI(pageWithSidebar(

  headerPanel(''),

  sidebarPanel(
     wellPanel(
         radioButtons('format', 'Format', c('CSV', 'TSV', 'XLSX')),
         uiOutput("radio"),
         fileInput('file', 'Data file')           
      ),

      wellPanel(
          selectInput('xLine', 'X', names(dataset)),
          selectInput('yLine', 'Y', names(dataset),  multiple=T)
      )
  ),
  mainPanel( 
      tabsetPanel(
          tabPanel("Line Graph", plotOutput('plotLine', height="auto"), value="line"),
          id="tsp"   # id of the tabset
      )
   )
))

server.R

library(reshape2)
library(googleVis)
library(ggplot2)
library(plyr)
library(scales)
library(xlsx)       # for read.xlsx2(); depends on xlsxjars and rJava
library(xlsxjars)
library(rJava)


options(shiny.maxRequestSize=-1)  # lift Shiny's default 5 MB upload size limit


shinyServer(function(input, output, session) {

  # Read the uploaded file according to the selected format
  data <- reactive({
    if (is.null(input$file))
      return(NULL)
    else if (identical(input$format, 'CSV'))
      return(read.csv(input$file$datapath))
    else if (identical(input$format, 'XLSX'))
      return(read.xlsx2(input$file$datapath, input$sheet))
    else
      return(read.delim(input$file$datapath))
  })

  # Show a sheet-index selector only when the Excel format is chosen
  output$radio <- renderUI({
    if (input$format == 'XLSX') {
      numericInput(inputId = 'sheet',
                   label = "Pick Excel Sheet Index", value = 1)
    }
  })

  # Update the X/Y choices whenever a new file is uploaded
  observe({
    df <- data()
    str(names(df))  # debug output: column names of the uploaded file
    if (!is.null(df)) {
      updateSelectInput(session, 'xLine', choices = names(df))
      updateSelectInput(session, 'yLine', choices = names(df))
    }
  })

  output$plotLine <- renderPlot(height=650, units="px", {

    tempX <- input$xLine
    tempY <- input$yLine

    if (is.null(data()))
      return(NULL)
    if (is.null(tempY))
      return(NULL)

    # Reshape to long format so each selected Y column becomes its own line
    widedata <- subset(data(), select = c(tempX, tempY))
    melted <- melt(widedata, id = tempX)
    p <- ggplot(melted, aes_string(x=names(melted)[1], y="value", group="variable", color="variable")) +
      geom_line() + geom_point()
    p <- p + theme(axis.text.x = element_text(angle=45, hjust=1, vjust=1))  # rotate x labels
    p <- p + labs(title=paste(tempX, "VS", tempY))

    print(p)
  })
})

Upvotes: 3

Views: 1673

Answers (1)

Paul Hiemstra

Reputation: 60924

When a plot is very crowded with data, there are some things you can do:

  • Aggregate your data, e.g. take the mean per year.
  • Subset your data: limit it to the variables/time span that interest you, or subsample it, randomly taking, say, 1% of the rows (see the sketch after this list).
  • Rethink your graph. Try to come up with an alternative visualisation that covers your hypothesis but does not clutter the graph. With complicated datasets (although 8 MB for the baseball dataset is by no means large), smart visualisation is the way to go.
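
To illustrate the first two points, here is a minimal sketch in base R and ggplot2. It assumes the Appearances.csv file mentioned in the question, with columns named yearID and G_all (games played); adjust the file name and column names if your version differs:

# Minimal sketch: aggregate or subsample before plotting.
# Assumes a data frame with columns 'yearID' and 'G_all', as in the
# Lahman Appearances.csv from the question; adjust names as needed.
library(ggplot2)

appearances <- read.csv("Appearances.csv")

# 1. Aggregate: one summary value per year instead of one point per player
per_year <- aggregate(G_all ~ yearID, data = appearances, FUN = mean)
ggplot(per_year, aes(x = yearID, y = G_all)) +
  geom_line() +
  labs(title = "Mean games per player, by year")

# 2. Subsample: plot a random 1% of the rows to reduce overplotting
set.seed(1)
idx <- sample(nrow(appearances), size = ceiling(0.01 * nrow(appearances)))
ggplot(appearances[idx, ], aes(x = yearID, y = G_all)) +
  geom_point(alpha = 0.4) +
  labs(title = "Random 1% sample of appearances")

Both approaches cut the number of plotted points by orders of magnitude, which is usually what turns a cluttered plot into a readable one.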

Upvotes: 2
