Testing Shiny Apps with shinytest2

· 6 min read · Updated March 11, 2026 · intermediate
r shiny testing testthat automation

Manual testing of Shiny apps is tedious. You click through the same paths, verify the same outputs, and hope nothing broke when you added that new feature. This process doesn’t scale. As your app grows, you need a way to catch regressions automatically.

The shinytest2 package solves this problem. It provides a streamlined toolkit for testing Shiny applications and integrates with the testthat framework. Instead of clicking through your app manually, you write tests that run automatically.

Why Test Your Shiny Apps?

Every time you modify a Shiny app, you risk breaking existing functionality. A simple change to a reactive expression might cause outputs to fail silently. Without tests, you only discover these bugs when users report them.

Automated tests give you confidence to refactor, add features, and upgrade dependencies. When a test fails, you know exactly what broke. When all tests pass, you can deploy with confidence.

The shinytest2 package uses chromote to render your app in a headless Chrome browser. This lets you interact with your app programmatically: click buttons, set inputs, and verify outputs.
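To see what shinytest2 builds on, here is a minimal sketch of driving chromote directly, following the pattern from the chromote README (the URL is a placeholder):

```r
library(chromote)

# Launch a headless Chrome session via the Chrome DevTools Protocol
b <- ChromoteSession$new()

# Navigate to a page (placeholder URL) and wait for it to finish loading
b$Page$navigate("https://example.com")
b$Page$loadEventFired()

# Evaluate JavaScript in the page and read the result back into R
title <- b$Runtime$evaluate("document.title")$result$value

b$close()
```

shinytest2 wraps this machinery so you work with Shiny inputs and outputs instead of raw DevTools calls.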

Installing shinytest2

Install the package from CRAN:

install.packages("shinytest2")

You also need Chrome or Chromium installed on your system. The chromote package uses it to run the browser. On most systems, installing Chrome is sufficient.

If you want the development version, install from GitHub:

devtools::install_github("rstudio/shinytest2")

Creating Your First Test

The easiest way to start is by recording your interactions. Create a simple Shiny app first:

library(shiny)

ui <- fluidPage(
  textInput("name", "Enter your name"),
  actionButton("greet", "Greet"),
  textOutput("greeting")
)

server <- function(input, output, session) {
  # Compute the greeting only when the button is clicked
  greeting <- eventReactive(input$greet, {
    paste0("Hello, ", input$name, "!")
  })

  output$greeting <- renderText(greeting())
}

shinyApp(ui, server)

Save this as app.R. Now create a test file:

library(shinytest2)

# Drive the app programmatically and check the result.
# Named check_greeting() to avoid shadowing shinytest2::test_app().
check_greeting <- function() {
  app <- AppDriver$new(app_dir = ".")
  
  # Simulate user interactions
  app$set_inputs(name = "World")
  app$click("greet")
  
  # Get the output and verify it
  output <- app$get_value(output = "greeting")
  testthat::expect_equal(output, "Hello, World!")
  
  app$stop()
}

Run the test:

check_greeting()

This works, but it’s not integrated with the testthat workflow. Let’s fix that.

Writing testthat Tests

The real power of shinytest2 comes from integrating with testthat. Create a test file in the tests/testthat/ directory:

# tests/testthat/test-app.R
library(shinytest2)
library(testthat)

test_that("greeting works correctly", {
  app <- AppDriver$new()
  
  # Set input and trigger action
  app$set_inputs(name = "Alice")
  app$click("greet")
  
  # Verify output
  expect_equal(app$get_value(output = "greeting"), "Hello, Alice!")
  
  app$stop()
})

test_that("empty name still produces a greeting", {
  app <- AppDriver$new()
  
  app$set_inputs(name = "")
  app$click("greet")
  
  expect_equal(app$get_value(output = "greeting"), "Hello, !")
  
  app$stop()
})

Run the tests from the app's root directory:

shinytest2::test_app()

The AppDriver$new() method launches your app automatically. It looks for app.R in the current directory or accepts a path to a specific app.
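AppDriver$new() also accepts arguments that are useful in practice. A sketch (the app path and name are placeholders):

```r
library(shinytest2)

app <- AppDriver$new(
  app_dir = "path/to/app",   # explicit app location
  name = "greeting-app",     # label used in snapshot file names
  seed = 123,                # fix the RNG for reproducible runs
  load_timeout = 15000       # give slow apps more time to start (ms)
)
```

Setting a seed matters whenever your app uses random numbers; without it, snapshot comparisons can differ from run to run.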

Recording Tests Interactively

For complex apps, recording interactions is easier than writing them by hand. Use the record_test() function:

library(shinytest2)

# This opens your app in a browser
record_test()

Interact with your app as a user would. The recorder captures every click and input change. When you finish, save the recording: shinytest2 writes the generated test code into tests/testthat/, where you can review and extend it.

This is useful for capturing the current behavior of your app. You can then modify the generated code to add assertions.
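The generated code typically looks something like the sketch below (exact contents depend on your interactions). Note app$expect_values(), which snapshots all current input and output values and compares them against the saved snapshot on later runs:

```r
test_that("{shinytest2} recording: greeting-app", {
  app <- AppDriver$new(variant = platform_variant(), name = "greeting-app")
  app$set_inputs(name = "World")
  app$click("greet")
  # Snapshot input/output values; later runs are compared to this snapshot
  app$expect_values()
})
```

Snapshot tests are quick to create but coarse: they fail on any value change, so pair them with targeted expect_equal() assertions for the behavior you care about most.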

Testing UI Elements

Beyond simple inputs and outputs, you often need to verify UI state. The AppDriver provides methods for this:

test_that("greet button exists and is enabled", {
  app <- AppDriver$new()
  
  # Check the button exists in the DOM
  expect_true(app$get_js("document.getElementById('greet') !== null"))
  
  # Check the button is enabled (no disabled attribute);
  # get_html() takes a CSS selector
  button_html <- app$get_html("#greet")
  expect_false(grepl("disabled", button_html, fixed = TRUE))
  
  app$stop()
})

You can also check for specific UI elements or output content:

test_that("output container exists", {
  app <- AppDriver$new()
  
  # Verify the output div exists
  expect_true(app$get_js("document.getElementById('greeting') !== null"))
  
  # Get the full HTML of an output by CSS selector
  html <- app$get_html("#greeting")
  expect_type(html, "character")
  
  app$stop()
})

Handling Dynamic UI

Shiny apps often create UI elements dynamically. You need to wait for elements to appear before interacting:

test_that("dynamic input appears after button click", {
  app <- AppDriver$new()
  
  # Click a button that creates dynamic UI
  app$click("show-options")
  
  # Wait for the dynamic element to appear
  app$wait_for_idle(timeout = 5000)
  
  # Now interact with it
  app$set_inputs(`dynamic-select` = "Option A")
  
  app$stop()
})

The wait_for_idle() method waits until the app has been continuously idle (no reactive recalculation or network activity) for a short window, 500 ms by default. This is essential for testing apps with delayed UI updates.
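When you need to wait for a specific element rather than overall idleness, wait_for_js() polls a JavaScript condition until it is truthy (the element id here is the one from the example above):

```r
# Block until the dynamically created select input is in the DOM
app$wait_for_js(
  "document.getElementById('dynamic-select') !== null",
  timeout = 5000
)
```

This is more precise than wait_for_idle() when the app keeps doing background work that never quite settles.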

Best Practices for Stable Tests

Tests that break easily are worse than no tests. Here are patterns that make your tests maintainable.

Use data-testid Instead of Raw IDs

Input IDs change when you refactor or redesign your app. Tests that use raw IDs break when selectors change. The solution is to add test-specific attributes to your UI elements:

library(shiny)
library(htmltools)

my_text_input <- function(inputId, label, testid = NULL) {
  script <- if (!is.null(testid)) {
    # Attach a stable data-testid attribute once the document is ready
    tags$script(HTML(sprintf(
      "$(document).ready(function() { $('#%s').attr('data-testid', '%s'); });",
      inputId, testid
    )))
  }
  
  tagList(textInput(inputId, label), script)
}

Then in your tests:

test_that("testid-based selection works", {
  app <- AppDriver$new()
  
  # Look up the input's id via its data-testid attribute
  testid <- app$get_js("$('[data-testid=name-input]').attr('id')")
  
  # set_inputs() takes named arguments, so build the call dynamically
  do.call(app$set_inputs, setNames(list("Test"), testid))
  
  app$stop()
})

This keeps tests aligned with business logic, not code structure.

Wrap Common Actions in Functions

Instead of repeating interaction sequences, create helper functions:

greet_user <- function(app, name) {
  app$set_inputs(name = name)
  app$click("greet")
}

test_that("greeting displays correctly", {
  app <- AppDriver$new()
  
  greet_user(app, "Bob")
  expect_equal(app$get_value(output = "greeting"), "Hello, Bob!")
  
  app$stop()
})

This makes tests readable and easier to maintain.
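You can also centralize startup and cleanup. Here is a sketch of a hypothetical fixture (placed in tests/testthat/helper-app.R, which testthat loads automatically) that guarantees the app is stopped even when a test fails, assuming the withr package is available:

```r
# tests/testthat/helper-app.R (hypothetical helper file)
start_app <- function(env = parent.frame()) {
  app <- AppDriver$new()
  # Stop the app when the calling test exits, even on error
  withr::defer(app$stop(), envir = env)
  app
}

test_that("greeting works with fixture", {
  app <- start_app()
  app$set_inputs(name = "Carol")
  app$click("greet")
  expect_equal(app$get_value(output = "greeting"), "Hello, Carol!")
})
```

With this pattern the app$stop() calls disappear from individual tests, and a failing expectation no longer leaks a running browser session.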

Running Tests in CI

Automate your tests in continuous integration. Add a step that runs the testthat test suite:

# .github/workflows/test.yml
name: Test

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: r-lib/actions/setup-r@v2
      - uses: r-lib/actions/setup-renv@v2
      - run: shinytest2::test_app()
        shell: Rscript {0}

The CI pipeline runs your tests on every push. You catch regressions before they reach production.

Common Issues and Solutions

Chrome Not Found

If you get an error about Chrome not being found, point chromote at the browser explicitly with the CHROMOTE_CHROME environment variable:

Sys.setenv(CHROMOTE_CHROME = "/path/to/chromium")

On Linux, you might need to install Chromium:

sudo apt-get install chromium   # on some Ubuntu versions the package is chromium-browser

Tests Timeout

If tests timeout waiting for UI updates, increase the timeout:

app <- AppDriver$new(timeout = 30000)  # 30 seconds

Or use explicit waits:

app$wait_for_idle(timeout = 10000)

Flaky Tests

If tests sometimes pass and sometimes fail, look for timing issues. Add explicit waits after setting inputs:

app$set_inputs(name = "Test")
app$wait_for_value(output = "greeting")
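By default, wait_for_value() polls until the value is something other than the entries in its ignore list (NULL and the empty string). To wait for a change, pass the previous value explicitly:

```r
old <- app$get_value(output = "greeting")

app$set_inputs(name = "Test")
app$click("greet")

# Poll until the greeting differs from its previous value
app$wait_for_value(output = "greeting", ignore = list(old))
```

This avoids the race where the assertion reads the output before Shiny has recalculated it.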

Automated testing transforms Shiny development. You stop manually clicking through your app to verify everything works. Instead, you run a command and trust that your tests catch any regressions. The initial investment pays off quickly as your app grows and evolves.

Start small: write tests for your most critical features. Expand coverage as you add functionality. Your future self will thank you when a test catches a bug before deployment.

See Also