The crew Package

```r
library(crew)

controller <- crew_controller_local(
  name = "my_cluster",
  workers = 4,
  tasks_max = 100  # Auto-restart workers after 100 tasks
)

# Start the workers
controller$start()
```
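Once the controller is running, you interact with it through a push/pop loop. A minimal sketch of that round trip, using the controller defined above (the task name and the `x = 16` payload are illustrative):

```r
# Push a task: the command runs in a background worker,
# with its inputs passed explicitly via `data`
controller$push(
  name = "example",
  command = sqrt(x),
  data = list(x = 16)
)

# Block until the task finishes, then collect it
controller$wait()
task <- controller$pop()
task$result[[1]]  # the return value of sqrt(x)

# Shut down the workers when done
controller$terminate()
```

`pop()` returns a one-row tibble of task metadata, with the actual return value in the `result` list-column.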

Furthermore, crew requires that your worker sessions be fully self-contained. Any library, function, or data object must be loaded or passed explicitly; there is no "magic" global environment inheritance. crew is the industrial-grade conveyor belt that the R ecosystem has been missing. It doesn't try to be the flashiest parallel package; instead, it focuses on being the most reliable.
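In practice, that self-containment means every dependency travels with the task. A sketch, assuming a data frame `my_data` already exists in your session (the object and the computation are illustrative):

```r
library(crew)

controller <- crew_controller_local(workers = 2)
controller$start()

# Workers start as blank R sessions: declare every dependency explicitly
controller$push(
  command = dplyr::n_distinct(df$id),
  data = list(df = my_data),  # data objects shipped to the worker
  packages = "dplyr"          # libraries loaded on the worker
)
```

Nothing leaks in from your global environment, which is exactly what makes the behavior reproducible across local and remote workers.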

But crew (which stands for oordinated R esource E xecution W orker) isn't just another entry in the parallel-processing catalog. Created by William Landau, the author of the targets package, crew is a fundamental rethink of how R should talk to background jobs. the crew pkg

For HPC users: replace crew_controller_local() with crew_controller_slurm() (from the crew.cluster extension package) and define your job submission template. The API remains identical.

Because workers auto-restart after a memory threshold or crash, a file that causes a segmentation fault only kills its worker. The other seven keep humming along, and a new worker spins up to retry the bad file. crew is not for every use case. If you are doing interactive, exploratory work where you need to inspect every object in the global environment immediately, stick with lapply or furrr.
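That crash isolation also applies to ordinary R errors, which the controller captures instead of raising in your session. A small sketch (the task name and error message are illustrative):

```r
library(crew)

controller <- crew_controller_local(workers = 2)
controller$start()

# A failing task affects only its own worker; the controller
# records the failure rather than taking down your session
controller$push(name = "bad_file", command = stop("corrupt input"))
controller$wait()

task <- controller$pop()
task$error  # the error message, available for logging or a retry
```

Your main session stays alive either way, which is the whole point for long-running batch jobs.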

But the real magic happens when you pair crew with targets. In a _targets.R file, changing the controller is a one-line edit:

```r
tar_option_set(
  controller = crew_controller_local(workers = 10)
)
```

Suddenly, your pipeline is running across a fleet of auto-healing workers without changing a single analysis step. crew is not a parallel engine itself. It is a controller specification that leverages two incredibly fast lower-level packages: mirai (for asynchronous task execution) and nanonext (for low-level networking).
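Put together, a complete _targets.R wired up with crew might look like this sketch. The target names, the "data" directory, and process_file() are all hypothetical placeholders:

```r
# _targets.R — minimal sketch of a crew-backed pipeline
library(targets)

tar_option_set(
  controller = crew::crew_controller_local(workers = 10)
)

list(
  # Enumerate input files (hypothetical "data" directory)
  tar_target(files, list.files("data", full.names = TRUE)),
  # Dynamic branching: one task per file, distributed across workers
  tar_target(results, process_file(files), pattern = map(files))
)
```

Each dynamic branch becomes a crew task, so targets handles the dependency graph while crew handles scheduling and worker lifecycle.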

It changes the game for production-level R code.

The Problem crew Solves (That You Didn't Know You Had)

Traditional parallel backends in R share a common flaw: they are often too "chatty" or too fragile. foreach with doParallel works, but it relies on forked or copied worker processes, which can fail on Windows or choke on large objects. future is elegant, but its nested parallelism and persistent-worker logic can be tricky to debug.