A Data-Driven Era: HPC Resource Center Powers a New Generation of Scientific Computing


The HPC Resource Center team

With a growing community of scientists pushing the boundaries of data-intensive research, Rockefeller University’s High-Performance Computing (HPC) Resource Center is undertaking its most ambitious infrastructure upgrade since its inception in 2016. Under the leadership of Jason Banfelder, the center’s director, the transformation addresses a key challenge in modern science: how to empower researchers of all computational backgrounds to harness large-scale computing and artificial intelligence effectively.

“We’re seeing researchers from across the spectrum engaging with HPC in new ways, thanks to tools that lower the barrier to entry,” says Banfelder. “These upgrades are about more than infrastructure — they’re about inclusivity and innovation.”

More Than a Hardware Refresh

While the center is replacing aging equipment, the upgrade is driven by more than just lifecycle maintenance. The real motivator? A dramatic shift in the scale and scope of computational demands across disciplines. With next-gen sequencing, proteomics, light-sheet microscopy, and high-throughput behavioral rigs generating massive datasets, researchers now rely on AI and advanced analytics not just to process data once, but also to mine and re-analyze multiple datasets to extract new layers of meaning.

Adding to that is a cultural shift: the rise of AI coding assistants and co-pilots is enabling more scientists, including those without formal computer science and software engineering training, to integrate the cluster into their workflows.

Key Upgrades: Fair Access and Fearless Experimentation

To support this evolution, the center is introducing a number of forward-looking features:

  • Equitable access with FairShare scheduling
    A shift to a new batch scheduling algorithm now ensures that lighter or occasional users aren’t locked out by heavy cluster workloads. This opens the door for new users and levels the playing field.
  • Improved job sandboxing
    Memory, CPU, and GPU isolation between jobs allows researchers to test early-stage code without fear of crashing shared resources. It’s an environment built for bold ideas (a sketch of what a resource-scoped job submission can look like appears after this list).
  • Interactive IDE support
    Long-standing support for tools like Jupyter and RStudio is now joined by support for VS Code and Cursor, making the cluster more accessible to users accustomed to AI-first development environments.
  • 12-petabyte storage expansion
    A massive increase in storage capacity supports the demand for “AI-ready” data that needs to remain accessible and cost-efficient for model training and inference.
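
As a concrete illustration of the sandboxing model, here is a minimal, hypothetical sketch of a resource-scoped job submission on a Slurm cluster. The limits and the test_model.py script name are placeholders rather than the center's recommended settings; the point is that each job declares the CPU, memory, GPU, and wall-clock time it needs, and the scheduler confines it to those limits.

    # Minimal sketch: submit an early-stage test job with explicit resource limits
    # on a Slurm cluster. All values and the script name are illustrative placeholders.
    import subprocess

    def submit_test_job(script: str) -> None:
        """Submit `script` as a batch job confined to the requested resources."""
        subprocess.run(
            ["sbatch",
             "--job-name=sandbox-test",
             "--cpus-per-task=4",   # the job is limited to four CPU cores
             "--mem=16G",           # exceeding this stops the job, not the node
             "--gres=gpu:1",        # one GPU, not shared with other jobs
             "--time=01:00:00",     # hard wall-clock limit
             "--wrap", f"python {script}"],
            check=True,
        )

    if __name__ == "__main__":
        submit_test_job("test_model.py")

Because the limits are enforced per job, a runaway test consumes only its own allocation instead of taking down a shared node.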

Ship of Theseus: Continuous Uptime, Continuous Improvement

In contrast to the disruptive, wholesale hardware overhauls that many academic centers perform every five years or so, the HPC team is taking a “Ship of Theseus” approach—gradually replacing and upgrading components while keeping the cluster fully operational. This strategy not only minimizes downtime but also creates opportunities for fine-tuning user workflows along the way.

One such opportunity emerged during a collaboration with the Lyu Lab, known for its high-throughput docking protocols. As the lab transitioned to the new Slurm system, the HPC team identified that millions of micro-jobs were being submitted daily—placing strain on the scheduler. By consolidating these into fewer, longer-running jobs, the lab achieved a 10–20% boost in throughput with just a few lines of scripting.

“The shift not only improved our performance but simplified our workflow,” says Jiankun Lyu, Assistant Professor and Head of Laboratory. “We were able to scale up without compromising stability—just by rethinking how our jobs interact with the system.”
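
A minimal sketch of the batching pattern described above, in hypothetical Python: rather than one scheduler job per docking task, tasks are grouped into large chunks and one batch job is submitted per chunk. The chunk size, the ligands directory, and the run_chunk.sh helper are illustrative assumptions, not the Lyu Lab's actual code.

    # Sketch of micro-job consolidation: write one manifest per chunk of tasks and
    # submit a single batch job per chunk, so the scheduler handles hundreds of
    # longer-running jobs instead of millions of second-long ones.
    import subprocess
    from pathlib import Path

    TASKS_PER_JOB = 5000                       # sized so each job runs for a while, not seconds
    tasks = sorted(Path("ligands").glob("*.mol2"))

    def submit_chunk(chunk_id: int, chunk: list) -> None:
        """Write a manifest for this chunk and submit one batch job to process it."""
        manifest = Path(f"chunk_{chunk_id:05d}.txt")
        manifest.write_text("\n".join(str(p) for p in chunk) + "\n")
        subprocess.run(
            ["sbatch", f"--job-name=dock_{chunk_id}",
             "--cpus-per-task=1", "--mem=2G",
             "run_chunk.sh", str(manifest)],   # run_chunk.sh loops over the manifest entries
            check=True,
        )

    for i in range(0, len(tasks), TASKS_PER_JOB):
        submit_chunk(i // TASKS_PER_JOB, tasks[i : i + TASKS_PER_JOB])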

To learn more about the High-Performance Computing Resource Center or to discuss a project, visit the public HPC webpage or, for RU community members, the internal HPC site.
Reach out to it_hpc@rockefeller.edu to get started.


To read more news about the Scientific Resource Centers, visit the Resource Center News page.