SSCC News October 2023

Welcome to the SSCC

We want to extend a warm welcome to all the new members of the Social Science Computing Cooperative, whether you’re a new faculty member, staff member, or graduate student who will use our resources for research, or an undergraduate taking a class that uses SSCC resources.

What is the SSCC?

The SSCC provides servers, software, training, and consulting to support researchers (and future researchers) who do statistical analysis. If you didn’t attend an orientation session, feel free to email the SSCC Help Desk, tell us about yourself, and ask what we can do for you.

What is SSCC News?

SSCC News is one of our main ways of getting information to our members. It comes out about once every two months. Please look over the email when you get it and read the articles that affect you.

If you’d rather not receive SSCC News, email the SSCC Help Desk and they can take care of that for you. If you’re no longer interested in SSCC News because you no longer use your SSCC account, they can close the account for you.

October SSCC Training

In October, SSCC’s training moves beyond the basics. R topics include data visualization and categorical variables, while Stata topics include loops and macros, text data, and a special short session on new features in Stata 18.

Still need to learn the basics of R, Stata, or Python? The schedule hasn’t been set yet, but we’ll teach them again in January, between semesters. The sessions will be online, so it’s okay if you’re not back in Madison yet.

Slurm Update

Slurm usage has increased significantly in the last two months, to the point that sometimes (not very often) jobs have to wait in the queue for a server to open up before they can start. This is normal and expected: Slurm is designed to maximize the amount of work it gets done, not to finish jobs as quickly as possible.

We’ve created a new web app to help you identify what resources are currently available. We have also tweaked Slurm’s priority calculations to put more weight on recent usage. When someone has a large number of jobs running, this gives other users higher priority, so their jobs will be next in line when a server becomes available.

Some tips for getting your job to run sooner and making the whole cluster more efficient:

  • Only reserve as much memory as you know you need. Memory, not cores, is almost always the bottleneck that restricts the number of jobs Slurm can run. In particular, don’t reserve more than 250GB of memory unless you absolutely have to, since jobs that reserve more can only run on one of the high-memory servers.
  • Use more cores! It may not have sunk in for many people that most of the Slurm servers have 128 cores. It’s especially painful to see jobs reserving 1000GB of memory and just 16 cores, leaving 112 cores idle. If you’re using a lot of memory and your job can benefit from more than one core, use lots of cores (probably all of them) so you make that memory available to others as quickly as possible. Our Guide to Research Computing at the SSCC contains information on which programs can use multiple cores.
  • Use the short partition. Those servers are reserved for jobs that take less than 6 hours.
  • If Slurm is full and you need some computing power right now, use Linstat. When Slurm gets busy, jobs have to wait; when Linstat gets busy, it slows down instead. You can always get some CPU time on Linstat, though your job may run slowly on the shared resources.
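To make the tips above concrete, here is a sketch of a Slurm submission script that follows them. The partition name, resource amounts, and the `analysis.R` job are illustrative assumptions, not SSCC-specific values; check the Guide to Research Computing at the SSCC for the actual partition names and server specifications before using it.

```shell
#!/bin/bash
# Sketch of a Slurm batch script -- all values below are illustrative.
#SBATCH --job-name=analysis
#SBATCH --partition=short     # assumed name for the short partition (jobs under 6 hours)
#SBATCH --time=04:00:00       # stay under the 6-hour limit for this partition
#SBATCH --mem=64G             # reserve only the memory you know you need (well under 250GB)
#SBATCH --cpus-per-task=128   # most Slurm servers have 128 cores; use them if your job can

# Hypothetical job: replace with your own program.
# Many programs must be told explicitly how many cores to use.
Rscript analysis.R
```

You would submit this with `sbatch`, e.g. `sbatch myjob.sh`. The key design point is that memory and time requests are deliberately small and the core request is large, so Slurm can pack jobs efficiently and free up memory quickly.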

On the other hand, most of the time there is a significant amount of computing power in the Slurm cluster just waiting for you to put it to work. We’re still encouraging all our researchers to think bigger.

Summer Tech Update

Some highlights from the recently completed summer tech update include:

  • Coming soon: a “desktop” graphical user interface for Linstat powered by Open OnDemand.