What's New?

Have you not used SCINet in a while, or just want to make sure you haven’t missed any changes? See below for a history of SCINet developments such as added software features, user experience improvements, login procedure updates, policy changes, and new working groups.

For general announcements (e.g., training opportunities and system status updates), please see the SCINet Forum Announcements page. For past and upcoming events, please see the SCINet Events Calendar.


  • Increased memory on Ceres nodes

    32 TB of memory from the recently decommissioned ceres19 nodes was installed across 76 operational ceres20 nodes to increase overall available memory. The minimum amount of memory per node on Ceres has now increased from 384 GB to 768 GB. See the Ceres technical overview for current detailed hardware specifications.

    • Wednesday, August 6, 2025
  • New GPU node limits on Atlas

    To help promote resource sharing, jobs submitted to each of the GPU partitions (“gpu-v100”, “gpu-a100”, “gpu-a100-mig7”, and “gpu-l40s”) are now limited to a maximum running time of 2 days, and each user may use at most 25% of all GPUs of a given type at once (e.g., there are 48 L40S GPUs, so one user can use up to 12 L40S GPUs at a time).

    • Tuesday, July 8, 2025
  • New Atlas storage system

    Atlas’s July 8 maintenance included replacing Atlas’s original data storage hardware with a completely new data storage system. This new system brings several important improvements, including: nearly 2.7 times more storage space (increased from ~2.2 PB to ~6 PB); all solid-state storage that provides read/write performance superior to the previous storage system, which is especially beneficial for data-intensive GPU workloads; and significant power and space savings compared to the previous storage system.

    • Tuesday, July 8, 2025
  • New GPUs available on Atlas

    Twelve new GPU nodes with a total of 48 GPUs are now available to all SCINet users on SCINet’s Atlas supercomputer. These nodes each have 4 NVIDIA L40S GPUs, 64 CPU cores, and 1.5 TB of memory. The L40S GPU features 48 GB of GPU memory and was designed for general-purpose workloads, including deep learning and AI applications. These new nodes are available in the “gpu-l40s” partition/queue on Atlas.

    • Wednesday, June 25, 2025
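    As a sketch, a job requesting GPUs in the new partition might look like the following. The account name is a placeholder, and the exact GRES syntax for L40S GPUs on Atlas is an assumption; check the Atlas documentation for the specifics:

    ```shell
    #!/bin/bash
    #SBATCH --partition=gpu-l40s     # new L40S partition/queue on Atlas
    #SBATCH --gres=gpu:2             # request 2 of a node's 4 L40S GPUs (syntax is an assumption)
    #SBATCH --account=myproject      # placeholder SCINet project name
    #SBATCH --time=08:00:00

    nvidia-smi                       # placeholder command: show the allocated GPUs
    ```
    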
  • Partition/queue changes on Ceres

    During Ceres’ June maintenance, legacy community partitions (“short”, “medium”, “long”, “long60”, etc.) were removed, and nearly all compute jobs on Ceres should now be submitted to the “ceres” partition. Note that compute jobs in the “ceres” partition are limited to 3 weeks of run time by default. If you require very long run times, you may submit jobs using the “long” QOS (“quality of service”) specification. This will allow jobs to run for up to 60 days, but they will be limited to using no more than 144 CPU cores. See the February 18 update below for more information about the “ceres” partition and its benefits.

    • Friday, June 20, 2025
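    For illustration, a job that needs more than the default 3 weeks of run time on the “ceres” partition might request the “long” QOS as sketched below. The account name, time request, and command are placeholders; recall that jobs using this QOS are limited to 144 CPU cores:

    ```shell
    #!/bin/bash
    #SBATCH --partition=ceres        # the consolidated community partition
    #SBATCH --qos=long               # allows jobs to run for up to 60 days
    #SBATCH --time=30-00:00:00       # placeholder: 30 days
    #SBATCH --ntasks=16              # must stay within the 144-core cap for the "long" QOS
    #SBATCH --account=myproject      # placeholder SCINet project name

    ./long_running_simulation        # placeholder command
    ```
    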
  • Job submission changes on Ceres

    When submitting jobs, you must now explicitly specify a Slurm account (Slurm accounts have the same names as SCINet projects). For example, to use the account/project “myproject” when submitting an sbatch script, add the following line to the script file: #SBATCH --account=myproject

    • Friday, June 20, 2025
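    A minimal complete sbatch script with the now-required account line might look like the following sketch (the project name “myproject”, resource requests, and the job’s command are placeholders):

    ```shell
    #!/bin/bash
    #SBATCH --account=myproject      # required: Slurm account, same name as the SCINet project (placeholder)
    #SBATCH --job-name=example
    #SBATCH --time=01:00:00
    #SBATCH --ntasks=1

    echo "Running under account myproject"   # placeholder workload
    ```

    The account can also be supplied on the command line instead of in the script, e.g. `sbatch --account=myproject script.sh`.
    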
  • New image annotation software on Atlas

    You can now easily create computer vision model training datasets directly on Atlas without needing to transfer image files to and from local laptops or workstations for annotation. Labelme, an image annotation application, is now available as an interactive app on Open OnDemand on Atlas. Labelme supports a variety of annotation types, including bounding boxes and custom polygons, and it also supports automated and semi-automated annotation using pretrained, general computer vision AI models.

    • Monday, March 31, 2025
  • CLC Workbench and Server upgraded

    CLC Workbench and Server, a bioinformatics application available on the Ceres cluster’s Open OnDemand interface, has been upgraded to version 25.

    • Tuesday, February 18, 2025
  • New "ceres" partition

    During the February 2025 maintenance, a new partition, named “ceres”, was added to the cluster. This new partition includes all community nodes, and its addition is the first step towards simplifying the community partitions on Ceres (“short”, “medium”, “long”, “mem”, “mem768”, and “debug”). In the future, some or all of the legacy community partitions will be removed.

    • Tuesday, February 18, 2025
  • New Animal Behavior AI Working Group

    This SCINet working group aims to explore the potential benefits of Artificial Intelligence (AI) in animal behavior research. To learn more about the working group, please see the working group description here.

    • Tuesday, February 4, 2025
  • AlphaFold 3 Available on SCINet Clusters

    The AlphaFold 3 software, databases, and model weights are now available on both Ceres and Atlas! To learn more about how to use AlphaFold 3 on SCINet clusters, please see instructions here.

    • Tuesday, January 28, 2025
  • Streamlined web-based login

    SCINet users will no longer see the SCINet login page that asks for a username and password when they access SCINet systems. Instead, SCINet logins immediately redirect to USDA’s eAuthentication (eAuth) site, at which point they continue to authenticate as usual using either their LincPass/AltLinc card or Login.gov credentials.

    • Tuesday, January 28, 2025
  • LincPass login for café machines

    The SCINet café machines offer high-speed data transfer capabilities to and from SCINet’s supercomputers and data storage infrastructure. These machines have been updated to support streamlined logins with a LincPass.

    • Wednesday, January 15, 2025