Why did you develop a new, distinct infrastructure?

We had several goals in mind when deciding to move to a new infrastructure:

  1. Modernization
  2. Ease of use
  3. Maintainability

Several software packages on the 'old' infrastructure have no clear upgrade path and/or have reached end-of-life (EOL) with their maintainers, including Python (2.7 and 3.7) and GCC (4.8.5). Other software, such as Singularity, has patchy availability due to the heterogeneous nature of both the hardware and software on the 'old' infrastructure.

We are upgrading all compute nodes to a modern operating system (Rocky Linux 9) that supports current Python versions, current compilers, and Singularity, and that receives up-to-date security fixes.

We have also implemented the Slurm queuing system alongside the old Son of Grid Engine (SGE).
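As a rough sketch of what moving a job from SGE to Slurm looks like (the job name, resource values, and script names below are hypothetical, not site defaults), an SGE-style job translates to a Slurm batch script along these lines:

```bash
#!/bin/bash
#SBATCH --job-name=example        # like SGE's -N
#SBATCH --output=example_%j.out   # like SGE's -o; %j expands to the job ID
#SBATCH --time=01:00:00           # wall-clock limit (HH:MM:SS)
#SBATCH --cpus-per-task=4         # like SGE's -pe smp 4

# Submit with: sbatch example.sh   (SGE equivalent: qsub example.sh)
python3 my_script.py
```

Monitoring is similarly parallel: `squeue` plays roughly the role of SGE's `qstat`, and `scancel` that of `qdel`.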

When will my machines be moved over?

Everyone will eventually be moved to the new infrastructure. Individual labs will lose access to their machines for a short period (at most ~24 hours) while the upgrade and migration take place.

The goal is to have as small an interruption as possible so that your research will have minimal delays.

Folks will be given the opportunity to test the new infrastructure before their machines are taken down, to confirm that their software pipelines will continue to work.

Where will my files be?

Your files and folders on the NFS volumes, excluding nfs1, will remain in their current locations. The networked file storage is available on both the old and new infrastructure for the time being.

Your home directories will undergo a one-time sync, after which the copies on the old and new infrastructure will no longer be kept in sync. Any important work should therefore be done on the networked drives, so that up-to-date output remains available after the migration is complete.