Software on the Wildwood HPC Cluster
One of the goals of the new HPC infrastructure is to make accessing software easier, regardless of CPU architecture, queuing system, or operating system.
We recognize that new OS releases will eventually arrive and the system software versions on Rocky 9 will become out-of-date. To address this, we have designed a new software install system that reduces dependencies on system software and libraries.
We have also found that users are interested in using containerized systems, such as Docker and Singularity. Resources will be forthcoming on using containerized systems on the new infrastructure.
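Until that guidance is published, the following is a rough sketch of what a containerized run often looks like, wrapped in Python for illustration. It assumes the singularity binary is on your PATH; the container image URI is purely an example, not a recommendation for this cluster.

```python
import subprocess

# Run "python --version" inside a container image pulled from Docker Hub.
# Assumes the singularity binary is on PATH; the image URI is only an
# example, so substitute whatever image your workflow needs.
subprocess.run(
    ["singularity", "exec", "docker://python:3.12-slim", "python", "--version"],
    check=True,  # raise CalledProcessError if the container run fails
)
```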
Software locations
Legacy software will be located in /local/cluster/bin and /local/cluster/software-name for the time being. As updated versions of these pieces of software are requested for installation, the updated versions will be installed in other locations, including locations specific to cqls, ceoas, and coe users.
Install locations may vary depending on which group you belong to.
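Because install locations can differ by group, a script that needs a particular tool can search the candidate locations rather than hard-coding one. The sketch below is a hypothetical illustration: /local/cluster/bin is the legacy location described above, but the group-specific entry and the tool name are assumptions, so check with your group for the real paths.

```python
import shutil
from pathlib import Path

# Candidate install prefixes, checked in order. The first entry is a
# hypothetical group-specific location; the second is the legacy
# location described above.
CANDIDATE_BIN_DIRS = [
    Path("/local/cqls/software/bin"),  # hypothetical group-specific path
    Path("/local/cluster/bin"),        # legacy location
]

def find_tool(name: str) -> str | None:
    """Return the first matching executable path, or None if not found."""
    for bin_dir in CANDIDATE_BIN_DIRS:
        candidate = bin_dir / name
        if candidate.is_file():
            return str(candidate)
    # Fall back to whatever is already on PATH.
    return shutil.which(name)

print(find_tool("samtools"))  # "samtools" is just an example tool name
```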
Alternative CPU architectures and GPUs
Users have increasingly requested software that is GPU-enabled and/or built for other architectures, including ARM and IBM Power. Certain software stacks have been available on these architectures at the CQLS in the past. We are actively developing systems to make software accessible across all architecture types, and on GPUs when possible.
To this end, users may find themselves using software in /local/cqls/software/x86_64/... on one machine and /local/cqls/software/aarch64/... on another. The goal of these capabilities is to make research faster and easier for all users, even those unfamiliar with these alternative CPU architectures, as the sketch below illustrates.
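A script can select the matching software tree at run time by inspecting the machine's architecture instead of hard-coding x86_64 or aarch64. This is a minimal sketch, assuming the layout shown above; the "mytool/bin" subdirectory is a hypothetical example.

```python
import platform
from pathlib import Path

# platform.machine() reports the CPU architecture string, e.g. "x86_64"
# on Intel/AMD nodes or "aarch64" on ARM nodes.
arch = platform.machine()

# Root of the architecture-specific software tree described above.
software_root = Path("/local/cqls/software") / arch

# "mytool/bin" is a hypothetical subdirectory; real layouts will vary.
tool_bin = software_root / "mytool" / "bin"
print(f"Running on {arch}; expecting tools under {tool_bin}")
```

The same script can then run unchanged on both x86_64 and ARM nodes, picking up the correct build of each tool automatically.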