
To enable processing of investigators’ genomic data, we maintain numerous high-performance Linux-based compute resources, both locally at HCI and at CHPC.

HCI Resources

HCI maintains a number of Linux-based servers within our data center for use in processing bioinformatics data. For information on how to gain access to these compute resources, please contact us. These servers include the following:

  • Uinta, an interactive server with 48 cores and 440 GB memory
  • Alta, an interactive server with 32 cores and 512 GB memory
  • Moab, an interactive server with 24 cores and 64 GB memory
  • A dedicated 20-core server for Illumina sequence processing and de-multiplexing
  • A high performance 28 TB network file server for temporary, working storage
  • A 128 TB archival network storage system supporting the GNomEx LIMS database

CHPC Resources

In addition to local resources, we own a number of compute nodes on clusters at the University of Utah Center for High Performance Computing (CHPC). Users of our local computing servers can submit jobs to our nodes on these clusters by using our pysano service, or by arranging guest accounts to access the nodes directly through CHPC; please contact us for more information.

The nodes we own include the following:

  • Fourteen nodes (32-48 cores, 64-128 GB RAM each) on the Kingspeak cluster
  • Thirty-eight nodes (56 cores, 128-256 GB RAM each) on the Redwood cluster, CHPC's HIPAA/PHI-compliant protected environment. These are the nodes used for working with human-derived data.

Application and Data Resources

All of the interactive HCI Linux servers mount the same network resources, and some directories are also mirrored on our CHPC nodes for use with our pysano service. Common paths to remember include the following:

  • /scratch is for fast, temporary file storage with no quotas. Make your own directory there and use it when launching pysano jobs.
  • /tomato/dev/job is the historical location for launching pysano jobs; as with /scratch, make your own directory there first.
  • /home/BioApps is a repository of bioinformatic applications not shared with CHPC nodes.
  • /tomato/dev/app is a repository of bioinformatic applications shared with CHPC.
  • /tomato/dev/data is a repository of genomic reference data shared with CHPC.
  • Some application packages are installed under Modules. Use module avail and module load to list available modules and load a specific module, respectively.
  • An RStudio server is available for running R scripts and software. Email CBI for instructions on how to access it.
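As a minimal sketch of the workflow above, the following shell snippet creates a personal working directory and shows the Modules commands for finding and loading software. The base path and module name are placeholders: on the HCI servers you would use /scratch (or /tomato/dev/job) directly, and whatever module the "module avail" listing shows.

```shell
#!/bin/sh
# Create a personal working directory for pysano jobs.
# SCRATCH_BASE is a stand-in so this sketch runs anywhere;
# on the HCI servers, use /scratch itself.
SCRATCH_BASE="${SCRATCH_BASE:-/tmp/scratch-demo}"
mkdir -p "$SCRATCH_BASE/${USER:-demo}/myproject"
cd "$SCRATCH_BASE/${USER:-demo}/myproject" || exit 1

# On the HCI servers, list available software packages and load one
# ("samtools" here is only an example module name):
#   module avail
#   module load samtools
echo "working directory ready: $PWD"
```

Jobs launched with pysano should reference a directory created this way, so that temporary output lands on the fast, quota-free storage rather than in home directories.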