Computing resources for faculty & staff

The Dean’s office recognizes the growing need for a range of computing power and support across all departments in the College. Outlined below are some resources and guidelines for computing options available to College of Engineering (CoE) faculty. The University of Washington eScience Institute is available to provide guidance on various research computing topics.

Hyak

Hyak is a shared, high-performance computer cluster dedicated to research computing at UW. Faculty purchase nodes on Hyak as an alternative to deploying and operating their own high-performance systems. Hyak is located in the UW Tower data center.

Requirements

  • Must purchase 1 or more nodes
  • Users purchase equipment
  • 3-year deployments

Features of Hyak

  • Low latency and high bandwidth communication among nodes
  • Shared high-speed scratch space
  • Archive data storage
  • High bandwidth external connections

Recommended Uses

  • Long run times: Users with workloads benefiting from long, uninterrupted run times on 100 to 1,000 CPU cores.
  • Fast calculations: Users performing iterative analysis of large data sets or repeated "what if" calculations at large scales.
  • Prep for petascale: Use as a development platform for users preparing applications for very large scale runs on petascale systems operated by the national supercomputer centers.
  • Keep the pipeline flowing: High-speed transfer of data from instruments is possible, assuming the remote end of the connection is fast enough. Hyak has 10 Gb/s connections to campus and the research Internet.

Hyak is not recommended for users who keep their CPUs busy less than half the time; these cases might be better served by the commercial cloud. Users who need only a few nodes should instead consider a shared-memory system of the kind described under Department Computer Rooms below, which can provide the same throughput as four Hyak blades.
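
As a rough way to apply this guideline, the Python sketch below estimates core utilization from an annual usage estimate. The node count and core-hour figures are hypothetical placeholders, not measurements of any real workload.

    # Back-of-the-envelope utilization check: if purchased cores would be
    # busy less than about half the time, the commercial cloud may be a
    # better fit than buying Hyak nodes.
    HOURS_PER_YEAR = 24 * 365  # 8,760 hours

    cores_purchased = 64            # hypothetical: e.g., four 16-core nodes
    estimated_core_hours = 200_000  # hypothetical annual usage estimate

    available_core_hours = cores_purchased * HOURS_PER_YEAR  # 560,640
    utilization = estimated_core_hours / available_core_hours
    print(f"Estimated utilization: {utilization:.0%}")  # ~36%
    if utilization < 0.5:
        print("Cores busy less than half the time; consider the cloud.")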

For questions concerning new Hyak accounts, contact help@engr.washington.edu.

National Supercomputer Centers

National supercomputer centers operate very large parallel computers for the benefit of the U.S. academic and government research communities. CPU time is free, as is the transfer of data to and from these systems. While these systems are uniquely suited to large scale parallel applications, many users benefit from access to Teragrid or NERSC systems for jobs requiring only a few hundred cores, even when these jobs are loosely coupled.

Users requiring sustained access to ~1,000 CPU cores or fewer might be better served by Hyak or EC2 Cluster options.

Requirements

  • Faculty must submit a proposal to get access. The eScience Institute provides assistance in applying for time on NSF and DOE supercomputer systems. Contact info@escience.washington.edu for more information.
  • For jobs running on Teragrid or NERSC systems:
    • Capable of running in batch mode on Unix-based systems with a minimal software stack
    • Uninterrupted run times of less than 24 hours
    • At least 30,000 core-hours of computing per year (roughly equal to one year on a four-core workstation)
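
As a quick check on the 30,000 core-hour figure, this short sketch works out how many core-hours a four-core workstation delivers in a year of continuous use.

    # One year of continuous use of a four-core workstation, in core-hours.
    cores = 4
    hours_per_year = 24 * 365            # 8,760 hours
    core_hours = cores * hours_per_year
    print(core_hours)                    # 35,040 -- comparable to the
                                         # 30,000 core-hour threshold above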

Recommended Uses

  • Workloads with more than 1,000 tightly coupled CPU cores
  • Workloads requiring a very large shared-memory platform

Commercial Cloud (Amazon EC2 or Microsoft Azure)

Commercial cloud systems offer pay-as-you-go flexibility and complete control of the software stack from the OS upwards.

Recommended Uses

Commercial cloud systems are most appropriate for workloads with intermittent, or "spiky," demand, particularly those that can benefit from having up to a few hundred CPU cores available for short periods, and for workloads with the following requirements (see the sketch after this list):

  • Complete control of the software stack from the OS upwards
  • MS Windows as the host OS
  • Little data transfer to or from the cloud
  • Relatively little long-term data storage
  • Web-based applications
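
For illustration only, the Python sketch below launches and later terminates a single on-demand EC2 instance using the boto3 library, which is the pattern a spiky workload typically follows. The region, AMI ID, instance type, and key pair name are hypothetical placeholders; Microsoft Azure offers analogous APIs.

    # Minimal sketch: start an instance for a burst of work, then shut it
    # down so billing stops. All identifiers below are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")  # assumed region

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder machine image
        InstanceType="c5.4xlarge",        # placeholder instance type
        KeyName="my-keypair",             # placeholder SSH key pair
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("Launched", instance_id)

    # ... run the workload and copy results out, then terminate the
    # instance so charges stop accruing.
    ec2.terminate_instances(InstanceIds=[instance_id])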

For information on developing a proposal for a research program relying on commercial cloud platforms, contact info@escience.washington.edu.

See also

  • Amazon Elastic Compute Cloud
  • Elasticfox Getting Started Guide. A tutorial for getting started with EC2: follow the instructions and you can have an application running on EC2 in less than an hour. Setting up a production environment requires a little more effort, but the basics are simple.

Department Computer Rooms

For users who need only a few nodes, relatively large shared-memory systems that provide the same throughput as four Hyak blades, such as these IBM systems, can be purchased. A system of this size is a good fit for deployment in a departmental computer closet.

Most of the existing department computer rooms, however, are already limited by space, power, or cooling. Departments considering adding machines should surplus old, inefficient computers (five years or older) and replace them with newer, more efficient ones.

The CoE will not support any additional services to computer rooms populated with a significant number of old, inefficient computers.

Also, with the implementation of activity-based budgeting, the actual facilities costs of running computer rooms (electricity, cooling, etc.) will become the responsibility of the unit operating those facilities. Minimizing these costs is therefore important, both for operating budgets and for the College's carbon footprint.