ORCD Resources List

The lists below cover resources that researchers are finding useful. ORCD is directly involved in enabling and supporting some of these; others are independent services. Members of the ORCD team are familiar with most of the resources listed, and we are happy to help members of the MIT community understand which ones might be most useful for their work.


MIT Campus Wide Resources

Campus-wide resources are enabled and supported through ORCD. These resources have base capacity that is available for research and teaching use by anybody in the MIT community. They also provide PI-group priority resources that are available for general opportunistic use when they are not in use for priority work.

Engaging
https://engaging-web.mit.edu
The Engaging cluster is open to everyone on campus. It has around 80,000 x86 CPU cores and 300 GPU cards ranging from the K80 generation to recent Voltas. Hardware access is through the Slurm resource scheduler, which supports batch and interactive workloads and allows dedicated reservations. The cluster has a large shared file system for working datasets. Additional compute and storage resources can be purchased by PIs. A wide range of standard software is available, and the Docker-compatible Singularity container tool is supported. User-level tools such as Anaconda for Python, R libraries, and Julia packages are all supported. A range of PI-group-maintained custom software stacks is also available through the widely adopted environment modules toolkit. A standard, open-source, web-based portal supporting Jupyter notebooks, RStudio, Mathematica, and X graphics is available at https://engaging-ood.mit.edu. Further information and support are available from engaging-support@techsquare.com.
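As a rough sketch of how batch access through Slurm works on clusters like this, the Python snippet below writes a minimal job script and submits it with sbatch. The partition name is a placeholder invented for illustration; consult the Engaging documentation for the partitions actually available to you.

    # submit_demo.py -- a minimal sketch of Slurm batch submission.
    # Assumes this runs on a login node where the sbatch command is on PATH.
    # The partition name is a placeholder, not a real Engaging partition.
    import subprocess

    job_script = """#!/bin/bash
    #SBATCH --job-name=demo
    #SBATCH --partition=PARTITION_NAME
    #SBATCH --ntasks=1
    #SBATCH --time=00:10:00

    echo "Hello from $(hostname)"
    """

    with open("demo.sbatch", "w") as f:
        f.write(job_script)

    # On success sbatch prints the new job ID, e.g. "Submitted batch job 12345".
    result = subprocess.run(["sbatch", "demo.sbatch"],
                            capture_output=True, text=True, check=True)
    print(result.stdout.strip())
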
SuperCloud
https://supercloud.mit.edu
The SuperCloud system is a collaboration with MIT Lincoln Laboratory on a shared facility that is optimized for streamlining open research collaborations with Lincoln Laboratory. The facility is open to everyone on campus. The latest SuperCloud system has more than 16,000 x86 CPU cores and more than 850 NVIDIA Volta GPUs in total. Hardware access is through the Slurm resource scheduler, which supports batch and interactive workloads and allows dedicated reservations. A wide range of standard software is available, and the Docker-compatible Singularity container tool is supported. A custom, web-based portal supporting Jupyter notebooks is available at https://txe1-portal.mit.edu/. Further information and support are available at supercloud@mit.edu.
Satori
https://mit-satori.github.io
Satori is an IBM Power 9 large-memory node system. It is open to everyone on campus and has optimized software stacks for machine learning and for image-stack post-processing for the MIT.nano Cryo-EM facilities. The system has 256 NVIDIA Volta GPU cards attached in groups of four to 1 TB memory nodes, and a total of 2,560 Power 9 CPU cores. Hardware access is through the Slurm resource scheduler, which supports batch and interactive workloads and allows dedicated reservations. A wide range of standard software is available, and the Docker-compatible Singularity container tool is supported. A standard web-based portal with Jupyter notebook support is available at https://satori-portal.mit.edu. Additional compute and storage resources can be purchased by PIs and integrated into the system. Further information and support are available at satori-support@techsquare.com.
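As an illustration of the Docker-compatible container support mentioned above, the sketch below pulls a public Docker image with Singularity and runs a command inside it. The image name is an arbitrary example; on a shared cluster this would normally be done inside a Slurm batch or interactive session.

    # run_container.py -- sketch of running a Docker image under Singularity.
    # Assumes the singularity command is available on the node; the image
    # (python:3.11-slim from Docker Hub) is only an example.
    import subprocess

    # Pull the Docker image and convert it to a local Singularity image file.
    subprocess.run(["singularity", "pull", "python.sif",
                    "docker://python:3.11-slim"], check=True)

    # Run a command inside the container.
    subprocess.run(["singularity", "exec", "python.sif",
                    "python3", "--version"], check=True)
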
subMIT
http://submit04.mit.edu/submit-users-guide/index.html
subMIT.mit.edu is a new batch submission service operated by the MIT Laboratory for Nuclear Science. It is intended to be available to anyone on campus and provides some access to on- and off-campus resources through the Open Science Grid. Further information is available from pra@mit.edu.
AMD GPU cluster
http://amdmit.mit.edu
AMD and MIT are partnering on a supercomputing resource for machine learning applications, provided through the AMD HPC Fund for COVID-19 research. The system will provide 1 PFlop/s of floating-point capability through 160 AMD MI50 GPU cards in fall 2020. The AMD GPU cluster is integrated into the Satori cluster for access.
c3ddb
https://c3ddb01.mit.edu/request_account
The C3DDB cluster is open to everyone on campus for research in the general areas of life sciences, health sciences, computational biology, biochemistry, and biomechanics. It has around 8,000 x86 CPU cores and 100 K80-generation GPU cards. Hardware access is through the Slurm resource scheduler, which supports batch and interactive workloads and allows dedicated reservations. A wide range of standard software is available, and the Docker-compatible Singularity container tool is supported. Further information and support are available from c3ddb-admin@techsquare.com. The C3DDB system is scheduled to be retired in June 2023.

DLC shared hardware

A number of DLC (department, lab, and center) groups operate dedicated computational resources for their own communities. Over time the number of DLC-focused resources has been falling as participation grows in campus-wide shared resources with PI priority capabilities. However, there are still some sizable DLC resources available to some groups.

TIG Shared Computing
https://tig.csail.mit.edu/shared-computing
The CSAIL infrastructure group (TIG) operates an OpenStack cluster and a Slurm cluster for general use by members of CSAIL. The OpenStack environment supports full virtual machines. The Slurm cluster supports Singularity as a container engine for Docker containers. Additional compute and storage resources can be purchased by PIs to support group-specific needs. Further information and support are available at help@csail.mit.edu.
Openmind
https://openmind.mit.edu
Openmind is a shared cluster for Brain and Cognitive Sciences research at MIT. It has several hundred GPU cards and just under 2,000 CPU cores. The cluster resources are managed by the Slurm scheduler, which provides support for batch, interactive, and reservation-based use. The Singularity container system is available for executing custom Docker images. Further information and support are available from neuro-admin@techsquare.com.
LNS Computing
http://rc.lns.mit.edu
The Laboratory for Nuclear Science (LNS) operates computing resources that are available to researchers within LNS. Further information and support are available from pra@mit.edu.
Kavli Computing
https://space.mit.edu/research/high-performance-computing/
The MIT Kavli Institute operates a cluster for astrophysics research. The cluster uses the Slurm resource scheduler and is available for use by Kavli researchers.
Koch Bioinformatics
https://ki.mit.edu/sbc/bioinformatics
The Koch Institute operates a bioinformatics facility that specializes in the processing needs of computational biologists.

Commercial Cloud-Based Resources

There are numerous other cloud services available. The list below includes those that appear to be widely used at MIT.

The ORCD team is also experimenting with RONIN, a simplified front end to AWS designed to support university research and teaching needs. If you are interested in participating in testing RONIN, please feel free to reach out to us.

MIT Cost Object Cloud Accounts
https://cloud-accounts.mit.edu
Provides a central location for creating standard AWS, Google, or Microsoft commercial cloud provider accounts that are tied to cost objects and projects. This enables tracking of expenses for different projects.
Google Colab
https://colab.research.google.com
Google Colab provides a free base service that can be very useful for modest workloads, for example in classes.
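For example, a common first cell in a Colab notebook mounts the user's Google Drive so that data and results persist across sessions; the google.colab module is preinstalled in the Colab environment.

    # Works only inside a Colab notebook, where google.colab is preinstalled.
    # Mounting prompts for authorization in the browser.
    from google.colab import drive

    # After mounting, Drive contents appear under /content/drive.
    drive.mount("/content/drive")
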
My Binder
https://mybinder.org
Binder provides free virtual machines that can be flexibly configured by providing a GitHub repository with software setup instructions.
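For instance, a public GitHub repository containing notebooks plus a top-level requirements.txt along the lines of the sketch below is enough for Binder to build a matching environment; the package choices here are arbitrary examples. Pointing mybinder.org at the repository URL then launches a temporary Jupyter session with those packages installed.

    # requirements.txt -- read by Binder when it builds the repository's image.
    numpy
    matplotlib
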
Code Ocean
https://codeocean.com/
Code Ocean is another highly customizable cloud-based virtual machine system. Machines in Code Ocean are directly charged, and performance is more predictable than with free services.

Major cloud provider credit programs

Most major cloud providers offer useful compute and data credit programs for educational and research needs. Links to some of these programs are shown here.

AWS
https://aws.amazon.com/grants
Provides cloud credit grants for both research and education projects.
AWS
https://aws.amazon.com/opendata
Hosts open data for sharing with others. AWS has a process for applying to have datasets considered for hosting.
Azure
https://azure.microsoft.com/en-us/education
Provides Azure cloud credits for education.
Azure
https://azure.microsoft.com/en-us/services/open-datasets
Hosts standard datasets for general use including machine learning. Additional datasets for inclusion can be nominated.
Google
https://edu.google.com/programs/students, https://edu.google.com/programs/faculty, https://edu.google.com/programs/researchers
Offers free credits and technical resources for education and research.