The ORCD team regularly offers classes on subjects such as high-performance computing, parallel programming, and using the Engaging cluster.
-
Course Description
The MIT Office of Research Computing and Data's (ORCD) Engaging cluster is available to the MIT community for running computational workloads that don't run well on your own computer. This hands-on tutorial walks you through the basics of using Engaging for your research. We will cover:
- How to access Engaging
- Transferring files
- Using and installing software
- Running jobs, including batch and interactive jobs requesting a variety of resources
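To give a flavor of the batch-job topic above, a minimal Slurm submission script (Slurm is the scheduler used on Engaging) might look like the sketch below. The partition name, module name, and script name are placeholders for illustration, not actual Engaging values; the class covers the real options.

```shell
#!/bin/bash
# Minimal Slurm batch script sketch. Partition, module, and script
# names below are placeholders -- check the cluster documentation
# for the partitions and software available to you.
#SBATCH --job-name=hello
#SBATCH --partition=example_partition
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --time=00:10:00
#SBATCH --mem=4G

# Load software through the module system, then run the workload.
module load python
python my_script.py
```

A script like this is submitted with `sbatch script.sh`; an interactive session can be requested with `salloc` instead.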
Prerequisites/Requirements
- Attendees must have an Engaging account (instructions here).
- We ask that you read through our Getting Started Tutorial to familiarize yourself with the concepts beforehand.
- Attendees should also bring a laptop for the hands-on component.
Learning Objectives
After this tutorial, attendees will:
- Be able to log in and navigate the cluster
- Know how to use modules and have a plan for installing any additional software needed
- Be familiar with running jobs and requesting different types of resources
Schedule
This course runs regularly this spring; the same material is covered in each session. We will update this page as dates are added.
- Tuesday, March 24: 1-3PM
- Wednesday, April 15: 2-4PM
- Tuesday, April 28: 2-4PM
- Tuesday, May 12: 1-3PM
- Thursday, May 28: 1-3PM
Location
On-campus at ORCD's offices in NE36
How to Sign Up
Sign up for any session of the class by reserving a spot using this Calendly link. Only sessions offered within the next 30 days will be visible; if you want a later date, please wait until it is less than 30 days away to sign up.
-
Course Description
Parallel computing has been an important research topic in science and technology for decades, and the rapid growth of deep learning in recent years has broadened its reach further. This class introduces the concepts of parallel computing. Attendees will learn not only the basics of high-performance computing (HPC) clusters and GPU accelerators but also programming skills with OpenMP, MPI, CUDA, PyTorch, and DeepSpeed. Examples and hands-on exercises will be provided in several programming languages, including C, Fortran, and Python. These parallel programming skills help researchers accelerate their programs and prepare students for careers in information technology.
Prerequisites/Requirements
- Attendees should have some programming experience in one of these languages: C, Fortran, Python, or Julia
- Attendees should also bring a laptop.
Learning Objectives
- Concepts of parallel computing and knowledge of accelerating computer programs on an HPC cluster.
- Parallel programming skills for CPU (OpenMP, MPI).
- Parallel programming skills for GPU (CUDA, PyTorch, DeepSpeed).
Schedule
The topics on the two days are independent. Attendees can choose which session(s) they would like to attend.
- Day One (Tuesday, April 7):
- Parallel Programming with OpenMP: 10 AM - 12 PM
- Distributed Computing with MPI: 2 - 4 PM
- Day Two (Wednesday, April 8):
- GPU Programming with CUDA: 10 AM - 12 PM
- Distributed Deep Learning: 2 - 4 PM
Location
In Building 13, room 13-2137 (also known as the von Hippel room).
How to Sign Up
To request the signup link, email the instructor, Shaohao Chen.