Courant has a number of systems that can be used for CPU-intensive jobs and for programming assignments in courses and research. These machines accept logins only from within the Courant network, so if you are connecting from outside CIMS, you must first log in to one of the remote access servers and then ssh from there.
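As one possible shortcut (a sketch, not official CIMS documentation): OpenSSH 7.3 and later can perform the two-hop login automatically with the ProxyJump option. The host names below come from this page; `<netid>` is a placeholder for your CIMS username, and `cims-access` is an alias name chosen here for illustration:

```
# Hypothetical ~/.ssh/config entries; replace <netid> with your CIMS username.
Host cims-access
    HostName access.cims.nyu.edu
    User <netid>

# Hop through the access server to reach an internal compute node,
# e.g. crunchy1 from the tables below.
Host crunchy1
    HostName crunchy1.cims.nyu.edu
    User <netid>
    ProxyJump cims-access
```

With this in place, `ssh crunchy1` performs both hops in a single command.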

Please do not use access.cims.nyu.edu for CPU-intensive jobs or course work.

 

These systems should be used for class programming assignments. For MySQL-related usage, see the instructions for creating a MySQL database.

Hostname CPU Memory SSH Fingerprint
courses2.cims.nyu.edu Two AMD Opteron (1.1 GHz) (8 cores) 15 GB Verify Host Identity
courses3.cims.nyu.edu Two AMD Opteron (1.1 GHz) (8 cores) 15 GB Verify Host Identity

 

The following Linux systems should be used only for CPU and memory intensive processes:

Hostname CPU Memory SSH Fingerprint
crunchy1.cims.nyu.edu Four AMD Opteron 6272 (2.1 GHz) (64 cores) 256 GB Verify Host Identity
crunchy3.cims.nyu.edu Four AMD Opteron 6136 (2.4 GHz) (32 cores) 128 GB Verify Host Identity
crunchy5.cims.nyu.edu Four AMD Opteron 6272 (2.1 GHz) (64 cores) 256 GB Verify Host Identity
crunchy6.cims.nyu.edu Four AMD Opteron 6272 (2.1 GHz) (64 cores) 256 GB Verify Host Identity

 

The following Linux systems should be used for generic number crunching:

Hostname CPU Memory SSH Fingerprint
crackle1.cims.nyu.edu Two Intel Xeon E5630 (2.53 GHz) (16 cores) 64 GB Verify Host Identity
crackle2.cims.nyu.edu Two Intel Xeon E5630 (2.53 GHz) (16 cores) 64 GB Verify Host Identity
crackle3.cims.nyu.edu Two Intel Xeon E5630 (2.53 GHz) (16 cores) 64 GB Verify Host Identity
crackle4.cims.nyu.edu Two Intel Xeon E5630 (2.53 GHz) (16 cores) 64 GB Verify Host Identity
crackle5.cims.nyu.edu Two Intel Xeon E5630 (2.53 GHz) (16 cores) 16 GB Verify Host Identity

 

The following Linux systems should be used for generic number crunching:

Hostname CPU Memory SSH Fingerprint
snappy1.cims.nyu.edu Two Intel Xeon E5-2680 (2.80 GHz) (20 cores) 128 GB Verify Host Identity
snappy2.cims.nyu.edu Two Intel Xeon E5-2680 (2.80 GHz) (20 cores) 128 GB Verify Host Identity
snappy3.cims.nyu.edu Two Intel Xeon E5-2680 (2.80 GHz) (20 cores) 128 GB Verify Host Identity
snappy4.cims.nyu.edu Two Intel Xeon E5-2680 (2.80 GHz) (20 cores) 128 GB Verify Host Identity
snappy5.cims.nyu.edu Two Intel Xeon E5-2680 (2.80 GHz) (20 cores) 128 GB Verify Host Identity

 

The following Solaris systems should be used only for CPU and memory intensive processes:

Hostname CPU Memory SSH Fingerprint
crunchy12.cims.nyu.edu Four UltraSPARC III (900 MHz) 16 GB Verify Host Identity

 

Web Service Computing

The following machines are suitable for both internal and external web-based development and deployment:

Hostname CPU Memory SSH Fingerprint
linserv1.cims.nyu.edu Two AMD Opteron (1.1 GHz) (8 cores) 16 GB Verify Host Identity
linserv2.cims.nyu.edu Four AMD Opteron (2.1 GHz) (4 cores) 8 GB Verify Host Identity

 

We have general instructions for setting up an Apache web server, Apache Tomcat, MySQL, and other services.

 

GPU Computing - NVIDIA/CUDA
(See current status)

To use these systems, please see the instructions. The host nodes are:

Hostname CPU GPU System Memory SSH Fingerprint
cuda1.cims.nyu.edu Two Intel Xeon E5-2680 (2.50 GHz) (24 cores) Two GeForce GTX TITAN Black (6 GB memory each) 256 GB Verify Host Identity
cuda2.cims.nyu.edu Two Intel Xeon E5-2660 (2.60 GHz) (40 cores) One GeForce GTX TITAN Z (12 GB memory), Two GeForce GTX Titan X (12 GB memory each) 256 GB Verify Host Identity
cuda3.cims.nyu.edu Two Intel Xeon E5630 (2.53 GHz) (16 cores) Two Tesla T10s (4 GB memory each) -- shared with cuda4 16 GB Verify Host Identity
cuda4.cims.nyu.edu Two Intel Xeon E5630 (2.53 GHz) (16 cores) Two Tesla T10s (4 GB memory each) -- shared with cuda3 16 GB Verify Host Identity
cuda5.cims.nyu.edu Two Intel Xeon E5-2650 (2.60 GHz) (16 cores) Two GeForce GTX TITAN Z (12 GB memory each) 64 GB Verify Host Identity
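As a quick sanity check after logging in to one of the cuda* hosts, the CUDA runtime API can enumerate the GPUs listed above. This is a minimal sketch, not CIMS-provided code; it assumes the CUDA toolkit is on your path, and the file and binary names are made up:

```cuda
// devices.cu -- list the GPUs visible on this host.
// Compile with: nvcc devices.cu -o devices
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaGetDeviceCount failed: %s\n",
                     cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // totalGlobalMem is reported in bytes; convert to GB for display.
        std::printf("Device %d: %s, %.1f GB\n", i, prop.name,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```

On cuda1, for example, you would expect this to report the two GTX TITAN Black cards from the table above.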



Other Machines

There are also systems associated with research groups. They are restricted to users who have been approved by a faculty member associated with the relevant research group.

One such group of systems belongs to the Center for Atmosphere Ocean Science (CAOS).