CARC systems information
Parallel computing, storage, and visualization
The UNM Center for Advanced Research Computing (CARC) supports supercomputing systems, high-throughput clusters, and large-scale disk storage for use by university researchers. CARC currently operates over 3,000 CPU cores and roughly 92,000 NVIDIA Tesla K40M CUDA cores across a variety of distributed- and shared-memory architectures, along with online NAS working storage and nearline storage.
CARC Supercomputer and Cluster Resources
System hardware: Intel Xeon E5-2698 v4 (Broadwell); Dell PowerEdge R620 with Intel Xeon E5-2670, 2.6 GHz; Silicon Mechanics A422.v3 shared-memory system with AMD Opteron 6272, 2.1 GHz; Dell Optiplex GX620 (Intel Pentium D, 2.13 GHz) and Dell Optiplex 745 (Intel Core2, 2.8/3.0 GHz); Dell PowerEdge R730 with Intel Xeon E5-2640, 2.6 GHz.
Processor architectures: Intel Xeon Broadwell; Intel Pentium D and Intel Core 2; Intel Xeon E5-2640 and E7-4809 (Haswell).
Operating system: Linux.
Interconnects: ConnectX-3 FDR InfiniBand; Gigabit Ethernet (Beowulf cluster).
Nodes: 200 (including a 16-node Hadoop subsystem); 64 (32 + 32 floating-point co-processors).
Memory: 256 GB shared; 4 GB, 32 GB, and 96 GB configurations.
Local scratch space: 80-140 GB (1 TB on Hadoop nodes); filesystem disk only on some systems.
Peak FLOPS (theoretical): reported in TFLOPS per system.
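Theoretical peak-FLOPS figures like those reported for these systems are derived arithmetically from node count, socket count, cores per socket, clock speed, and FLOPs per cycle. A minimal sketch of that arithmetic, using illustrative values rather than CARC's published specifications:

```python
# Hedged sketch of how a theoretical peak-FLOPS figure is computed.
# All specific values below (node count, sockets, FLOPs/cycle) are
# illustrative assumptions, not CARC's published numbers.

def peak_tflops(nodes: int, sockets: int, cores_per_socket: int,
                clock_ghz: float, flops_per_cycle: int) -> float:
    """Theoretical peak in TFLOPS: nodes x sockets x cores x GHz x FLOPs/cycle / 1000."""
    return nodes * sockets * cores_per_socket * clock_ghz * flops_per_cycle / 1000.0

# Example: a hypothetical 200-node cluster of dual-socket Xeon E5-2670
# nodes (8 cores/socket, 2.6 GHz, 8 double-precision FLOPs/cycle with AVX).
print(peak_tflops(200, 2, 8, 2.6, 8))  # ~66.6 TFLOPS
```

Real systems sustain only a fraction of this figure; the theoretical peak is an upper bound assuming every core issues its maximum vector FLOPs every cycle.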
Xena Supercomputer Specs
Xena comprises four node types:
- 32 cores and 1 TB memory per node; Intel Xeon E7-4809; no GPU
- 32 cores and 3 TB memory per node; Intel Xeon E7-4809; no GPU
- 16 cores and 64 GB memory per node; Intel Xeon E5-2640; 2 x NVIDIA Tesla K40M per node
- 16 cores and 64 GB memory per node; Intel Xeon E5-2640; 1 x NVIDIA Tesla K40M per node
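The aggregate CUDA-core figure quoted in the overview follows directly from NVIDIA's published spec of 2,880 CUDA cores per Tesla K40M. A sketch of the arithmetic, where the dual-GPU and single-GPU node counts are illustrative assumptions rather than CARC's published inventory:

```python
# Each NVIDIA Tesla K40M provides 2,880 CUDA cores (NVIDIA's published spec).
CUDA_CORES_PER_K40M = 2880

def total_cuda_cores(num_cards: int) -> int:
    """Aggregate CUDA cores across a fleet of K40M cards."""
    return num_cards * CUDA_CORES_PER_K40M

# Illustrative mix of Xena-style GPU nodes: these node counts are
# assumptions chosen only to demonstrate the arithmetic.
dual_gpu_nodes, single_gpu_nodes = 15, 2
cards = dual_gpu_nodes * 2 + single_gpu_nodes * 1
print(total_cuda_cores(cards))  # 92160, i.e. the "92k" quoted in the overview
```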
Research Group-Dedicated Machines
CARC also hosts dedicated research computing resources for faculty representing several colleges and schools at UNM (Arts & Sciences, Engineering, Fine Arts, Architecture, and the School of Medicine).
Anodyne: Dell PowerEdge R815; AMD Opteron 6174 12-Core 2.2GHz; 1 Node, 4 sockets; 48 cores; 192GB RAM Shared Memory; CentOS 5.10 Operating System. Anodyne is used for applied computational methods of electromagnetic geophysics. (PI: Prof. Chester Weiss).
Apollo (UNM Cancer Center): Coupled 32- and 64-bit Intel Xeon IBM clusters with FAStT500 and DS400 Fibre Channel SAN storage components, supporting the UNM Cancer Center’s Shared Resource for Genomics and Bioinformatics. These systems host a genome data warehouse based on the parallel Oracle RAC product and utilize a Web Services-based ELT (Extract, Load, Transform) pipeline.
Bethe (Physics and Astronomy): Dual Co-Processor SuperServer; Intel Xeon, 6 cores, 64 GB RAM; Intel Xeon Phi 5110P, 1 TFLOPS double precision, 8 GB RAM; NVIDIA Titan GPU, 1.4 TFLOPS double precision, 6 GB RAM; 1.0 TB file storage; SUSE Linux Enterprise Server OS. GPU/Xeon Phi code development system for astrophysics and molecular biophysics (PIs: Prof. H Duan; Prof. SR Atlas).
Deepthought: Penguin Relion 2808GT + Relion 2800i; Intel Xeon E5-2650 V2 2.6 GHz; 4 Nova nodes + 4 CEPH nodes, 16 cores/node; 128 GB RAM/Nova node; 10G Ethernet interconnect; Scyld Cloud Management + OpenStack. Compute and storage server + RSC staging system for high-throughput cancer genome analysis (10G Science DMZ connection to UNM Cancer Center Next-Gen sequencer). (PIs: Prof. S Ness; Prof. C Willman)
Fluvial/Ubik (Earth and Planetary Sciences): A multi-cluster satellite data acquisition and analysis system, operated by CARC for the CREATE resident research group. The system consists of the Fluvial processing system and the Ubik file server. These systems feature a suite of commercial and open-source real-time satellite image processing software.
LWA Data Archive (Physics and Astronomy): Silicon Mechanics Storform iServ R518; Intel Xeon E5620 quad-core, 2.40 GHz (8 cores total); 12 MB cache, 5.86 GT/s QPI; 24 GB RAM; 50 TB RAID 6 storage; Ubuntu OS. Data storage/server for the NSF-supported Long Wavelength Array project (PI: Prof. G Taylor).
m3 (Physics and Astronomy): 8 core, 16 GB RAM 64-bit Intel Xeon system, with 12 TB local workspace disk. Serves as
SkyScan System (ARTS Lab and School of Architecture): SkyScan
Synergy (Translational Informatics/Internal Medicine): PSSC Labs PowerWulf Compute Engine CBeST v. 3.0 Beowulf; 12 nodes, 8 cores/node; 16GB RAM/node; 4.5 TB Accessible RAID Storage; CentOS Operating System. Compute server for cheminformatics and small-molecule drug discovery (PI: Prof. T Oprea).
Zeno (Mathematics and Statistics): Intel Xeon E5620 2.4 GHz; 4 Nodes, 8 cores/node; 32 GB RAM; 1.8 TB RAID Storage; Ubuntu OS. Compute server for computational geometry and biophysics research (PI: Prof. E Coutsias).
Network Access
Energy Sciences Network (ESnet)
The Energy Sciences Network is a high-performance, unclassified national network built to support scientific research. Funded by the U.S. Department of Energy’s Office of Science and managed by Lawrence Berkeley National Laboratory, ESnet provides services to more than 40 Department of Energy (DOE) research sites, including the entire National Laboratory system, its supercomputing facilities, and its major scientific instruments. ESnet also connects to 140 research and commercial networks, permitting DOE-funded scientists to collaborate productively with partners around the world. UNM partners with ESnet to provide services to New Mexico’s national laboratories: Los Alamos National Laboratory and Sandia National Laboratories.
Internet2
Internet2 is the foremost U.S. advanced networking consortium. Led by the research and education community since 1996, Internet2 promotes the missions of its members by providing both leading-edge network capabilities and unique partnership opportunities that together facilitate the development, deployment, and use of revolutionary Internet technologies.
The Internet2 Network is one component of Internet2’s comprehensive systems approach to developing and deploying advanced networking for the research and education community, encompassing network technologies, middleware, security, performance measurement, and community collaboration. UNM researchers have access to all Internet2-connected resources, such as the NSF XSEDE network of supercomputer centers.
Albuquerque GigaPoP
The Albuquerque GigaPoP is an aggregation point for networks, providing high-bandwidth network access.