CARC systems information
Parallel computing, storage, and visualization
The UNM Center for Advanced Research Computing (CARC) supports supercomputing systems, high-throughput clusters, and large-scale disk storage for use by university researchers. CARC currently has over 3,000 CPU cores and 92,000 NVIDIA Tesla K40M CUDA cores spanning a variety of distributed- and shared-memory architectures.

Online working (NAS) and nearline storage is provided by the Research Storage Consortium (RSC) HP x9000/7400 system, with ~1.5 PB of raw capacity configured as RAID 6 and an integrated tape library for data archiving. Additional tape backup is provided by an HP Data Protector-based system with 72 TB of LTO-5 tape storage rotated through a 24 TB robotic tape library. These compute and storage systems are housed in a state-of-the-art, raised-floor, climate- and access-controlled machine room.

CARC connects to Internet2 via UNM’s 10 Gbps backbone network, which links via a 10 Gbps connection to the central Rio Grande Valley GigaPoP, located in downtown Albuquerque and owned and operated by UNM. A summary of CARC compute resources appears in the table below; further details about the Galles Beowulf Cluster follow the Supercomputer and Cluster Resources table. An NSF-style Facilities document is available to PIs for use in proposal preparation.
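The usable fraction of raw RAID 6 capacity depends on the disk-group size, since each RAID 6 group gives up two drives' worth of space to parity. A back-of-the-envelope sketch (the 12-drive group size here is an illustrative assumption, not the RSC system's actual layout):

```python
# RAID 6 stores two parity blocks per stripe, so a group of n drives
# yields (n - 2)/n of its raw capacity as usable space.
# The default group size below is an assumption for illustration only.

def raid6_usable_pb(raw_pb, drives_per_group=12):
    """Usable capacity (PB) of raw RAID 6 storage built from equal-size groups."""
    return raw_pb * (drives_per_group - 2) / drives_per_group

print(raid6_usable_pb(1.5))  # ~1.25 PB usable from 1.5 PB raw
```

Smaller groups trade more capacity for parity; a 10-drive group, for instance, yields only 80% of raw capacity.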
CARC Supercomputer and Cluster Resources
- Systems: Intel Xeon E5-2698 v4; Dell PowerEdge R620 (Intel Xeon E5-2670, 2.6 GHz); Silicon Mechanics A422.v3 shared-memory multiprocessor (AMD Opteron 6272, 2.1 GHz); Dell Optiplex GX620 (Intel Pentium D, 2.13 GHz) and Optiplex 745 (Intel Core 2, 2.8/3.0 GHz); Dell PowerEdge R730 (Intel Xeon E5-2640, 2.6 GHz) and PowerEdge R930 (Intel Xeon E7-4809, 2.0 GHz)
- Operating system: Linux
- Interconnects: ConnectX-3 InfiniBand FDR; Gigabit Ethernet (Beowulf cluster)
- Nodes: 200 (including a 16-node Hadoop subsystem); 64 cores (32 + 32 FP co-processors) on the shared-memory system
- Memory: 256 GB shared; 4 GB, 32 GB, and 96 GB node configurations
- Local scratch space: 80–140 GB (1 TB on Hadoop nodes); filesystem disk only on some systems
- Processor architectures: Intel Xeon Broadwell; Intel Pentium D and Core 2; Intel Xeon E5-2640 and E7-4809 (Haswell)
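Theoretical peak FLOPS figures like those reported for these systems are the product of node count, cores per node, clock rate, and per-core FLOPs per cycle. A sketch of that arithmetic (the node count and the 16 double-precision FLOPs/cycle figure, typical of AVX2+FMA cores, are illustrative assumptions, not specs of a particular CARC system):

```python
# Theoretical peak = nodes x cores/node x GHz x FLOPs/cycle, in TFLOPS.
# FLOPs per cycle depends on the microarchitecture's vector width;
# 16 double-precision FLOPs/cycle is typical of AVX2 + FMA (Haswell-class) cores.

def peak_tflops(nodes, cores_per_node, clock_ghz, flops_per_cycle=16):
    """Theoretical peak performance in TFLOPS (1 TFLOPS = 1000 GFLOPS)."""
    return nodes * cores_per_node * clock_ghz * flops_per_cycle / 1000.0

# Hypothetical 10-node partition of 16-core, 2.6 GHz CPUs:
print(peak_tflops(10, 16, 2.6))  # ~6.66 TFLOPS
```

Measured performance (e.g., LINPACK) is always lower than this theoretical ceiling.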
Galles Beowulf Cluster Specs
| Cores per Node         | 2               | 2               | 2                           | 2              |
| Processor Architecture | Intel Pentium D | Intel Pentium D | Intel Core Duo              | Intel Core Duo |
| CPU GHz                | 3.00            | 2.80            | 2.13 / 2.40 / 2.80 / 3.00   | 2.80 / 3.00    |
Xena Supercomputer Specs
| Cores per Node         | 32                 | 32                 | 16                           | 16                           |
| Memory per Node        | 1 TB               | 3 TB               | 64 GB                        | 64 GB                        |
| Processor Architecture | Intel Xeon E7-4809 | Intel Xeon E7-4809 | Intel Xeon E5-2640           | Intel Xeon E5-2640           |
| GPU                    | N/A                | N/A                | 2 x NVIDIA Tesla K40M / node | 1 x NVIDIA Tesla K40M / node |
Research Group-Dedicated Machines
CARC also hosts dedicated research computing resources for faculty representing several colleges and schools at UNM (Arts & Sciences, Engineering, Fine Arts, Architecture, and the School of Medicine).
Anodyne: Dell PowerEdge R815; AMD Opteron 6174 12-core, 2.2 GHz; 1 node, 4 sockets; 48 cores; 192 GB shared RAM; CentOS 5.10 operating system. Anodyne is used for applied computational methods in electromagnetic geophysics (PI: Prof. Chester Weiss).
Apollo (UNM Cancer Center): Coupled IBM Intel Xeon 32- and 64-bit clusters with FAStT500 and DS400 Fibre Channel SAN storage components, supporting the UNM Cancer Center’s Shared Resource for Genomics and Bioinformatics. These systems host a genome data warehouse based on the parallel Oracle RAC product, using a Web Services-based ELT (Extract, Load, Transform) paradigm to import data into an XMLDB integration schema with multiple output data marts (PIs: Prof. C Willman; Prof. SR Atlas).
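The load-then-transform pattern such a warehouse uses can be sketched in miniature: raw XML is loaded into an integration document first, and flat data-mart rows are derived from it afterward. Everything below (element names, fields, and values) is a hypothetical illustration, not Apollo's actual schema:

```python
# Miniature ELT sketch: XML is loaded as-is, then transformed into flat rows.
# All element and field names here are hypothetical illustrations.
import xml.etree.ElementTree as ET

RAW_XML = """\
<samples>
  <sample id="S1"><gene>MYB</gene><expression>7.2</expression></sample>
  <sample id="S2"><gene>TP53</gene><expression>3.1</expression></sample>
</samples>
"""

def to_mart_rows(xml_text):
    """Transform an XML integration document into flat data-mart rows."""
    root = ET.fromstring(xml_text)
    return [
        {
            "sample_id": s.get("id"),
            "gene": s.findtext("gene"),
            "expression": float(s.findtext("expression")),
        }
        for s in root.findall("sample")
    ]

print(to_mart_rows(RAW_XML))
```

Deferring the transform step lets the integration schema keep the raw documents intact while different data marts apply their own flattening rules.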
Bethe (Physics and Astronomy): Dual Co-Processor SuperServer; Intel Xeon / 6 cores / 64 GB RAM; Intel Xeon Phi 5110P / 1 TFLOPS double precision / 8 GB RAM; NVIDIA Titan GPU / 1.4 TFLOPS double precision / 6 GB RAM; 1.0 TB file storage; SUSE Linux Enterprise Server OS. GPU/Xeon Phi code development system for astrophysics and molecular biophysics (PIs: Prof. H Duan; Prof. SR Atlas).
Deepthought: Penguin Relion 2808GT + Relion 2800i; Intel Xeon E5-2650 V2 2.6 GHz; 4 Nova nodes + 4 CEPH nodes, 16 cores/node; 128 GB RAM/Nova node; 10G Ethernet interconnect; Scyld Cloud Management + OpenStack. Compute and storage server + RSC staging system for high-throughput cancer genome analysis (10G Science DMZ connection to UNM Cancer Center Next-Gen sequencer). (PIs: Prof. S Ness; Prof. C Willman)
Fluvial/Ubik (Earth and Planetary Sciences): A multi-cluster satellite data acquisition and analysis system, operated by CARC for the CREATE resident research group. The system consists of the Fluvial processing system and the Ubik file server. These systems feature a suite of commercial and open-source real-time satellite image processing software for the TeraScan MODIS and AVHRR satellite systems (PI: Prof. L Scuderi).
LWA Data Archive (Physics and Astronomy): Silicon Mechanics Storform iServ R518; Intel Xeon E5620 quad-core, 2.40 GHz, 12 MB cache, 5.86 GT/s QPI; 8 cores, 24 GB RAM; 50 TB RAID 6 storage; Ubuntu OS. Data storage/server for the NSF-supported Long Wavelength Array project (PI: Prof. G Taylor).
m3 (Physics and Astronomy): 8 core, 16 GB RAM 64-bit Intel Xeon system, with 12 TB local workspace disk. Serves as analysis engine for the UNM ATLAS particle physics group and as the CARC gateway system connecting to the Open Science Grid (PIs: Prof. S Seidel, Prof. I Gorelov).
SkyScan System (ARTS Lab and School of Architecture): SkyScan gDome DigitalSky Cluster; ASUS P8Z68-V LX custom build; Intel i7 quad-core; 8 nodes, 8 cores/node; Gb Ethernet interconnect; Windows 7 OS. ARTS Lab supports research in digital graphics, sound, and real-time immersive projection using a 15′ diameter hemispheric domed projection surface (G-Dome Theater Display) with six projectors and five-channel audio (PIs: Prof. T Castillo; D Beining).
Synergy (Translational Informatics/Internal Medicine): PSSC Labs PowerWulf Compute Engine CBeST v. 3.0 Beowulf; 12 nodes, 8 cores/node; 16GB RAM/node; 4.5 TB Accessible RAID Storage; CentOS Operating System. Compute server for cheminformatics and small-molecule drug discovery (PI: Prof. T Oprea).
Zeno (Mathematics and Statistics): Intel Xeon E5620 2.4 GHz; 4 Nodes, 8 cores/node; 32 GB RAM; 1.8 TB RAID Storage; Ubuntu OS. Compute server for computational geometry and biophysics research (PI: Prof. E Coutsias).
Network Access

Energy Sciences Network (ESnet)
The Energy Sciences Network is a high-performance, unclassified national network built to support scientific research. Funded by the U.S. Department of Energy’s Office of Science and managed by Lawrence Berkeley National Laboratory, ESnet provides services to more than 40 Department of Energy (DOE) research sites, including the entire National Laboratory system, its supercomputing facilities, and its major scientific instruments. ESnet also connects to 140 research and commercial networks, permitting DOE-funded scientists to collaborate productively with partners around the world. UNM partners with ESnet to provide services to New Mexico’s national laboratories: Los Alamos National Laboratory and Sandia National Laboratories.

Internet2
Internet2 is the foremost U.S. advanced networking consortium. Led by the research and education community since 1996, Internet2 promotes the missions of its members by providing both leading-edge network capabilities and unique partnership opportunities that together facilitate the development, deployment, and use of next generation Internet technologies. Internet2 brings the U.S. research and academic community together with technology leaders from industry, government and the international community to undertake collaborative efforts that have a fundamental impact on tomorrow’s Internet.
The Internet2 Network is one component of Internet2’s comprehensive systems approach to developing and deploying advanced networking for the research and education community, which encompasses Network Technologies, Middleware, Security, Performance Measurement, and Community Collaboration. UNM researchers have access to all Internet2-connected resources, such as the NSF XSEDE network of supercomputer centers.

Albuquerque GigaPoP
The Albuquerque GigaPoP (ABQG) is an aggregation point for networks providing high-bandwidth connectivity to the State of New Mexico. ABQG is the “on ramp” for all high-speed national networks, including Internet2 and ESnet. Access to the commodity Internet, with peering to keep in-state traffic local, is also available. ABQG is operated by the University of New Mexico and is a state-of-the-art interconnection facility designed to serve research and education programs in the state. Participants include New Mexico Institute of Mining and Technology, New Mexico State University, New Mexico Council for Higher Education Computing Communication Services, and the New Mexico State Agency of IT.