
U.S. ATLAS Tier 1 Computing Facility

The U.S. ATLAS Tier 1 Computing Facility supports the computing needs of U.S. physicists working with the ATLAS experiment, currently under construction at the European Laboratory for Particle Physics (CERN) in Geneva, Switzerland.
The Linux farm at the ATLAS Computing Facility (ACF) consists of 750 rack-mounted, commodity-based processors (rated at 3 TFLOPS) and 200 TB of disk storage.

The Global Computing Environment (GCE) group provides the bulk of the support services at the RHIC Computing Facility (RCF) and the ATLAS Computing Facility (ACF), including electronic mail, user account management, facility access management, file backup, help desk, web services, document processing, and printing services. GCE also manages a centralized high-throughput storage system consisting of a 250 TB Fibre Channel SAN and a high-density 100 TB Panasas storage appliance capable of 300 MB/s aggregate performance.

The dCache storage management software, jointly developed by Fermilab and DESY, is a distributed disk caching front end for a tape storage facility. At the RCF/ACF, dCache leverages the advantages of distributed storage by providing users with a transparent global namespace for efficient access to the data on the multi-petabyte storage facility, as well as to local storage on the Linux farm servers. In addition, dCache acts as a Grid-enabled storage element by providing remote access to the data via Grid middleware tools.

The High Performance Storage System (HPSS) at the RCF/ACF is a combination of several robotic systems and proprietary software designed to provide reliable, high-throughput parallel archiving and retrieval of large amounts of data on a 24x7, year-round basis. The storage hardware comprises six tape silos with a combined capacity of up to 29,000 tapes, 124 tape drives, 7 PB of tape storage, and 18 TB of front-end disk cache for all storage classes. HPSS is capable of data transfer rates of up to 1200 MB/s.