r14 - 13 Nov 2015 - 18:15:24 - ArmenVartapetian

USAtlasStorageSetUp - LOCALGROUPDISK in the US


Each US Tier 1 and Tier 2 site hosts a LOCALGROUPDISK area. This space is not pledged to ATLAS; it is at the disposal of US ATLAS and is intended to hold datasets that benefit US ATLAS analyses, as outlined below. The current US allocation of LOCALGROUPDISK is about 1800 TB, of which about 1100 TB is currently in use.

A complete list of US ATLAS LOCALGROUPDISK locations and usage is available at https://atlas-lgdm.cern.ch/LocalGroupDisk_Usage/index.html.

Why might LOCALGROUPDISK help me?

The key advantage of LOCALGROUPDISK is that you have complete control over what ATLAS datasets go there, and how long they are retained (within the space quotas mentioned below). The RAC encourages the use of LOCALGROUPDISK by US teams and individual users for data that do not meet the criteria for the other ATLAS spacetokens. For example:

  1. Physics or performance data for which US groups are responsible but for which ATLAS GROUPDISK space is not available;
  2. Data for which SCRATCHDISK or USERDISK provide too short a retention period (~2 weeks on SCRATCHDISK and ~3 months on USERDISK).

We encourage you to use LOCALGROUPDISK wherever it could help your work.

How to put data into LOCALGROUPDISK

There are a few ways that data can be placed in LOCALGROUPDISK:

By default, the output of Grid jobs run at non-US sites goes to SCRATCHDISK Rucio Storage Elements (RSEs), which have a quota and a lifetime of ~15 days. For jobs run at US sites, the output goes to USERDISK, which has a quota and a lifetime of ~3 months. You can request (via R2D2) to move your data from SCRATCHDISK or USERDISK to a LOCALGROUPDISK RSE for longer-term storage at a site geographically closer to your working location.

From the R2D2 page you can:
a) Request a new transfer ("rule") at the top of the page under: Data Transfers (R2D2) > Request new rule

b) Review the status of a transfer by going to: Data Transfers (R2D2) > List my rules
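For command-line users, the same replication request can be made with the Rucio client. This is only a sketch: the dataset name, account, and lifetime are placeholders, and it assumes an ATLAS environment with the Rucio client set up and a valid grid proxy:

```shell
# Create a replication rule: 1 copy of the dataset on a LOCALGROUPDISK RSE,
# kept for 90 days (lifetime is given in seconds; omit it for no expiry).
# The dataset name and account below are placeholders.
rucio add-rule --lifetime 7776000 \
    user.<nickname>:user.<nickname>.myanalysis.v1 1 MWT2_UC_LOCALGROUPDISK

# Check the status of your existing rules
rucio list-rules --account <nickname>
```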

Please see the Rucio documentation for additional information and help. Support is provided by the Distributed Analysis Support Team (DAST): hn-atlas-dist-analysis-help@cern.ch

Alternatively, you can use the --destSE option in your PanDA jobs (e.g. prun or pathena) to place the output of a job directly into LOCALGROUPDISK. The names of the LOCALGROUPDISK areas (and their geographical locations) are:

MWT2_UC_LOCALGROUPDISK (Midwest Tier 2, Chicago area)
NET2_LOCALGROUPDISK (Northeast Tier 2, Boston University)
AGLT2_LOCALGROUPDISK (Great Lakes Tier 2, University of Michigan)
OU_OCHEP_SWT2_LOCALGROUPDISK (Oklahoma)
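As an illustration of this direct-output route (a sketch only: the executable, input dataset, and nickname below are placeholders), a prun submission might look like:

```shell
# Write the job's output dataset straight to LOCALGROUPDISK
# instead of SCRATCHDISK/USERDISK by naming the destination RSE.
prun --exec "python myAnalysis.py %IN" \
     --inDS <input dataset> \
     --outDS user.<nickname>.myanalysis.v1 \
     --destSE MWT2_UC_LOCALGROUPDISK
```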

How to access data on LOCALGROUPDISK

Once on LOCALGROUPDISK, your data can be accessed just like any other dataset in Rucio-managed space:

* it can be copied to your local Tier 3 via rucio download
* it can be accessed directly over the network via FAX. Information about using FAX in your analysis can be found here: https://twiki.cern.ch/twiki/bin/viewauth/AtlasComputing/UsingFAXforEndUsersTutorial
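For example, with the Rucio client set up and a valid grid proxy, a copy to your Tier 3 can be made with the rucio download command (the dataset name below is a placeholder):

```shell
# Download all files of the dataset into a local directory
rucio download user.<nickname>.myanalysis.v1

# See which RSEs hold replicas of the dataset
rucio list-dataset-replicas user.<nickname>.myanalysis.v1
```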


Usage of LOCALGROUPDISK space below a threshold (currently 10 TB per user per Tier 2 site, or 30 TB per user summed over all Tier 2 sites) is approved automatically. Larger requests are routed via the US Operations Team and, if necessary, to the RAC. You are welcome to email usatlas-rac-l@lists.bnl.gov to ask whether a proposed use is appropriate and/or possible.

Monitoring LOCALGROUPDISK usage

LOCALGROUPDISK allocations and usage in the US can be monitored at https://atlas-lgdm.cern.ch/LocalGroupDisk_Usage/index.html.

The page currently allows you to see who is using the space (and, by drilling down, what data they have on LOCALGROUPDISK and at which sites), as well as to submit a request for a particular allocation at a particular LOCALGROUPDISK for a specific period of time. More information is provided on the monitoring page itself.

The US monitoring is being actively developed; if you can't see what you need, please send an email to usatlas-rac-l@lists.bnl.gov. We also recommend checking this TWiki periodically; we will try to keep it up to date as the monitoring evolves.


