
RAC Minutes, August 27, 2010


Members (*=present, #=apologies)

*Richard Mount, *Kevin Black (Physics Forum Chair), #Jim Cochran (Analysis Support Manager), Alexei Klimentov (ATLAS ADC), Ian Hinchliffe (Physics Advisor), Rik Yoshida (Tier3 Coordinator), *Michael Ernst (U.S. Facilities Manager), Rob Gardner (Integration Coordinator), *Kaushik De (U.S. Operations Manager), *Armen Vartapetian (U.S. Operations Deputy)

Ex-Officio: *Torre Wenaus, Stephane Willocq, Mike Tuts, *Howard Gordon

Correction/approval of minutes of previous RAC meeting.

Approved.

Summary of Operational Issues in the Last Month (Kaushik, Michael)

Kaushik: Everything has been running smoothly. Sites still get full of data from time to time, but a combination of central deletion and local deletion managed by Armen and Wensheng Deng has been working well. It would be good to increase the level of automation in the deletion process.
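
As an illustration of the kind of automation discussed, a minimal sketch of a space-cleanup heuristic (purely hypothetical: the thresholds, attributes, and least-recently-accessed selection rule are illustrative assumptions, not the actual ATLAS deletion tooling):

  # Hypothetical sketch of an automated cleanup heuristic for a storage site:
  # when used space exceeds a high-water mark, delete the least recently
  # accessed non-custodial replicas until a low-water mark is reached.
  from dataclasses import dataclass

  @dataclass
  class Replica:
      dataset: str
      size_tb: float
      last_access_day: int   # days since last access
      is_custodial: bool     # custodial copies are never auto-deleted

  def select_for_deletion(replicas, used_tb, capacity_tb,
                          high_water=0.90, low_water=0.80):
      """Return replicas to delete once usage crosses the high-water mark."""
      if used_tb / capacity_tb < high_water:
          return []
      target_tb = capacity_tb * low_water
      # Oldest-access first; never touch custodial copies.
      candidates = sorted((r for r in replicas if not r.is_custodial),
                          key=lambda r: r.last_access_day, reverse=True)
      to_delete = []
      for r in candidates:
          if used_tb <= target_tb:
              break
          to_delete.append(r)
          used_tb -= r.size_tb
      return to_delete

Any real policy would of course also protect datasets pinned by ongoing analyses; the point of the sketch is only that the manual procedure follows rules simple enough to automate.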

An increase in requests for T2-->T3 transfers has been noted. All such transfers (e.g. for 500 MB) have to be manually approved according to current ATLAS policy. Kaushik proposed asking the ADC to allow US T2-->T3 transfers to proceed automatically up to a threshold of 1 TB. Removing all restrictions might, in principle, cause competition for network transfer bandwidth. There was doubt that transfers to the US T3's could cause serious problems given the relatively small T3 installations, but the RAC agreed that the 1 TB threshold should be proposed to the ADC. It was not clear how easily a region-specific threshold could be implemented.
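
To make the proposed rule concrete, a minimal sketch (the function and cloud labels are hypothetical; the 1 TB figure is the threshold proposed above, and the actual DDM subscription machinery may differ):

  # Hypothetical sketch of the proposed auto-approval rule for T2->T3
  # transfers: requests within the US cloud below a size threshold proceed
  # automatically; larger ones still require manual approval.
  AUTO_APPROVE_LIMIT_TB = 1.0  # threshold proposed to the ADC

  def needs_manual_approval(request_size_tb, source_cloud, dest_cloud):
      # Region-specific rule: only US T2 -> US T3 transfers are exempted.
      if source_cloud == "US" and dest_cloud == "US":
          return request_size_tb > AUTO_APPROVE_LIMIT_TB
      return True  # all other transfers keep the existing manual policy

Under such a rule the 500 MB example above would proceed automatically, while anything above 1 TB, or any transfer outside the US cloud, would still require manual approval.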

Michael: Production is idling right now and analysis is very spiky. There is substantial spare CPU capacity.

Additional production requests

Nurcan Ozturk requested increasing each of two 7 TeV SUSY samples from 10k to 100k events:

"I would like to increase the size of the two SUSY samples that were produced in the central production system before. These samples are:

  1. mc09_7TeV.106457.SO10_axinoLSP_jimmy_susy.merge.AOD.e530_s765_s767_r1302_r1306/
  2. mc09_7TeV.114012.SO10_DR3_jimmy_susy.merge.AOD.e540_s765_s767_r1302_r1306/
Only 10k events were produced in the central production for each sample. I would like to increase this to 100k to be able to obtain a good number of signal events."

Kevin Black requested 5M diphoton background events:

"We need 5M of standard JF35, in addition to the alreadsy existing data set

mc09_7TeV.105807.JF35_pythia_jet_filter.merge.AOD.e505_s765_s767_r1302_r1306

where the mean cross-section is Xsec = 5.4969E+07 and the mean filter efficiency is 1.5667E-01.

Short justification: we need these for the Exotics diphoton+MET analysis, for which we aim to put out a first draft paper at the beginning of September using 1 pb-1 of data. Our data-driven background estimation method needs to be validated with sufficient statistics; presently the error is 20%. Our systematics also rely on data/MC comparisons, which have large statistical errors."
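
For scale, the quoted numbers imply an effective cross-section after filtering and an equivalent luminosity for the requested N = 5M events of (assuming the cross-section is quoted in nb, which is an assumption here, not something stated in the request):

  \sigma_{\mathrm{eff}} = \sigma \, \epsilon_{\mathrm{filter}}
    = (5.4969\times10^{7}\,\mathrm{nb}) \times (1.5667\times10^{-1})
    \approx 8.6\times10^{6}\,\mathrm{nb},
  \qquad
  L_{\mathrm{eq}} = \frac{N}{\sigma_{\mathrm{eff}}}
    = \frac{5\times10^{6}}{8.6\times10^{6}\,\mathrm{nb}}
    \approx 0.58\,\mathrm{nb}^{-1}

If the units assumption holds, even 5M filtered events correspond to far less integrated luminosity than the 1 pb-1 of data, consistent with using the sample to validate a data-driven method rather than to estimate the background from MC directly.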

Approval of both requests was enthusiastically confirmed by the RAC. In accordance with normal RAC procedure, both had already been approved in email exchanges to minimize delays.

Additional Reconstruction Request

Vivek Jain had asked whether a request to reprocess ~100M minimum-bias events would be acceptable; he was not yet quite ready to run. The RAC assumed that this would require access to raw data. There was some discussion of the special issues associated with accessing raw data and, potentially, generating a substantial output dataset. The RAC concluded that such requests should not pose major problems: the task would run most naturally at the T1, but moving data to the T2s would also not be a major problem. Vivek should be encouraged to provide more detailed information on the needed input datasets and the size of the output. Issues may arise in this production, but there would be value in exercising the system on a task that is likely typical of many future requests.

Strategy for support of high-memory jobs (heavy ion etc.)

Richard introduced the topic, saying that he personally did not know how jobs needing more than 2 GB of memory could be submitted to the ATLAS cloud and find their way to the sites (e.g. some queues at BNL) that were prepared to execute them. Technically facilitating the processing of large-memory jobs, while at the same time taking correct account of their elevated cost, seemed to be a requirement for the future. No member present had a clear view of the current situation. Kaushik and Torre agreed to investigate (or remember) the current status.
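
For reference, a minimal sketch of what such memory-aware routing might look like (entirely hypothetical: the queue names, limits, and cost weighting are illustrative assumptions, not PanDA's actual brokerage code or site-configuration attributes):

  # Hypothetical sketch of memory-aware brokerage: route a job only to
  # queues whose per-job memory limit covers its requirement, and charge
  # it proportionally more when it occupies a high-memory slot.
  QUEUES = {
      "BNL_ATLAS_2GB":   {"max_memory_mb": 2048},
      "BNL_ATLAS_HIMEM": {"max_memory_mb": 6144},
  }

  def eligible_queues(job_memory_mb):
      return [name for name, q in QUEUES.items()
              if q["max_memory_mb"] >= job_memory_mb]

  def cost_weight(job_memory_mb, standard_mb=2048):
      # A >2 GB job displaces several standard jobs on the same node,
      # so account for it at a proportionally higher cost.
      return max(1.0, job_memory_mb / standard_mb)

For example, eligible_queues(4096) returns only the high-memory queue, and cost_weight(4096) charges such a job as roughly two standard slots, the "elevated cost" accounting mentioned above.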

AOB

None.

Action Items

  1. 8/27/2010: Kaushik and Torre: investigate the current state of the technology for routing large-memory jobs to the sites/queues prepared to execute them.
  2. 5/7/2010: Richard: create a web page summarizing the dataset distribution policies for the US resources.
