
RAC Minutes, June 25, 2010


Members (*=present, #=apologies)

*Richard Mount, #Kevin Black (Physics Forum Chair), #Jim Cochran (Analysis Support Manager), Alexei Klimentov (ATLAS ADC), Ian Hinchliffe (Physics Advisor), Rik Yoshida (Tier3 Coordinator), *Michael Ernst (U.S. Facilities Manager), Rob Gardner (Integration Coordinator), *Kaushik De (U.S. Operations Manager), #Armen Vartapetian (U.S. Operations Deputy)

Ex-Officio: *Torre Wenaus, Stephane Willocq, Mike Tuts, *Howard Gordon

Correction/approval of minutes of previous RAC meeting and core team meetings.

All were approved.

Report on the Amsterdam meeting

Michael, Torre and Richard had participated in the "Jamboree on Evolution of WLCG Data and Storage Management". Michael had reported in some detail at the previous day's PS&C Management Meeting.

Torre said the meeting was very useful, particularly coming after the meeting on networking the previous week. Ideas were floated for doing things in new ways – more attention to caching, caching at finer granularity as proposed by René.

Michael was concerned about the ability of the currently funded network to support any T1/2 to any T1/2 traffic, but he noted that the warning sounded by David Foster in his networking talk was probably more alarmist than necessary.

Richard noted the counter arguments: that more comprehensive use of caching might reduce network traffic, and that other sciences were beginning to catch up with and overtake HEP in network usage. Nevertheless, the experiment-led network planning process proposed by David Foster was essential.

Richard noted that the stated motivation for the meeting had been to plan for 2013. He was pleased with the pragmatic outcome that focused on short term improvements and demonstrations with the potential to transform data management.

Operations Report (Kaushik)

The disk space situation is tight but workable – normally there are no daily or nightly panics. Central deletion is still not working well, so, while waiting for improvements, more manual deletion is being performed to maintain breathing room.

PD2P (Panda Dynamic Data Placement) has been running for nine days. 329 subscriptions have been issued: 128 AOD, 119 ESD, 52 Ntuples. The popularity of the AODs was a slight surprise.

Monitoring re-use of PD2P-distributed data is not easy, but so far there is no evidence that re-use is occurring.

The possibility of ceasing or reducing central AOD distribution was discussed. It was agreed that the US would ask that AOD distribution to US T2s be reduced to 1 copy (from 2.5). The 1 copy would be spread over all US T2s.

Approach to Deletion

Richard introduced the topic, saying he favored site autonomy with respect to deletion as a way to ensure scalability.

Kaushik outlined three options for the future:

  1. The Current Plan. All sites are cleaned centrally. Deletion decisions are based on the access information from the DQ2 tracker service. This is OK in principle, and perhaps the central deletion effort should be given more time to make it work. Right now, sites are falling over when their disks become full.
  2. Panda does the deletion. This does not really fit into the Panda workflow and would amount to significant feature bloat.
  3. Local deletion via script (as currently done for the production cache); a sketch of what such a script might look like follows below.
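As an illustration of option 3, the following is a minimal sketch of a local deletion script, assuming datasets live as subdirectories of a site cache and that the least-recently-accessed ones are removed until disk usage drops below a threshold. The path, threshold, and use of filesystem access times are hypothetical and are not taken from the actual production-cache script.

#!/usr/bin/env python
# Hypothetical sketch of a local cache-cleaning script (option 3).
# Assumptions (not from the actual production-cache script):
#   - datasets are subdirectories under CACHE_ROOT
#   - least-recently-accessed datasets are deleted first
#   - deletion stops once disk usage falls below TARGET_USED_FRACTION
import os
import shutil

CACHE_ROOT = "/atlas/datadisk/cache"   # hypothetical path
TARGET_USED_FRACTION = 0.85            # hypothetical threshold

def used_fraction(path):
    """Return the fraction of the filesystem holding 'path' that is in use."""
    st = os.statvfs(path)
    total = st.f_blocks * st.f_frsize
    free = st.f_bavail * st.f_frsize
    return (total - free) / float(total)

def dataset_dirs(root):
    """List (last_access_time, path) for each dataset directory under root."""
    entries = []
    for name in os.listdir(root):
        path = os.path.join(root, name)
        if os.path.isdir(path):
            entries.append((os.stat(path).st_atime, path))
    return sorted(entries)  # oldest access time first

def clean(root, target):
    for _, path in dataset_dirs(root):
        if used_fraction(root) < target:
            break
        print("Deleting least-recently-used dataset: %s" % path)
        shutil.rmtree(path)

if __name__ == "__main__":
    clean(CACHE_ROOT, TARGET_USED_FRACTION)

In practice the selection would presumably use DQ2 tracker popularity information rather than raw filesystem access times, but the overall loop (delete the least-used datasets until the disk is back below a threshold) is the behavior described for the production cache.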

It was agreed that it would be desirable to do a serious demo project with local deletion, but that it was important not to undermine those who were working hard on centrally managed deletion. The demo could help take some pressure off central deletion.

Additional Production

Kevin could not attend but had sent email: "I asked several people to follow the instructions on the Twiki and two students and one faculty member were able to figure it out. So I would conclude that there is a reasonable description which can be followed. How should we announce this production mechanism to US atlas in general?"

It was agreed that the mechanism should be announced using a variety of channels including email to US ATLAS.

AOB

Kevin had sent an email request for additional production: 100,000 full-simulation bbar events with a 15 GeV mu filter. "It will be used to increase the background statistics for the bb for the case of the early W measurements..."

This request was approved.

Action Items

  1. 6/25/2010: Richard/Kevin, Publicize Kevin's Twiki on Additional Production
  2. 5/7/2010: Richard, Create a web page summarizing the dataset distribution targets in the US (In progress)
  3. 4/9/2010: Kevin, Create first version of a Twiki guiding US physicists on requesting Additional Production. (Completed)
  4. 3/26/2010: All but especially core team, Find time during the ADC Workshop next week to identify a point-of-contact for Valid US Regional Production (will propose dropping this item).
