SummaryReportP7

r2 - 22 Dec 2008 - 14:28:32 - WeiYang



This report covers Phase 7 of the IntegrationProgram, spanning Oct 1 - Dec 31, 2008. Meetings during this period:

Procurement reports and capacity status

See also CapacitySummary in which we compare pledge and deployed capacities for each phase of the integration program.

Procurements during Phase 7 (Oct 1 - Dec 31, 2008), listed as processing cores and usable TB:

  • T1:
  • AGLT2:
  • MWT2_IU: 160 cores, 156 TB
  • MWT2_UC: 192 cores, 156 TB
  • NET2:
  • SWT2-UTA:
  • SWT2-OU:
  • WT2: 64 TB

Capacity status (dedicated processing cores, usable storage) as of December 31, 2008, listed as processing cores and usable TB:

  • T1:
  • AGLT2:
  • NET2:
  • MWT2_IU: 768 cores, 266 TB
  • MWT2_UC: 1188 cores, 258 TB
  • SWT2-UTA:
  • SWT2-OU:
  • WT2: 33% of 1852 shared cores ≈ 617 cores, 275 TB

  • for reference: dedicated processing cores, usable storage as of Sep 30, 2008:
    • T1: 4000 cores, 2.1 PB
    • AGLT2: 924 cores, 400 TB plus 170 TB in dCache. NOTE: Doesn't reflect acquisitions purchased above (they likely won't be online by September 30th)
    • NET2: 570 cores, 170 TB
    • MWT2_IU: 608 cores, 110 TB
    • MWT2_UC: 996 cores, 102 TB
    • SWT2-UTA: 520 cores, 81 TB
    • SWT2-OU: 260 cores, 16 TB
    • WT2: 33% of 1852 shared cores ≈ 617 cores, 211 TB
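The WT2 figures above are derived from a share of a common cluster rather than dedicated hardware. As a sanity check on the quoted numbers (the 33% share and the 1852-core total are taken directly from the lines above), the dedicated-equivalent core count works out as:

```python
# WT2 runs on a shared cluster; US ATLAS is entitled to roughly a
# one-third (quoted as 33%) share of the 1852 total cores.
total_shared_cores = 1852
atlas_share = 1 / 3  # quoted as 33% in the report

dedicated_equivalent = round(total_shared_cores * atlas_share)
print(dedicated_equivalent)  # 617, matching the reported figure
```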

ATLAS release installation via Pacballs & DDM

  • Standard ATLAS software installation script integrated as a Panda job for OSG sites
  • Pacballs for releases distributed to BNL via DDM automatically
  • Software installation submission with privileged role deployed
  • Release installation via this framework tested at Tier 1 and Tier 2
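The four steps above form a pipeline: a pacball (pre-packaged release) is replicated to the site via DDM, then a Panda job runs the standard installation script under a privileged role. A minimal sketch of such an install-job description is below; all field names, the dataset name, the role string, and the script name are illustrative assumptions, not the actual Panda job schema:

```python
# Hypothetical sketch of the software-installation workflow described
# above. Every field name and value here is an illustrative placeholder,
# not the real Panda job schema.
install_job = {
    "job_type": "install",
    "release": "14.2.20",                    # release cited elsewhere in this report
    "pacball_dataset": "pacball.14.2.20",    # hypothetical DDM dataset name
    "voms_role": "/atlas/Role=software",     # assumed name for the privileged install role
    "transformation": "install_release.sh",  # hypothetical standard install script
}

def submit_install_jobs(sites, job):
    """Fan the install job out to each target site, as the framework does."""
    return [{**job, "site": s} for s in sites]

jobs = submit_install_jobs(["BNL_ATLAS", "MWT2_UC"], install_job)
print(len(jobs))  # one install job per target site
```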

Storage management

Storage capacity recommendations/guidance for the Facility (ref MinutesJune11; "-ext" denotes additional DPD capacity):

| Token | Oct 15 | Oct 15-ext |
| USATLAS space* | 64 TB | 64 TB |
| Total | 363 TB | 435 TB |
  • The 64 TB is the US regional quota, which will most likely be distributed among USER, GROUP and LOCALUSER tokens.
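The split of the 64 TB regional quota among the three candidate tokens is not yet decided; the even split below is purely an illustrative example of such an allocation:

```python
# The 64 TB US regional quota will "most likely be distributed among
# USER, GROUP and LOCALUSER tokens". The even split below is a purely
# illustrative example, not a decided allocation.
quota_tb = 64
tokens = ["USER", "GROUP", "LOCALUSER"]

# Even split, with the remainder assigned to USER (hypothetical policy).
base, remainder = divmod(quota_tb, len(tokens))
allocation = {t: base for t in tokens}
allocation["USER"] += remainder

assert sum(allocation.values()) == quota_tb
print(allocation)  # {'USER': 22, 'GROUP': 21, 'LOCALUSER': 21}
```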


  • Space tokens implemented and Panda pilot integrated for three storage element types: SRM-dCache (4 sites), SRM-Bestman-Xrootd (2 sites), SRM-Bestman-Gateway (2 sites).
  • The October 15 targets should be reached with the new storage procurements, but deployment is still in progress at most sites.
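The three storage-element flavors and their site counts above account for all eight facility sites (T1, AGLT2, MWT2_IU, MWT2_UC, NET2, SWT2-UTA, SWT2-OU, WT2), which can be checked directly:

```python
# Storage-element types deployed across the facility, with the site
# counts quoted above (4 + 2 + 2).
se_sites = {
    "SRM-dCache": 4,
    "SRM-Bestman-Xrootd": 2,
    "SRM-Bestman-Gateway": 2,
}

# Matches the eight sites listed in this report: T1, AGLT2, MWT2_IU,
# MWT2_UC, NET2, SWT2-UTA, SWT2-OU, WT2.
total_sites = sum(se_sites.values())
print(total_sites)  # 8
```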

Network Monitoring

Deployment (procurement, installation, testing, integration) of network monitoring infrastructure throughout the US ATLAS Facility. Includes:
  • Procurement of two network monitoring hosts
  • Installation of US-LHC network monitoring OS and toolkits


File Catalogs

  • See FileCatalog for the full program of work during this period for deployment of LFC.
  • LFC deployed at all sites in the facility, integrated with DDM and Panda production services.

Load testing

  • 200 MB/s, 400 MB/s benchmarks achieved at some Tier 2 facilities, as per SiteCertificationP7.
  • Multi-site Tier 1-Tier 2 load test achieving 1 GB/s for several hours.
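At the quoted 1 GB/s multi-site rate, the data volume moved is easy to estimate; the 4-hour duration below is only one reading of "several hours":

```python
# Volume transferred at the sustained multi-site rate quoted above.
# "Several hours" is taken as 4 hours purely for illustration.
rate_gb_per_s = 1.0
hours = 4

volume_tb = rate_gb_per_s * 3600 * hours / 1000  # GB -> TB (decimal units)
print(volume_tb)  # 14.4 TB moved in 4 hours at 1 GB/s
```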

Analysis milestones

Analysis benchmarks of various types, and at various scales. The benchmarks defined here are performed on each Tier 2 facility and noted in the SiteCertificationP7.
  • standard means a standard pathena job defined by a specific release, application, and input dataset.
    • D3PD-making jobs with release 14.2.20
    • Run on an FDR2 container dataset, jamboree08_run2.0052280.physics_Egamma.merge.AOD.o3_f8_m10/
  • suite means a package of templated jobs of various types, used to validate functionality of the analysis queues.
    • D3PD-making jobs
    • AODtoDPD making jobs
    • TAG selection jobs
    • ARA jobs
  • Analysis benchmarks 1-3: 100/200/400 jobs.
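A "standard" benchmark submission might be assembled as sketched below. The release and input dataset are taken from the list above and the --inDS/--outDS style follows common pathena usage, but the job options file and output dataset name are hypothetical placeholders:

```python
# Sketch of assembling a "standard" pathena benchmark submission for the
# release and FDR2 container dataset named above. The job options file
# and output dataset name are hypothetical placeholders.
release = "14.2.20"
in_ds = "jamboree08_run2.0052280.physics_Egamma.merge.AOD.o3_f8_m10/"
out_ds = "user.someone.benchmark.test/"  # hypothetical

cmd = " ".join([
    "pathena",
    "D3PDMaker_topOptions.py",  # hypothetical D3PD-making job options
    f"--inDS {in_ds}",
    f"--outDS {out_ds}",
])
print(cmd)
```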

US ATLAS facility client support

This task covers the provisioning of various ATLAS and OSG client tools for use in the US ATLAS Facility by site administrators and physicist-users. This includes:
  • Providing a package of OSG client middleware components that can be used in conjunction with dq2-client tools released by DDM to successfully and efficiently access US ATLAS storage elements. See further WlcgClient.
  • A worker-node client component which includes LFC client utilities packaged by VDT for use with the standard OSG 1.0 worker-node client package. This provides Panda pilots with the LFC client programs they need to access LFC catalogs.
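The worker-node client gives pilots the LFC command-line tools. A pilot-side catalog lookup can be sketched as below; the catalog host and logical file name are hypothetical placeholders:

```python
# Sketch of how a Panda pilot could invoke the packaged LFC client
# utilities from the OSG worker-node client. The host and LFN below are
# hypothetical placeholders.
import os

os.environ["LFC_HOST"] = "lfc.example.org"      # site's LFC endpoint (hypothetical)
lfn = "/grid/atlas/dq2/some/dataset/file.root"  # hypothetical logical file name

# lfc-ls is one of the LFC client programs provided by the package.
lookup_cmd = ["lfc-ls", "-l", lfn]
print(" ".join(lookup_cmd))
```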


Carryover issues from Phase 6

  • High level goals in Integration Phase 7 (from BNL workshop):
    • Pilot integration with space tokens - done
    • LFC deployed and commissioned: DDM, Panda-Mover, Panda fully integrated - done
    • Transition to /atlas/Role=Production proxy for production - done
    • Storage
      • Procurements: keep major purchases at 3 sites (AGLT2, MWT2, NET2) on schedule
      • Space management and replication: new site clean-up tool, DQ2SiteCleanse
    • Network and Throughput
      • Monitoring infrastructure and new gridftp server deployed at 4/5 Tier 2s and the Tier 1
      • Throughput targets reached
    • Analysis
      • New benchmarks for analysis jobs coming from Nurcan: robots, stress tests
    • Upcoming Jamborees DONE
    • Probably will hold another US ATLAS Tier 2/Tier 3 meeting, Winter/Early Spring - scheduled
    • OSG site admins meeting coming up: https://twiki.grid.iu.edu/bin/view/SiteCoordination/SiteAdminsWorkshop2008 DONE

Carryover issues for Phase 8

  • Complete commissioning of new storage and compute servers at all sites.
  • Throughput targets:
  • Analysis stress testing:
  • WLCG storage capacity reporting:

-- RobertGardner - 22 Dec 2008
