r15 - 19 Nov 2009 - 17:07:50 - MaximPotekhin

Open Science Grid middleware extensions activity at BNL


Program overview

The Open Science Grid (OSG) is a consortium that was awarded funding as a follow-on to the Particle Physics Data Grid (PPDG) and other US grid projects, with the broadened objectives of establishing a US computing grid infrastructure serving LHC and other application science needs, and integrating this US infrastructure with international grids such as the Worldwide LHC Computing Grid (WLCG).

The OSG work program covers two principal areas: providing an open distributed computing facility through collaboration among computing centers and a common middleware foundation, and extending OSG capability through targeted, science-driven software tools that augment the foundation middleware and are required by LHC and other demanding science applications. BNL is involved in OSG through ATLAS and STAR and participates in both activity areas. In the facility area, BNL focuses on security systems and mass storage interfaces; in the software extensions area, the focus is principally on workload management and on distributed data management and storage. The high-level BNL tasks are

  1. Provide distributed production and data management services at BNL and among US ATLAS collaborating institutions, coherently with international ATLAS via integration with CERN and WLCG services.
  2. Provide distributed data management services for STAR between BNL and LBNL.
  3. Augment these production-oriented job and data management services with support for ATLAS and STAR distributed analysis.

The purpose of the software component of this effort is to

  • work with OSG participants and collaborating projects (particularly Condor) in developing a workload management system supporting US ATLAS distributed production/analysis and wider OSG use, based primarily on integration of existing Condor and experiment software
  • support the integration, deployment and operation of this system for ATLAS
  • provide support and maintenance of this system for OSG users
  • participate in the leadership of the extensions area of OSG

Program Components

Current Staff Schedule

| Percent | Area | Person | Role | OSG paid | FTE |
| 5% | Extensions | Howard Gordon | PI | N | 0.00 |
| 30% | Extensions | David Stampf | Developer | Y | 0.30 |
| 25% | Security | John Hover | Developer | Y | 0.25 |
| 70% | Extensions | Jose Caballero | Developer | Y | 0.70 |
| 30% | Outreach | Jose Caballero | Liaison | Y | 0.30 |
| 70% | Extensions | Maxim Potekhin | OSG Area Co-Coordinator | Y | 0.70 |
| 30% | Engagement | Maxim Potekhin | Liaison | Y | 0.30 |
| 5% | US ATLAS Tier 1 Facility | Michael Ernst | Project Advisor | N | 0.00 |
| 20% | Extensions | Torre Wenaus | Area Co-Coordinator & Co-PI | N | 0.00 |

High level milestones

| Date | Milestone |
| March 2007 | Complete deployment of dCache at US ATLAS Tier 2s |
| June 2007 | Production deployment of experiment-neutral Panda as a supported general OSG service |
| June 2007 | ATLAS validation of OSG infrastructure and extensions in a full-chain production challenge |
| September 2007 | Deliver Panda/Condor integration phase 1 |
| November 2007 | Workload and data management extension performance baseline documentation completed |
| December 2007 | Panda/Condor integration phase 1 deployed and in production on OSG |
| September 2008 | Deliver Panda/Condor integration phase 2 |
| December 2008 | Panda/Condor integration phase 2 deployed and in production on OSG |
| June 2009 | Panda service migrated from BNL to CERN |
| August 2009 | Comprehensive scalability test |
| September 2009 | Panda jobs run in a uid-switched environment (glexec) |
| November 2009 | User Analysis Test |


Major updates:
-- MaximPotekhin - 19 Nov 2009
-- MaximPotekhin - 06 Nov 2009
-- MaximPotekhin - 14 Oct 2008
-- TorreWenaus - 25 Sep 2006

About This Site

This site is a content mirror of the BNL US ATLAS TWiki.

