
US ATLAS Tier2 Workshop

Introduction

A two-day workshop, May 9-10, 2006, to review the current status and firm up deployment and operational plans for the US ATLAS Tier2 centers over the next year leading up to LHC turn-on. The deliverable of the meeting will be a deployment plan and schedule with milestones that account for:

  • The overall production and analysis schedules in ATLAS
  • Procurement, deployment status, and plans of existing Tier2 facilities including cluster, storage, and network infrastructure
  • ATLAS-specific services that need to be integrated, validated, deployed, and operated: data and production services resident at the Tier2s and their operational connectivity with Tier1-hosted services.
  • OSG deployed grid middleware infrastructure (production and integration testbed)
  • Operational support issues (ATLAS and grid services)
  • Individual user analysis support
  • Fabric level services (cluster management, storage management, network)
  • Tier2 policies (compute and storage): specification and publishing; accounting and auditing
  • Tier3 support issues

The meeting will be attended by US ATLAS management, Tier1 personnel, and Tier2 managers and administrators. In addition, we anticipate a representative of the physics analysis support centers.

Logistics

The meeting, hosted by the UC site of the Midwest Tier2 Center, will be held at the University of Chicago's Gleacher Center in downtown Chicago. You must register to attend the meeting (in order to plan for meals and coffee).

Registration payment:

  • By check in person on the day of the workshop, or by faxing the following credit card information in advance to Esleen Fultz at 773-834-6818:
    • Name as it appears on the card
    • Credit card number
    • Expiration date
    • 4-digit security code

Make hotel reservations as soon as possible.

Agenda

Reference and input to the meeting

Production, SC4, and general ATLAS user analysis requirements
 - General requirements and schedules from:
      --> SC4 and CSC schedules
      --> software/service schedules (Panda and DQ2 releases, other tools)
      --> other commissioning projects?
 - Validation of software and services on the Tier2
 - Integration and validation with new OSG services
 - Specific requirements on the site:
    * for Panda jobs (e.g., use of a local SRM-based SE, web caching services,
      firewalls and open ports, worker node requirements, local disk
      requirements, etc.). When does WS-GRAM come into play, and what are the
      implications for the Tier2s? (See the connectivity sketch after this list.)
    * for general ATLAS jobs -- guidelines and practices, publication to users
    * for OSG grid users
    * for local/interactive users
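
As a concrete starting point for the site-requirements discussion, below is a minimal sketch of a worker-node connectivity check for Panda-style jobs. The host names and port numbers are placeholders, not actual US ATLAS endpoints; each site would substitute its own SRM/SE, web cache, and Panda server addresses and the ports its firewall policy is meant to allow.

import socket

# (description, host, port) -- all example values, to be replaced per site
ENDPOINTS = [
    ("local SRM-based SE", "se.example.edu", 8443),
    ("site web cache", "cache.example.edu", 3128),
    ("Panda server", "panda.example.org", 80),
]

def can_connect(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        sock.connect((host, port))
        return True
    except socket.error:
        return False
    finally:
        sock.close()

if __name__ == "__main__":
    for name, host, port in ENDPOINTS:
        status = "OK" if can_connect(host, port) else "UNREACHABLE"
        print("%-20s %s:%d  %s" % (name, host, port, status))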

Tier2 data services
  - model: which services are provided; do we have archetypes for deployment
    and operational support; interaction with physics working groups and
    analysis support centers (dataset provisioning and management, priorities)
  - ATLAS distributed databases (tag, conditions, geometry, etc.) and catalogs
  - web caching services (see the proxy-probe sketch after this list)
  - DDM related services
  - user access and usage
  - interaction of all the above with backend storage systems (dCache, SRM,
    filesystems, etc.; see below)
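
For the web caching item, here is a rough probe sketch: it fetches a test URL twice through the site cache and reports the X-Cache response header. The proxy address and test URL are placeholders, and the X-Cache header is a Squid convention that may not be present on other cache implementations.

import urllib.request

PROXY = "http://cache.example.edu:3128"   # hypothetical site web cache
TEST_URL = "http://example.org/"          # any cacheable test URL

opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": PROXY}))

def fetch_via_cache(url):
    """Fetch url through the proxy and return the X-Cache header, if any."""
    with opener.open(url, timeout=10) as resp:
        return resp.headers.get("X-Cache", "(no X-Cache header)")

if __name__ == "__main__":
    # Two consecutive fetches: the second should show a cache HIT if the
    # cache is working and the object is cacheable.
    for attempt in (1, 2):
        print("fetch %d: %s" % (attempt, fetch_via_cache(TEST_URL)))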

Storage and DDM requirements for site storage and storage management
  - SRM/dCache production-level services
      * this is probably the most important issue
      * need to get on the same footing as the BNL Tier1 w.r.t. release,
        validation, availability of dccp clients, etc. (see the dccp read-test
        sketch after this list)
  - other local storage, home servers for users
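
To make the "same footing as BNL" item testable, here is a rough sketch of a dccp read test against the site SE. The dcap door, port, and pnfs path are placeholders that depend on the site's dCache layout and on the dccp client release actually deployed.

import os
import subprocess
import sys
import tempfile

# Example dcap door and pnfs path -- replace with the site's actual values.
TEST_SOURCE = "dcap://dcache.example.edu:22125/pnfs/example.edu/data/testfile"

def dccp_read_test(source):
    """Copy a known test file out of dCache with dccp and report the result."""
    dest = os.path.join(tempfile.gettempdir(), "dccp_test_%d" % os.getpid())
    try:
        try:
            rc = subprocess.call(["dccp", source, dest])
        except OSError as exc:
            return False, "could not run dccp: %s" % exc
        if rc != 0:
            return False, "dccp exited with status %d" % rc
        return True, "copied %d bytes" % os.path.getsize(dest)
    finally:
        if os.path.exists(dest):
            os.remove(dest)

if __name__ == "__main__":
    ok, msg = dccp_read_test(TEST_SOURCE)
    print("dccp read test: %s (%s)" % ("PASS" if ok else "FAIL", msg))
    sys.exit(0 if ok else 1)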

Site architecture and configuration --- review where we are, discuss
modifications for the next procurements -- perhaps this can come out in
the site reports.
  - CPU farms
  - Storage
  - Grid services
  - Edge servers
  - Interactive hosts
  - Hosts for Tier3 integration?
  - Isolating ATLAS and OSG workloads

Monitoring and Accounting
  - what are the precise monitoring requirements: which site-level systems,
    reporting to which top-level servers, etc.
  - service alerts and monitors
  - for operations, and for capturing and tracking ATLAS-specific issues
  - accounting for RAC purposes
  - accounting for LCG purposes
  - accounting for OSG purposes (see the per-VO aggregation sketch after this list)
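
For the accounting items, a minimal sketch of the per-VO aggregation step is below. It assumes a simple site-local job summary file with one line per finished job ("vo,user,wallclock_seconds,cpu_seconds"), which is not a real RAC/LCG/OSG accounting format; the actual feeds (batch logs, OSG and LCG accounting records) differ and would need their own parsers.

import csv
import sys
from collections import defaultdict

def cpu_hours_by_vo(path):
    """Return a dict mapping VO name to total CPU-hours from the summary file."""
    totals = defaultdict(float)
    with open(path) as f:
        for row in csv.reader(f):
            if len(row) != 4:
                continue  # skip blank or malformed lines
            vo, user, wall_s, cpu_s = row
            totals[vo] += float(cpu_s) / 3600.0
    return dict(totals)

if __name__ == "__main__":
    for vo, hours in sorted(cpu_hours_by_vo(sys.argv[1]).items()):
        print("%-12s %10.1f CPU-hours" % (vo, hours))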

Policy publication and implementation
  - GUMS configurations
  - Not only the queues, priorities, and quotas, but also Tier2 usage
    by general ATLAS users (analysis, production, interactive)
  - local scheduler configurations (PBS and Condor); see the group-quota
    sketch after this list
  - publication of US ATLAS and site-level policies.
  - policy change control
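
As an illustration of turning a published policy into a local scheduler configuration, here is a sketch that renders a hypothetical slot-share table as Condor group-quota lines. The group names and numbers are invented, and the exact knobs (GROUP_NAMES, GROUP_QUOTA_<group>) should be checked against the Condor release deployed at each site.

# Hypothetical policy: accounting group -> number of slots reserved for it.
POLICY = {
    "group_atlas_prod": 200,
    "group_atlas_analysis": 100,
    "group_osg_other": 20,
}

def condor_group_config(policy):
    """Render a slot-share table as Condor group-quota configuration text."""
    lines = ["GROUP_NAMES = %s" % ", ".join(sorted(policy))]
    for group in sorted(policy):
        lines.append("GROUP_QUOTA_%s = %d" % (group, policy[group]))
    return "\n".join(lines)

if __name__ == "__main__":
    print(condor_group_config(POLICY))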

OSG integration
  - running OSG services on Tier2 resources
  - integration / validation of ATLAS on OSG
  - supporting other VOs on US ATLAS resources
  - interfacing to OSG operations, security, etc.

LCG interoperability
  - how do we support this, at what level, and how do we account for it
    correctly?
  - providing a US ATLAS BDII for this purpose, plus a Condor ClassAd generator
    (see the sketch after this list)
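
A rough sketch of what the ClassAd generator item could look like is below: it renders a flat site record (e.g., as pulled from the BDII) as old-style Condor ClassAd text for matchmaking. The attribute names and the example site entry are illustrative; the attributes actually needed depend on how the matchmaker is configured.

# Hypothetical site record, e.g. as extracted from the BDII.
SITE = {
    "Name": "MWT2_UC",
    "GlueCEUniqueID": "gate.example.edu:2119/jobmanager-condor",
    "FreeCPUs": 120,
    "MaxWallClockTime": 2880,
}

def to_classad(record):
    """Render a flat dict as old-style Condor ClassAd text, one attribute per line."""
    lines = []
    for key, value in sorted(record.items()):
        if isinstance(value, str):
            lines.append('%s = "%s"' % (key, value))
        else:
            lines.append("%s = %s" % (key, value))
    return "\n".join(lines)

if __name__ == "__main__":
    print(to_classad(SITE))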

Operations
  - User support: a basic plan for servicing local issues, interfacing
    with overall ATLAS operations groups
  - Coordination with, and support for, the analysis support centers
  - Admin support
     * Maintenance and upgrades in the presence of ongoing SCs and
       production running.
  - DDM operations monitoring
  - Panda operations monitoring

Network upgrades
  - follow up on upgrades or new issues from the networking workshop
  - combined with the discussion of storage services, plan data I/O tests and
    identify bottlenecks and plans to address them
  - network tuning parameters -- recommendations (see the parameter-check
    sketch after this list)
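
To anchor the tuning-parameter recommendations, here is a sketch of a node-level check that compares current TCP buffer sysctls against target values. The recommended numbers are placeholders standing in for whatever comes out of the networking discussion.

import os

# Hypothetical recommended settings (bytes); to be replaced by agreed values.
RECOMMENDED = {
    "net/core/rmem_max": 16777216,
    "net/core/wmem_max": 16777216,
}

def current_value(key):
    """Read an integer sysctl value from /proc/sys, or None if unreadable."""
    try:
        with open(os.path.join("/proc/sys", key)) as f:
            return int(f.read().split()[0])
    except (IOError, OSError, ValueError):
        return None

if __name__ == "__main__":
    for key, wanted in sorted(RECOMMENDED.items()):
        have = current_value(key)
        flag = "OK" if have is not None and have >= wanted else "CHECK"
        print("%-20s current=%s recommended>=%d  [%s]" % (key, have, wanted, flag))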

Tier3 model
  - can we flesh the model out a bit, even crudely?
  - how should Tier2s support these facilities?
  - what is the model for data access and code development, e.g., in relation
    to datasets hosted at the Tier2s?
  - other Tier2 support for Tier3 facilities


-- RobertGardner - 26 Apr 2006


