r6 - 08 Dec 2006 - 10:47:01 - RobertGardner

Midwest Tier2 Action Items

Introduction

These are notes to follow up on TierTwoPlanning tasks and deliverables as discussed at the May Tier2 workshop in Chicago. This page is being updated for the Tier2 workshop at Harvard, August 17-18, 2006.

Facilities

Follow-up from the TierTwoFacilities working group:

Current Inventory of Deployed and Leveraged Facilities

  • MWT2 - UC Site [Prototype Midwest Tier 2, Dedicated Leveraged]
    • 64 nodes, 128 CPUs, 129K SI2K
    • Dual Xeon 3.0 GHz
    • 5 TB total, 3 volumes in rotation
    • Additional storage in UC VO box (4 TB) and local analysis (6 TB)

  • MWT2 - IU Site [Prototype Midwest Tier 2, Dedicated Leveraged]
    • 32 nodes, 64 CPUs, 54K SI2K
    • Dual Xeon 2.4 GHz CPUs
    • 1.5 TB disk via IBM's GPFS
    • 8 TB via SAN (NFSv3)
    • HPSS: 60 TB (tape)

  • Teraport [Dedicated Leveraged]
    • 30% on average of 128 nodes, 84.5K SI2K
    • Dual Opteron, 2.2 GHz
    • 11 TB GPFS storage (shared, currently 3 TB in use by ATLAS)

  • Current totals: 267K SI2K, 28.5 TB disk, 60 TB tape
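As a quick cross-check of the totals above, the per-site CPU figures can be summed directly. This is a minimal sketch using only the numbers quoted in the inventory; the 267K total quoted above appears to be the rounded sum.

```python
# Per-site CPU capacity in kSI2K, copied from the inventory above.
si2k = {
    "MWT2-UC": 129.0,   # 64 nodes, dual Xeon 3.0 GHz
    "MWT2-IU": 54.0,    # 32 nodes, dual Xeon 2.4 GHz
    "Teraport": 84.5,   # ~30% average share of 128 Opteron nodes
}

total = sum(si2k.values())
print(f"Total CPU: {total:.1f}K SI2K")  # prints "Total CPU: 267.5K SI2K"
```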

Deployment of new MWT2 facilities

UC site

The UC cluster comprises MWT2 dedicated compute nodes, dCache read and write pools, grid nodes, and management nodes. These machines are interconnected by a Cisco 6509 Ethernet system (GigE and 10 GigE equipment).

  • 8 compute nodes (dual AMD64 dual core CPUs)
  • 6 additional nodes of the same flavor
  • 1 dCache write pool node with 10 GigE interface
  • 2 dCache write pool nodes
  • 2 Grid nodes with 2 TB RAID5
  • 5 Grid nodes with 320 GB RAID1
  • 1 Head node for cluster management
  • 1 KVM system (8 port, with 15" monitor, keyboard and trackball)
  • 1 remote management system (Cyclades, Ethernet patch panels, 3 PDUs, RJ45 to DB9 serial adapters)
  • 1 spare parts kit
  • 2 rack cabinets
  • Additional ~14 nodes to be purchased in September (TBD).

IU site

The IU cluster comprises MWT2 dedicated compute nodes, dCache read and write pools, grid nodes, and management nodes. These machines are interconnected by a FORCE10 Ethernet system (GigE and 10 GigE equipment).
  • 8 compute nodes (dual AMD64 dual core CPUs)
  • 6 additional nodes of the same flavor
  • 1 dCache write pool node with 10 GigE interface
  • 2 dCache write pool nodes
  • 2 Grid nodes with 2 TB RAID5
  • 5 Grid nodes with 320 GB RAID1
  • 1 Head node for cluster management
  • 1 KVM system (8 port, with 15" monitor, keyboard and trackball)
  • 1 remote management system (Cyclades, Ethernet patch panels, 3 PDUs, RJ45 to DB9 serial adapters)
  • 1 spare parts kit
  • 2 rack cabinets
  • Additional ~10 nodes to be purchased in September (TBD).

Networking

Follow-up from the TierTwoNetworking working group:
  • Current route between IU and UC (10 Gb Starlight connection nearing completion)
  • Hardware identified for NDT installation, plans to proceed with install in the near term
  • 10 Gbps Cisco 6509 router purchased (PS, supervisor, 4-port 10 GigE blade, 48-port GigE blade); now installed DONE
  • 10 Gbps service established to MWT2 machine room DONE
  • 10G VLAN setup between IU and UC DONE

Storage and Data Services

Follow-up from the TierTwoStorageDataServices working group:
  • 65 TB dCache deployed DONE

Policy and Accounting

Follow-up from the TierTwoPolicyAccounting working group:
  • Delivery of policy description for MWT2.
    • 12/2006: The new MWT2 facilities support only grid submissions for two VOMS roles: software (mapped to usatlas2) and production (mapped to usatlas1, for Panda production). The older facilities (UC_ATLAS_Tier2, IU_ATLAS_Tier2, UC_Teraport) continue to support Panda production, production from ATLAS grid users (usatlas3, usatlas4), and general OSG use.
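The role-to-account mapping implied by the policy above can be sketched as a small lookup. This is a hypothetical illustration only: the exact VOMS FQAN strings are assumptions, not taken from the actual MWT2 configuration.

```python
# Hypothetical FQAN -> local account map implementing the stated policy:
# the new MWT2 facilities accept only the production and software roles.
ROLE_MAP = {
    "/atlas/usatlas/Role=production": "usatlas1",  # Panda production
    "/atlas/usatlas/Role=software": "usatlas2",    # software installation
}

def map_account(fqan):
    """Return the local account for an accepted FQAN, or None to reject."""
    return ROLE_MAP.get(fqan)

print(map_account("/atlas/usatlas/Role=production"))  # prints "usatlas1"
print(map_account("/atlas/usatlas/Role=lcgadmin"))    # prints "None" (rejected)
```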

Operations and User Support

Follow-up from the TierTwoOperationsUserSupport working group:
  • RT queues set up by BNL are now in use for both MWT2 support and internal issues.
  • An RT system has also been set up for the UC Teraport facility.

-- RobertGardner - 15 Aug 2006 -- KristyKallback - 15 Aug 2006



Attachments


pdf MWT2-Details.pdf (139.1K) | KristyKallback, 15 Aug 2006 - 17:03 |
 