
Welcome to the home of SC2005

SC2005 Poster: Small (183kB JPEG) Full Size (338MB TIFF)

SC 2005 Booth Shift Schedule

GUMS: Grid User Management System

GUMS is a major Open Science Grid component in the VO Privilege Project, sponsored by Brookhaven National Laboratory and Fermilab, to investigate and implement fine-grained authorization for access to OSG-enabled resources (computing, storage, and networking) and services. GUMS improves user account assignment and management at OSG sites and reduces the associated administrative overhead. We will give an animated demo of how a user is authenticated and authorized to use OSG resources in a framework consisting of GUMS, VOMS, and the OSG gatekeepers. Please see the Web Demo link.
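
For illustration only, here is a minimal sketch of the kind of mapping decision GUMS makes: given a user's certificate DN and VOMS group/role (FQAN), decide which local account the gatekeeper should use. The policy entries, account names, and DN below are hypothetical examples and do not reflect the actual GUMS interface or any site's configuration.

<verbatim>
# Toy illustration of GUMS-style account mapping: map (certificate DN, VOMS FQAN)
# to a local UNIX account. All policies and names are hypothetical examples.

# Hypothetical site policy: a VOMS group/role maps either to a shared group
# account or to a pool of accounts handed out one per distinct DN.
GROUP_ACCOUNTS = {
    "/atlas/usatlas/Role=production": "usatlasprd",                 # group account
}
POOL_ACCOUNTS = {
    "/atlas/usatlas": ["usatlas001", "usatlas002", "usatlas003"],   # pool accounts
}
_pool_assignments = {}   # remembers which DN received which pool account

def map_user(dn, fqan):
    """Return the local account for this DN/FQAN, or None if not authorized."""
    if fqan in GROUP_ACCOUNTS:
        return GROUP_ACCOUNTS[fqan]
    if fqan in POOL_ACCOUNTS:
        if dn not in _pool_assignments:
            used = set(_pool_assignments.values())
            free = [a for a in POOL_ACCOUNTS[fqan] if a not in used]
            if not free:
                return None              # pool exhausted
            _pool_assignments[dn] = free[0]
        return _pool_assignments[dn]
    return None                          # no policy matches: access denied

print(map_user("/DC=org/DC=doegrids/OU=People/CN=Example User", "/atlas/usatlas"))
</verbatim>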

GUMS is also presented as part of the Trust and Security booth.

dCache: USATLAS dCache-based Storage System

The dCache system, a scalable high-performance storage system, manages hundreds of terabytes of data across hundreds of disk storage nodes and magnetic tape silos, strategically distributed across the OSG-enabled USATLAS Tier 1 (at BNL) and Tier 2 centers. It stores the data from past data challenges and the ROME production. We will demo and give a presentation on how dCache is used directly for staging in input data and storing results in the current ATLAS production framework. The dCache demo will be combined with the Service Challenge demo described below. See the dCache poster and the dCache presentation file.
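
As a rough sketch of how a production job wrapper might stage input from dCache and store its output back, the example below shells out to the standard dccp copy client. The dCache door, PNFS paths, and file names are placeholder assumptions, not the actual USATLAS namespace or the production framework itself.

<verbatim>
# Hedged sketch of a job wrapper that stages input from dCache with dccp and
# writes results back. Hostnames and paths are hypothetical placeholders.
import subprocess, sys

DCACHE_DOOR = "dcap://dcache.example.bnl.gov:22125"                  # hypothetical door
INPUT_PNFS  = "/pnfs/example.gov/atlas/datachallenge/evgen.0001.root"
OUTPUT_PNFS = "/pnfs/example.gov/atlas/results/recon.0001.root"

def dccp(src, dst):
    """Copy a file with the dCache copy client; abort on failure."""
    rc = subprocess.call(["dccp", src, dst])
    if rc != 0:
        sys.exit("dccp failed: %s -> %s" % (src, dst))

# Stage the input file in to the local scratch area.
dccp(DCACHE_DOOR + INPUT_PNFS, "./input.root")

# ... run the ATLAS production step on ./input.root, producing ./output.root ...

# Store the result back into dCache.
dccp("./output.root", DCACHE_DOOR + OUTPUT_PNFS)
</verbatim>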

A QoS Enabled Collaborative Data Sharing Infrastructure for Peta-scale Computing Research

TeraPaths, a DOE MICS/SciDAC-funded project, investigates the integration and use of differentiated network services, based on LAN QoS and MPLS, in the ATLAS data-intensive distributed computing environment. TeraPaths manages the network as a critical resource, much as resource schedulers and batch managers currently manage CPU resources in a multi-user environment. We will demo how TeraPaths can manage data transfers with guarantees of speed and reliability among OSG sites. TeraPaths will set up a QoS path across the DOE ESnet and the NSF-funded UltraLight network between BNL and the University of Michigan (both sites are OSG-enabled); dCache and bbcp will be used for data transfers along this path. See the _SC'05 demo_ link.
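
To make the scheduler analogy concrete, here is a small, purely illustrative sketch of the admission-control decision any bandwidth manager has to make: accept a reservation only if, throughout its time window, the already accepted reservations plus the new request stay within the capacity set aside for QoS traffic. The capacity figure and reservations are made-up numbers, not TeraPaths internals.

<verbatim>
# Illustrative bandwidth admission control: accept a reservation only if the
# QoS capacity is never exceeded during its time window. Numbers are made up.

QOS_CAPACITY_MBPS = 1000        # hypothetical share of the link reserved for QoS

# Already accepted reservations: (start_time, end_time, bandwidth in Mbps)
accepted = [(0, 3600, 400), (1800, 7200, 300)]

def admit(start, end, mbps):
    """Record and return True if the reservation fits, else return False."""
    # Usage is piecewise constant, so check it at every boundary point
    # that falls inside the requested window (plus the window start).
    points = {start} | {t for r in accepted for t in r[:2] if start <= t < end}
    for t in points:
        in_use = sum(bw for s, e, bw in accepted if s <= t < e)
        if in_use + mbps > QOS_CAPACITY_MBPS:
            return False
    accepted.append((start, end, mbps))
    return True

print(admit(0, 1800, 500))      # fits: 400 + 500 <= 1000
print(admit(1800, 3600, 400))   # rejected: 400 + 300 + 400 > 1000
</verbatim>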

We also have a _software system demo_ running on our testbed. Bandwidth reservations can be requested through the _TeraPaths Web Service_. The structure of the testbed is shown in the QoS TestBed diagram attached below.
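
As a hedged illustration of what a reservation request might look like from a client's point of view, the sketch below posts a request to a hypothetical endpoint and then starts the transfer with bbcp. The URL, request fields, and hostnames are assumptions for illustration only and do not describe the real TeraPaths Web Service interface.

<verbatim>
# Hypothetical client-side sketch: request a bandwidth reservation from a
# TeraPaths-style web service, then move the data with bbcp. The endpoint URL,
# request fields, and hostnames are invented for illustration only.
import json, subprocess
from urllib import request

RESERVATION_URL = "http://terapaths.example.gov/reserve"      # hypothetical endpoint

req_body = json.dumps({
    "src": "dcache.example.bnl.gov",       # hypothetical source host
    "dst": "umfs.example.umich.edu",       # hypothetical destination host
    "bandwidth_mbps": 500,
    "duration_s": 3600,
}).encode()

# Submit the reservation request to the hypothetical service.
resp = request.urlopen(request.Request(RESERVATION_URL, data=req_body,
                                       headers={"Content-Type": "application/json"}))
print("reservation response:", resp.getcode())

# Once the QoS path is in place, run the transfer with bbcp.
subprocess.call(["bbcp", "bigfile.root",
                 "umfs.example.umich.edu:/data/bigfile.root"])
</verbatim>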

BNL ATLAS Service Challenge

The ATLAS Service Challenge tests the infrastructure and services needed for full ATLAS data acquisition, transfer, and analysis. It focuses on reliable data transfers from the Tier 0 center at CERN to all Tier 1 centers, and subsequently on data transfers between Tier 1 and Tier 2 centers. The reliable transfers are managed by the ATLAS data management system, which integrates the data flow with the production system. The transfers rely upon, as well as exercise, the underlying stable, high-performance network. In this demo, we will show how BNL acts as the major US hub for data storage and for data redistribution to three OSG-enabled USATLAS Tier 2 centers. We plan to have a presentation on the LHC Service Challenges (see the attached slides).
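
To illustrate the redistribution role described above, here is a small hypothetical sketch of a fan-out step: datasets arriving at the Tier 1 are matched against Tier 2 subscriptions and a transfer is queued for each match. Site names, dataset names, and the subscription scheme are placeholders, not the ATLAS data management system itself.

<verbatim>
# Hypothetical fan-out sketch: redistribute datasets that arrived at the Tier 1
# to subscribed Tier 2 centers. Names and the matching rule are placeholders.

# Datasets newly arrived at BNL from the Tier 0 (hypothetical names).
arrived = ["csc.evgen.0001", "csc.evgen.0002", "csc.recon.0001"]

# Hypothetical Tier 2 subscriptions: site -> dataset name prefixes it wants.
subscriptions = {
    "tier2-a.example.edu": ["csc.evgen."],
    "tier2-b.example.edu": ["csc.recon."],
    "tier2-c.example.edu": ["csc."],          # wants everything
}

def plan_transfers(datasets, subs):
    """Yield a (dataset, destination) pair for every matching subscription."""
    for ds in datasets:
        for site, prefixes in subs.items():
            if any(ds.startswith(p) for p in prefixes):
                yield ds, site

for ds, site in plan_transfers(arrived, subscriptions):
    # A real system would hand this pair to the transfer machinery;
    # here we only print the plan.
    print("queue transfer:", ds, "->", site)
</verbatim>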

-- JohnHover - 09 Nov 2005



Attachments:
BNL-SC3-GDB.ppt (909.5K) | DantongYu, 14 Nov 2005 - 21:29 | LHC Service Challenges
QoS_Test_Bedcopy.JPG (300.6K) | DantongYu, 16 Nov 2005 - 12:56 | QoS TestBed