


Minutes of the Facilities Integration Program meeting, Mar 24, 2010
  • Previous meetings and background : IntegrationProgram
  • Coordinates: Wednesdays, 1:00pm Eastern
    • (605) 715-4900, Access code: 735188; Dial *6 to mute/un-mute.


  • Meeting attendees: Horst, Kaushik, Rob, Charles, Tom, Mark, Bob, Fred, Wei, Hiro, Rik, Jason, Saul, John, Shawn, Michael, Doug
  • Apologies: Aaron, Nate

Integration program update (Rob, Michael)

  • SiteCertificationP12 - FY10Q2
  • Special meetings
    • Tuesday (9am CDT): Frontier/Squid
    • Tuesday (9:30am CDT): Facility working group on analysis queue performance: FacilityWGAP suspended for now
    • Tuesday (12 noon CDT) : Data management
    • Tuesday (2pm CDT): Throughput meetings
  • Upcoming related meetings:
  • US ATLAS persistent chat room http://integrationcloud.campfirenow.com/ (requires account, email Rob), guest (open): http://integrationcloud.campfirenow.com/1391f
  • Program notes:
    • last week(s)
      • Phase 12 of Integration Program winding down
      • lsm project for xrootd sites - Charles agreed to develop a python module, xrdmodule.py, to avoid the LD_PRELOAD dependence (see the ctypes sketch after this list)
      • Need to discuss ATLAS release installation plan (below, Xin)
      • Need to discuss Tier 3-OSG issues (below, Rik, John)
      • Thanks everyone for attending the OSG AH meeting @ Fermilab last week - a good meeting.
      • LHC operations: aiming for high-intensity 450 GeV stable beam within 24 hours, which will lead to a ramp-up of activities and requirements for stable operations at all sites.
      • Review season is about to start
    • this week
      • LHC collisions at 7 TeV formally by March 30, starting the 18-24 month run (press release)
      • Two OSG documents for review:
      • Updated CapacitySummary
        • ip-1.pdf (attached below)
        • 1024 HU cores are guaranteed for ATLAS use, in addition to the 500 cores we've been using
      • Hope for stable beam around March 30; however, the 3.5 TeV ramp at noon resulted in a cryo failure that will take a day to recover from.
      • The quarter is about to end - quarterly reporting will follow, with a 9-day deadline.
      • glexec heads up: there was a WLCG management discussion about glexec yesterday - details still to be spelled out - it will be a requirement to have this installed, and it will require integration testing. A number of issues were raised previously, so there are many details to iron out. We will need to work with OSG on the glexec installation - may want to invite Jose to the meeting to describe the system. It shouldn't have much impact on users or sites; the basic requirement is traceability at the gatekeeper level.
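
As a rough illustration of the xrdmodule.py idea noted above: a ctypes binding lets Python call the xrootd POSIX-style C interface directly, removing the LD_PRELOAD dependence. This is only a sketch; the library and symbol names (libXrdPosix.so, XrdPosix_Open, etc.) and their signatures are assumptions here, not the actual module.

      # Sketch of the xrdmodule.py approach (library/symbol names are assumptions).
      # Bind xrootd's POSIX-style C calls via ctypes so no LD_PRELOAD is needed.
      # Python 2 era string handling assumed (URLs passed as plain strings).
      import ctypes, os

      _xrd = ctypes.CDLL("libXrdPosix.so")

      _xrd.XrdPosix_Open.restype = ctypes.c_int
      _xrd.XrdPosix_Open.argtypes = [ctypes.c_char_p, ctypes.c_int, ctypes.c_uint]
      _xrd.XrdPosix_Read.restype = ctypes.c_longlong
      _xrd.XrdPosix_Read.argtypes = [ctypes.c_int, ctypes.c_void_p, ctypes.c_ulonglong]
      _xrd.XrdPosix_Close.restype = ctypes.c_int
      _xrd.XrdPosix_Close.argtypes = [ctypes.c_int]

      def read_head(url, nbytes=1024):
          """Read the first nbytes of a root:// URL directly through the library."""
          fd = _xrd.XrdPosix_Open(url, os.O_RDONLY, 0)
          if fd < 0:
              raise IOError("open failed for %s" % url)
          buf = ctypes.create_string_buffer(nbytes)
          n = _xrd.XrdPosix_Read(fd, buf, nbytes)
          _xrd.XrdPosix_Close(fd)
          return buf.raw[:max(n, 0)]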

Tier 3 Integration Program (Doug Benjamin & Rik Yoshida)

  • last week(s):
    • Presentations at last week's workshop here
    • Tier 3 xrootd meetings: https://twiki.cern.ch/twiki/bin/view/Atlas/XrootdTier3
    • Jamboree at the end of the month - cleaning up "model" Tier 3 at ANL; encourage those setting up to attend.
    • Working groups ATLAS-wide are making progress
    • Rik is participating in user support working group - lightly attended at the moment; encourage others to attend.
    • Tier 3 - OSG issues:
      Hi all,
      Following are some issues that need clarification. Michael asked me to summarize them. 
      Perhaps they could be discussed in the meeting this afternoon.
      1) Rob Quick would like to know how the OSG GOC should route tickets (GGUS, end-user) 
      that involve ATLAS Tier 3 OSG sites. There are two options:
       -- Send them as regular GOC tickets, via email, directly to the site admins as listed in OIM.
       -- Send them as forwarded tickets into BNL's Tier 3 RT queue.
      There are pros and cons to each. Doing everything directly means less ability to 
      track issues and notice unresponsive T3s. Doing everything via BNL RT means any 
      non-ATLAS-related issues get lumped in with ATLAS concerns. This only matters 
      if a Tier 3 site is doing work for other VOs--which is probably unlikely.
      My main concern is that whatever scheme is chosen, all tickets get handled similarly.
      2) The Tufts site apparently has the OSG software installed and it is publishing 
      via Gratia to OSG, but the site has not been created in OIM. The specific question 
      is if someone could urge them to create their entry? The larger issue is the need 
      to precisely clarify what the responsibilities of an ATLAS Tier 3 site are with 
      respect to OSG, and make sure that they perform all the necessary steps.
      3) In order for a site to subscribe to ATLAS data (via DQ2?), and possibly to
       transfer data with lcg-cp, it apparently is necessary/useful for the site to 
      publish its SE info into the WLCG BDII. (Marco Mambelli is involved in the lcg 
      tools on OSG and maybe can provide more exact requirements.)
      If so, this means that the site requires a CEMon/GIP installation. Currently, 
      these only get installed along with a CE, which some T3s may not need. So 
      we need to determine if a standalone CEMon/GIP setup is required, and if 
      so we need to request such a package be defined in the VDT/OSG stack. 
      The pieces exist--it is just a matter of configuration. Burt Holzman and 
      Brian Bockelman are willing to do it, but want confirmation that it is 
      required by our model before putting in the effort.
    • follow-up next week.
    • Note: BDII is critical service for US ATLAS. There is an SLA.
  • this week:
    • The links to the ATLAS T3 working group Twikis are here
    • Draft users' guide to T3g is here
    • Model T3g is up at ANL and open to users. Will exercise it at the end of next week.
    • Pathena submission to T3 still not working. Build job works, but not the full job.
    • Still need dq2-get to work, or transfers via an FTS channel.
    • Working groups making lots of progress
    • One open item is the CVMFS file system, which centralizes deployment of releases and conditions data.
    • OSG support issues
      • how to route tickets for T3g's? Should we route directly to the sites?
      • Questions about who is responsible for the tickets? What about the connection to the T3 support group? So there is a bigger question than just GOC and RT tickets.
      • Rik - would like to get US participation in the T3 support group. Support model not clear.
      • Non-grid T3's - should they go through DAST?
      • Should bring up at the L2/L3 management meeting.
      • Nothing new from Hiro

Operations overview: Production and Analysis (Kaushik)

  • Production reference:
  • Analysis reference:
  • last meeting(s):
    • Note starting with this meeting analysis queue issues formerly covered by Nurcan will be addressed here.
  • this week:
    • Back to full capacity
    • Some requests for regional production - ready to be used for backfill
    • 200M events from central production will be defined as queue fillers
    • Next week: ADC computing meeting at BNL (Alexei organizing), with large ADC-ATLAS attendance. Will focus on production issues and planning for the next couple of years.
    • Distributed analysis tests at AGLT2 - full sequence of 250, 500, 750, 1000, .. Ganga robot job sets. Results looked great.
      • Will prepare a talk for next week.
      • Will re-run high occupancy test.
      • Deploy pcache3 at UM? After next set of tests.
    • (Aside: there were HC tests at SLAC last week - previous results were reproduced, with good results.) Using an old version of ROOT with an older xrootd client.

Data Management & Storage Validation (Kaushik)

Release installation, validation (Xin, Kaushik)

The issue of the validation process, completeness of releases at sites, etc.
  • last meeting
    • It's been a one-person operation to deploy releases.
    • There will be an installation database which has a record of all the pieces necessary.
    • Integrate Panda-based installation w/ this database.
    • Control will be given to Alessandro.
    • Why not use Alessandro's system itself?
    • If we use the EGEE WMS system, some of the prerequisites are already there. For example, we have CE information published in the BDII.
  • this meeting:
    • Michael: John Hover will open a thread with Alessandro to begin deploying releases using his methods, which are WMS-based installation.
    • John's email will start the process today -
    • There will be questions - certificate to be used and account to be mapped to.
    • Charles: made the point that it would be good for admins to have tools to test releases. Will do this in the context of Alessandro's framework.

Shifters report (Mark)

  • Reference
  • last meeting:
    Yuri's summary from the weekly ADCoS meeting:
    1)  3/10: Transfer errors at MWT2_IU, "MWT2_IU_LOCALGROUPDISK failed to contact on remote SRM," was due to a dCache restart, therefore a transient problem.  eLog 10282, ggus 56360, RT 15687.
    2)  3/11 - 3/13: MWT2_UC -- problems with atlas s/w releases 15.6.3 & 15.6.6.  Jobs were failing with errors like:
    ImportError: /usr/lib/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by
    Issue resolved -- Xin re-installed the releases.  eLog 10309, RT 15685, ggus 56346.
    3)  3/12: BNL -- transfer errors due to a bad credential on one of the grid FTP doors.  Issue resolved.  eLog 10363.
    4)  3/12: Site UTA_SWT2 was set off-line to upgrade SL to v5.4 and ROCKS.  dq2 site services migration to BNL also occurred during this outage.  3/15: test jobs completed successfully, back to on-line.  eLog 10447.
    5)  3/12: From Shawn at AGLT2:
    We had a routing incident with our Solaris node. File transfers from 16:42 until 17:10 were impacted.  Should be OK now.
    6)  3/13: Transfer errors at UTA_SWT2_HOTDISK -- problem understood, from Patrick:
     The mapping errors arose from an unstable NIC on the GUMS host. The "unable to connect" errors were due to rebooting the SRM and GUMS hosts to accommodate updated NIC driver configurations.  RT 15715, ggus 56427, eLog 10378, 79.
    7)  3/14: AGLT2 -- transfer errors from T0 ==> AGLT2_CALIBDISK.  Issue was a h/w problem on a dCache storage node (UMFS07.AGLT2.ORG).  Issue resolved with Dell tech support.  ggus 56434, RT 15717, eLog 10409, 25.
    8)  3/14:  Transfer errors between BNL-OSG2_USERDISK (src) and MWT2_IU_LOCALGROUPDISK (dest).  Issue understood -- from Michael:
    Experts in the US have investigated the issue and found that it is caused by modifications in the FTS timeout settings in conjunction with no data flowing while a transfer is in progress.  More details in eLog 10420.
    9)  3/15: New releases of DQ2 Central Catalogs.  Details here: https://savannah.cern.ch/support/?113291 (Note: looks like a permission problem with this web page?)
    10)  3/16: BNL -- US ATLAS conditions db maintenance completed.  No user impact.
    11)  3/16: From Wei at SLAC:
    Quite a few jobs failed to write to our storage. It was due to a bug in the particular version of xrootd we are using at SLAC. It is now fixed.
    12)  3/16: File transfer problems between BNL & NET2.  From Saul:
    One of our main file systems at NET2 is behaving badly right now and writing speeds are down to 30-60 MB/sec.  That's very likely why things are getting backed up and timing out.  We don't know what's going on yet, but are investigating.  
    RT 15735, eLog 10484.
    Follow-ups from earlier reports:
    (i)  New calendar showing site downtimes for all regions (EGEE, NDGF, OSG) is here:
    (ii)  3/3: Consolidation of dq2 site services at BNL for the tier 2's by Hiro beginning.  Will take several days to complete all sites.  ==> Has this migration been completed?
  • this meeting:
    Yuri's summary from the weekly ADCoS meeting:
    1)  3/17 - 3/18: MWT2_UC, ANALY_MWT2 -- off-line for electrical work -- completed, back to on-line.  eLog 10526.
    2)  3/18: Issue with transfer failures at NET2 resolved.  RT 15745/46, ggus 56511.
    3)  3/19: Issue with installation jobs at IllinoisHEP resolved? 
     4)  3/19: From Aaron at MWT2:
    Due to a power event and reboot of a number of our worker nodes, we lost a fair number of jobs. You should expect to see a number of jobs failing with a lost heartbeat.
    5)  3/21: LFC problem at AGLT2 understood -- from Shawn:
    Ours was just "slow"...we are working on the back-end iSCSI to get backups setup and the iSCSI appliance was really slow for a while.
    6)  3/22: Issue with installation jobs at HU_ATLAS_Tier2 understood (sort of, still some questions about jobs running when a site is in 'brokeroff' vs. 'test' vs. ....).  Test jobs succeeded, queue  HU_ATLAS_Tier2-lsf set to online.
    7)  3/22: Lack of pilots at several sites was due to a problem with the submit host gridui11.  Machine rebooted, pilots again flowing.
    8)  3/22: BNL -- FTS and LFC database maintenance completed successfully.
    9)  3/23: From Charles at MWT2:
    Due to a glitch while installing a new pcache package, a number of jobs have failed during stage-in at MWT2_IU and MWT2_UC, with the following error:
    23 Mar 20:46:13|LocalSiteMov| !!WARNING!!2995!! lsm-get failed (51456): 201 Copy command failed
    This was a brief transient problem and has been resolved. Please do not offline the sites. We are watching closely for any further job failures.
    10)  3/24: Pilot update from Paul (v43a):
     * Multi-jobs. Several jobs can now be executed sequentially by the same pilot until it runs out of time. The total time a multi-job pilot is allowed to run is defined by schedconfig.timefloor (minutes) [currently unset for all sites, so the feature is not enabled anywhere as of today].
     The primary purpose is to reduce the pilot rate when a lot of short jobs are in the system; it can be used for both production and analysis jobs. Initial testing will use a suggested timefloor of 15-20 minutes. Requested by Michael Ernst. (A minimal sketch of this loop appears after this report.)
    * Tier 3 modifications. Minor changes to skip e.g. LFC file registrations on tier 3 sites. cp and mv site movers can be used to transfer input/output files. Currently pilot is writing output to ///. Input file specification done via file list. 
    Testing under way at ANALY_ANLASC.
    * Further improvements in (especially) get and put pilot error diagnostic messages. Requested by I Ueda.
    * Corrected problem with athena setup when option -f was used. Requested by Rod Walker.
    * Added pilot support for 5-digit releases. Requested by Tadashi Maeno et al.
    * Removed hardcoded slc5 software path from setup path since it is no longer needed. Requested by Alessandro Di Salvo.
    * Replaced hardcoded panda servers with pandaserver.cern.ch for queuedata download. Requested by Graeme Stewart.
    * Installation problem now recognized during CMTCONFIG verification (NotAvailable error). Requested by Rod Walker.
    * Job definition printout now contains special command setup for xrdcp (when available). Note: printout done twice, at the beginning of the job and when all setups are done. Special command setup will only be set in the second printout. 
    Requested by Johannes Elmsheuser et al.
    * Corrected undefined variable in local site mover. Requested by Pedro Salgado.
    * Minor change in queuedata read function needed for glExec integration to allow queuedata file to be found after identity switching. glExec/pilot integration done in parallel by Jose Caballero.
    Follow-ups from earlier reports:
    (i)  New calendar showing site downtimes for all regions (EGEE, NDGF, OSG) is here:
    (ii)  3/3: Consolidation of dq2 site services at BNL for the tier 2's by Hiro beginning.  Will take several days to complete all sites.  ==> Has this migration been completed?
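
To illustrate the multi-job pilot behaviour described in item 10 above, here is a minimal sketch of the timefloor loop. The get_job/run_job helpers are hypothetical stand-ins for the pilot's server dialogue; this is not the actual pilot code.

      # Minimal sketch of the multi-job pilot idea (hypothetical helpers, not the real pilot):
      # keep pulling and running jobs with the same pilot until schedconfig.timefloor is exhausted.
      import time

      def run_pilot(timefloor_minutes, get_job, run_job):
          """Run jobs sequentially until the elapsed wall time exceeds timefloor.

          timefloor_minutes -- schedconfig.timefloor; if unset/0, behave like a single-job pilot.
          get_job, run_job  -- hypothetical callables standing in for the server dialogue.
          """
          start = time.time()
          while True:
              job = get_job()
              if job is None:                       # nothing to do, exit
                  break
              run_job(job)
              if not timefloor_minutes:             # feature disabled: one job only
                  break
              elapsed_min = (time.time() - start) / 60.0
              if elapsed_min >= timefloor_minutes:  # out of time, stop pulling new jobs
                  break

With timefloor unset (the current state for all sites), the loop exits after the first job, matching today's single-job behaviour.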

DDM Operations (Hiro)

  • Reference
  • last meeting(s):
    • DQ2 logging has a new feature - errors are now reported. Request: to be able to search at the error level.
    • Will be adding link for FTS log viewing.
    • FTS channel configuration change for the data-flow time-out. The new FTS has an option to terminate stalled transfers: the default timeout for an entire transfer is 30 minutes, which ties up a channel slot on a failing transfer, so transfers with no progress in the first 3 minutes are now terminated. Active for all T2 channels. (A sketch of this logic appears at the end of this section.)
      • If no progress (bytes transferred) during a 180-second window, the transfer is cancelled. (Every 30 seconds a transfer marker is sent.) Making a page with all the settings.
      • Have observed that some transfers being terminated.
      • BNL-IU problem - fails for small files when writing directly into pools. All sites with direct transfers to pools are affected - it's a GridFTP2 issue.
      • Logfiles and root files - few hundred kilobyte sized files.
      • In the meantime BNL-IU is not using gridftp2
      • dcache developers being consulted - may need a new dcache adapter
    • DQ2 SS consolidation complete except for BU - a problem with checksum issues.
    • Need to update Tier 3 DQ2. Note: Illinois working
  • this meeting:
    • BU site services now at BNL, so all sites are now running DQ2 SS at BNL DONE
    • FTS log, site level SS logs both available
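
A rough sketch of the FTS stall-detection idea described under "last meeting(s)" above (illustration only, not FTS code): progress markers arrive roughly every 30 seconds, and the transfer is cancelled if the byte count has not advanced within a 180-second window. The poll_bytes/cancel callables are hypothetical.

      # Sketch of the FTS "no progress" timeout idea (illustration only, not FTS code):
      # markers report bytes transferred ~every 30 s; cancel if no progress for 180 s.
      import time

      STALL_WINDOW = 180  # seconds without byte progress before cancelling

      def monitor_transfer(poll_bytes, cancel, poll_interval=30):
          """poll_bytes() returns cumulative bytes transferred (or None when finished);
          cancel() aborts the transfer.  Both are hypothetical callables."""
          last_bytes = -1
          last_progress = time.time()
          while True:
              status = poll_bytes()
              if status is None:                    # transfer completed
                  return True
              if status > last_bytes:               # progress since the last marker
                  last_bytes = status
                  last_progress = time.time()
              elif time.time() - last_progress >= STALL_WINDOW:
                  cancel()                          # stalled: free the channel slot
                  return False
              time.sleep(poll_interval)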

Conditions data access from Tier 2, Tier 3 (Fred, John DeStefano)

Throughput Initiative (Shawn)

  • NetworkMonitoring
  • https://www.usatlas.bnl.gov/dq2/throughput
  • Now there is FTS logging to the DQ2 log page at: http://www.usatlas.bnl.gov/dq2log/dq2log (type in 'fts' and 'id' in the box and search).
  • last week(s):
    • No meeting this week. Next meeting next week.
    • A couple of sites having issues with perfsonar install - developers investigating.
  • this week:
    • Minutes:
      		USATLAS Throughput Meeting Notes - March 23, 2010
      Attending: Aaron, Sarah, John, Saul, Charles, Jason, David, Andy, Hiro, Mark, Augustine, Horst, Karthik
      Excused: Karthik
      1) Jason updated us on the segfaulting issue: related to perl modules ending. Shouldn't be causing any real problems. Restarts are done daily of all services that should be "Running". A future version of perfSONAR may have a "monitor" which watches processes that should be running AND will restart them if they stop. Karthik reported that the DNS issue they were having (heavy DNS load from perfSONAR nodes) was resolved by putting 'nscd' in to cache DNS requests. Andy reported that the next version will have 'nscd' to cache DNS. Jason sent along instructions on how sites can put 'nscd' in place right now:
        sudo apt-get install nscd
        sudo /etc/init.d/pSB_owp_master.sh restart
        sudo /etc/init.d/pSB_owp_collector.sh restart
      About 1 month before another release (April 23 or so). 
      2) Hiro reported that the dCache bug for small file transfers using GridFTP2 will prevent our "transaction" testing until it is resolved.  The transaction tests use lots of small files.  Hiro will try to work with WT2/Wei concerning transaction testing to a BestMan/Xrootd site in the interim.   Now there is FTS logging to the DQ2 log page at: http://www.usatlas.bnl.gov/dq2log/dq2log (type in 'fts' and 'id' in the box and search).   Will add the appropriate links to the Throughput testing pages at: http://www.usatlas.bnl.gov/dq2/throughput so that we can search when there are failures.
      3) Site reports
      	BNL -  John reported tests with Artur and Eduardo at CERN to explore the 10 minute "bursty" network results they were seeing at joint-techs.  Looks fine now.   LHCOPN 8.5+8.5 Gbps works fine.   perfSONAR issues as per other sites. 
      	BU - Augustine reported on local throughput problem; dual 10GE network NIC and 1GE destinations was having poor performance.  Fix was to  disable Linux "autotuning" via setting net.ipv4.tcp_moderate_rcvbuf = 0.   More details would be interesting.  BNL perfSONAR now configured for testing (after call).
           MWT2 - Small files problem at IU.  perfSONAR problems as mentioned.    Xrootd testing ongoing at IU.   Bonnie++ testing of XFS (default vs tuned) at UC.
      	Illinois - perfSONAR issues there.  Restarts working.
      	SWT2_OU/SWT2_UTA.  No additions from Karthik's note.  All working now.
      	AGLT2 -  NFS replacement via Lustre being explored.  perfSONAR issue with stopping service observed.  
      4) AOB?  None.
      Plan to meet again in two weeks.  Sites should prepare by looking at their perfSONAR measurement results and bring questions to the meeting.    Notify Shawn if there are other topics to add to the agenda.
      Please send along corrections and comments to the list.
    • perfsonar release schedule - about a month away - anticipate doing only bug fixes.
    • Transaction bottleneck tests - but there is a dcache bug for small files that must be solved first; use xrootd site.
    • Look at data in perfsonar - all sites
    • BU site now configured. SLAC - still not deployed, still under discussion.

Site news and issues (all sites)

  • T1:
    • last week(s): On-going issue with condor-g - incremental progress has been made, but new effects have been observed: a slow-down in job throughput. Working with the Condor team; some fixes were applied (new condor-q) which helped for a while; decided to add another submit host to the configuration. New HPSS data movers and network links. Long-awaited DDN equipment has arrived: 2 PB (fully populated 9900, 1200 drives, 2 TB each), OpenSolaris and ZFS, four head nodes, with Dell R710 servers in front of the array. Had to add an FC switch. Pedro and the storage management group have a new lsm with callbacks for staging - to be integrated into production carefully. Xin has configured an ITB queue for Panda jobs; tested for get, close to completing tests for put operations. C6100 eval, 4 mobos (R410); pricing doesn't look too encouraging (yet).
    • this week: Testing of new storage - dCache testing by Pedro. Will purchase 2000 cores - R410s rather than high-density units, in ~six weeks. Another Force10 coming online with a 100 Gbps interconnect. Requested another 10G link out of BNL for the Tier 2s; hope ESnet will manage the bandwidth to sites well. Fast-track muon recon has been running for the last couple of days, the majority at BNL (kudos); the lsm by Pedro now supports put operations - tested on ITB. CREAM CE discussion w/ OSG (Alain) - have encouraged him to go for this and make it available to US ATLAS as soon as possible.

  • AGLT2:
    • last week: Running well. Lustre to replace NFS.
    • this week: Lustre in VM going well.

  • NET2:
    • last week(s): Problem with GPFS slowness, investigating. Production on new Nehalem nodes; WLCG reporting needs work.
    • this week: Filesystem problem turned out to be a local networking problem. HU nodes added - working on ramping up jobs. Top priority is acquiring more storage - will be Dell. DQ2 SS moved to BNL. Shawn helped tune up the perfSONAR machines. Moving data around - ATLASDATADISK seems too large. Also want to start using pcache (see the sketch after the site reports).

  • MWT2:
    • last week(s): Work proceeds on deployment of 1 PB storage. Systems racked, cabling started. Electrical work continues - will need to schedule a downtime to re-arrange UPS power between dCache nodes. Working on distributed Xrootd testbed between IU and UC (ANALY_MWT2_X). PNFS trash feature enabled - fixed pnfs orphans. Making python bindings to the xrootd library - accessing xrootd functions directly. Will be generally useable.
    • this week: Electrical work complete putting new storage systems behind UPS. New storage coming online: SL5.3 installed via Cobbler and Puppet on seven R710 systems. RAID configured for MD1000 shelves. 10G network to each system (6 into our core Tier 2 Cisco 6509, 1 into our Dell 6248 switch stack). dCache installed. Also working on WAN Xrootd testing (see ATLAS Tier 3 working group meeting yesterday). Python bindings for xrootd library - work continues - in advance of local site mover development for xrootd.

  • SWT2 (UTA):
    • last week: SL5.4 w/ Rocks 5.3 complete. SS transitioned to BNL. Issues w/ transfers failing to BNL. There may be an issue w/ how checksums are being handled. 400 TB of storage being racked and stacked. Looking into ordering more compute nodes.
    • this week: All running fine - putting together 400 TB storage. Continuing to look into procuring new compute and storage.

  • SWT2 (OU):
    • last week: Waiting on a 23-node compute order from Dell. Will have 456 cores.
    • this week: 23 servers arrived. Call scheduled w/ Dell regarding installation.

  • WT2:
    • last week(s): ATLAS home and release NFS server failed; will be relocating to temporary hardware. All is well. Storage configuration changed - no longer using the xrootd namespace (CNS service)
    • this week: Short outage this afternoon for about an hour.
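
Regarding pcache (mentioned in the NET2 and MWT2 reports above), here is a minimal sketch of the caching idea, assuming a hypothetical cache directory and xrdcp as the copy tool; this is not the actual pcache implementation. Staged-in files are kept in a local cache and hard-linked into the job's working directory, so repeated stage-ins of popular files hit local disk instead of the SE.

      # Minimal sketch of the pcache idea (illustration only, not the actual pcache code):
      # cache staged-in files on local disk and hard-link cache hits into the job directory.
      import hashlib
      import os
      import shutil
      import subprocess

      CACHE_DIR = "/scratch/pcache"   # hypothetical cache location

      def cached_get(source, dest, copy_cmd=("xrdcp",)):
          """Stage 'source' to 'dest', reusing a local cache copy when one exists."""
          key = hashlib.md5(source.encode("utf-8")).hexdigest()
          cache_path = os.path.join(CACHE_DIR, key)
          if not os.path.isdir(CACHE_DIR):
              os.makedirs(CACHE_DIR)
          if not os.path.exists(cache_path):
              # cache miss: fetch once into the cache with the configured copy tool
              subprocess.check_call(list(copy_cmd) + [source, cache_path])
          try:
              os.link(cache_path, dest)       # cache hit: a hard link is essentially free
          except OSError:
              shutil.copy(cache_path, dest)   # different filesystem: fall back to a copy
          return dest

A real implementation also needs locking, cache-size limits, and cleanup, which this sketch omits.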

Carryover issues (any updates?)

VDT Bestman, Bestman-Xrootd

Local Site Mover

  • Specification: LocalSiteMover
  • code
    • BNL has an lsm-get implemented and is just finishing the test cases [Pedro]. (A generic sketch for an xrootd-backed site follows below.)
  • this week if updates:
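
A rough sketch of what an lsm-get wrapper for an xrootd-backed site might look like, assuming the simplified calling convention lsm-get <source> <destination>; the actual interface is defined in the LocalSiteMover specification, and the BNL implementation differs.

      #!/usr/bin/env python
      # Sketch of an lsm-get for an xrootd-backed site (illustration only; the real
      # interface is defined in the LocalSiteMover specification).  Assumed calling
      # convention here: lsm-get <source-turl> <local-destination>
      import os
      import subprocess
      import sys

      def lsm_get(src, dest):
          """Copy src (e.g. a root:// TURL) to dest with xrdcp and report a pilot-style status."""
          if os.path.exists(dest):
              os.remove(dest)                   # never hand back a stale partial file
          rc = subprocess.call(["xrdcp", src, dest])
          if rc != 0 or not os.path.exists(dest):
              print("201 Copy command failed")  # error text modeled on the lsm-get failure in the shift report
              return 201
          print("0 OK")
          return 0

      if __name__ == "__main__":
          if len(sys.argv) != 3:
              print("usage: lsm-get <source> <destination>")
              sys.exit(1)
          sys.exit(lsm_get(sys.argv[1], sys.argv[2]))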

WLCG Capacity Reporting (Karthik)

  • last discussion(s):
    • There is a report complete - there is an email every Tuesday.
    • AGLT2 is the only site that is compliant in terms of reporting HS06 correctly. OIM is likely out of date.
    • Once the sites have completed their updates, Karthik will check.
    • Karthik will send a reminder.
  • this meeting
    • This is a report of pledged vs. installed computing and storage capacity at sites (the Diff rows show the reported total minus the expected/pledged capacity for each federation).
      Report date: Tue, Mar 23 2010
       #       | Site                   |      KSI2K |       HS06 |         TB |
       1.      | AGLT2                  |      1,570 |     10,400 |          0 |
       2.      | AGLT2_CE_2             |        100 |        640 |          0 |
       3.      | AGLT2_SE               |          0 |          0 |      1,060 |
       Total:  | US-AGLT2               |      1,670 |     11,040 |      1,060 |
       Diff:   | US-AGLT2               |            |          0 |          0 |
               |                        |            |            |            |
       4.      | BU_ATLAS_Tier2         |      1,910 |          0 |        400 |
       Total:  | US-NET2                |      1,910 |          0 |        400 |
       Diff:   | US-NET2                |            |    -11,040 |       -660 |
               |                        |            |            |            |
       5.      | BNL_ATLAS_1            |      8,100 |     31,000 |          0 |
       6.      | BNL_ATLAS_2            |          0 |          0 |          0 |
       7.      | BNL_ATLAS_5            |          0 |          0 |          0 |
       8.      | BNL_ATLAS_SE           |          0 |          0 |      4,500 |
       Total:  | US-T1-BNL              |      8,100 |     31,000 |      4,500 |
       Diff:   | US-T1-BNL              |            |    -21,200 |      -3840 |
               |                        |            |            |            |
       9.      | MWT2_IU                |      3,276 |      5,520 |          0 |
       10.     | MWT2_IU_SE             |          0 |          0 |        179 |
       11.     | MWT2_UC                |      3,276 |      5,520 |          0 |
       12.     | MWT2_UC_SE             |          0 |          0 |        250 |
       Total:  | US-MWT2                |      6,552 |     11,040 |        429 |
       Diff:   | US-MWT2                |            |          0 |       -631 |
               |                        |            |            |            |
       13.     | OU_OCHEP_SWT2          |        464 |          0 |         16 |
       14.     | SWT2_CPB               |      1,383 |          0 |        235 |
       15.     | UTA_SWT2               |        493 |          0 |         15 |
       Total:  | US-SWT2                |      2,340 |          0 |        266 |
       Diff:   | US-SWT2                |            |    -11,040 |       -794 |
               |                        |            |            |            |
       16.     | WT2                    |        820 |      9,057 |          0 |
       17.     | WT2_SE                 |          0 |          0 |        597 |
       Total:  | US-WT2                 |        820 |      9,057 |        597 |
       Diff:   | US-WT2                 |            |     -1,983 |       -463 |
       Total:  | All US ATLAS           |     21,392 |     62,137 |      7,252 |
    • Only AGLT2 is correct



-- RobertGardner - 23 Mar 2010

pdf US_BDII_Requirements_for_Atlas.pdf (47.6K) | RobertGardner, 23 Mar 2010 - 08:00 |
pdf ip-1.pdf (670.0K) | RobertGardner, 24 Mar 2010 - 13:00 |