
MinutesNov25

Introduction

Minutes of the Facilities Integration Program meeting, Nov 25, 2009
  • Previous meetings and background : IntegrationProgram
  • Coordinates: Wednesdays, 1:00pm Eastern
    • (605) 715-4900, Access code: 735188; Dial *6 to mute/un-mute.

Attending

  • Meeting attendees: Michael, Rob, Booker, John D, Wei, Sarah, Saul, Tom, Fred, John B, Charles, Xin, Patrick, Horst, Nurcan, Mark, Mark, Armen
  • Apologies: none

Integration program update (Rob, Michael)

  • SiteCertificationP11 - FY10Q1
  • Special meetings
    • Tuesday (9am CDT): Frontier/Squid
    • Tuesday (9:30am CDT): Facility working group on analysis queue performance: FacilityWGAP suspended for now
    • Tuesday (12 noon CDT) : Data management
    • Tuesday (2pm CDT): Throughput meetings
  • Upcoming related meetings:
  • US ATLAS persistent chat room http://integrationcloud.campfirenow.com/ (requires account, email Rob), guest (open): http://integrationcloud.campfirenow.com/1391f
  • Program notes:
    • last week(s)
      • Upcoming schedule for LHC operations. Increased activity by this Friday. Expect some beam-splash events soon, then circulating beams with and without RF capture.
      • Day 13 could mean 900 GeV collisions.
      • Data before Christmas will be 'special' - including RAW and ESD data. Expect normal distribution.
      • Will be a wide distribution, especially at the Tier 2s. Very important to have all Tier 2s stable and available.
      • We should refrain from upgrades and other extended downtimes.
      • End of this week to December 18 - refrain from any disturbance.
      • December 18 to January 10 - will be a better window for upgrades and scheduled interventions.
      • Are the space token upgrades really necessary before December 18?
      • Expect adc-operations list will be used to communicate data replication, etc. and Michael will send summaries.
      • Active cleaning going on right now to create space for the new data - though Kaushik reports some issues. He believes, though, that all Tier 2s are in good shape.
    • this week
      • xrootd meeting next week - need ATLAS representation:
        Date: Tuesday, December 1, 2009
        Time: 9:30am Pacific, 11:30am Central, 6:30pm Geneva
        Phone: 510-665-5437, #4321
        
        Within OSG, we talked about how our understanding of what USATLAS needs from OSG and Xrootd hasn't been as clear as it needs to be, so we invited Rob Gardner (or someone he delegates) to join the meeting to help keep us all in sync. I hope you don't mind.
        
        Agenda:
        1) Update on OSG plans, particularly with respect to XRootd and Tier-3 usage of XRootd.
        2) Update on XRootd plans
        3) Update on USATLAS needs and expectations for XRootd and OSG's distribution of it.
        
        Thanks,
        -alain
      • UAT post mortem from last week: http://indico.cern.ch/conferenceDisplay.py?confId=74076
        • final remarks, follow-up plans?
      • LHC restart underway! http://atlas.web.cern.ch/Atlas/public/EVTDISPLAY/events.html
      • Presentation this week, from Fabiola: As announced in a mail by the DG a few days ago, there will be a presentation on Thursday from 4pm in the CERN Main Auditorium about the status of the LHC after its first week of operation in 2009: http://indico.cern.ch/conferenceDisplay.py?confId=74907. In addition to the machine talk, to be given by Steve Myers, the experiments have been asked (yesterday afternoon) to give short (15') status reports about their first days of operation with beam(s). Andreas Hoecker will give the ATLAS talk.

Tier 3 Integration Program (Doug Benjamin & Rik Yoshida)

  • last week:
    • Discussions w/ Torre about use of pathena + panda at the Tier 3. Will be testing this at Duke soon.
    • Follow-up phone meeting for the organizational meeting - Monday/Tuesday of next week.
    • Setting up VMs to test clusters (Marco helping)
    • Contact w/ Massimo Lamanna regarding Tier 3 issues ATLAS-wide, to be discussed at CERN at the end of the month; perhaps a meeting at the end of January.
    • CERN web filesystem meeting; will be testing the next-generation version. Can the US set up a mirrored CERN VM site?
    • Subscription tests to SEs have not yet begun
    • Kaushik believes we still need a manual approval process to ensure reasonable requests
    • Immediate need is to get test subscriptions for bringing up Tier 3 SEs.
    • Hiro: LFC migration to BNL-LFC; Illinois-hep now migrated. OU later today. UTD? Need to make a mysql dump. Wisconsin tomorrow. Then done.
  • this week:
    • no report this week

Operations overview: Production (Kaushik)

Data Management & Storage Validation (Kaushik)

  • Reference
  • last week(s):
    • Formalize procedure for cleaning USERDISK (procedure just needs to be put in the twiki by Armen); this will be done centrally.
    • Time-out problems with xrootd. Wei suggests reducing the number of simultaneous transfers. Default is 200.
    • A new dq2 version is available.
    • LOCALGROUPDISK to be deployed at every T2. US-only usage; will be monitored by Hiro's system. Timeframe: within a week. There is a stability issue with xrootd.
    • Michael: issue of publishing SEs in the GIS (BDII). The reason is we want to allow data replication between our Tier 2s and other Tier 1s. We need to make sure our SEs get published into the OSG interoperability BDII. Start within a week. Xin to follow up.
    • MinutesDataManageNov17
    • Need to get LOCALGROUPDISK setup quickly before end of week
  • this week:
    • MinutesDataManageNov24
    • We've received requests from end-users for getting data off of sites. "Which tool am I supposed to use?" Kaushik: we need to send a consistent message to use pathena at the Tier 2s. We have 4 complete copies of the data at the Tier 2s, 6 complete copies in the US. The AODs are actually performance DPDs. There are only 3 good runs of interest.
    • RAW data requests to Tier 2? None yet. People are calling!

Shifters report (Mark)

  • Reference
  • last meeting:
    Yuri's summary from the weekly ADCoS meeting:
    http://indico.cern.ch/materialDisplay.py?contribId=1&materialId=0&confId=74330
    1)  11/11-11/12: IU_OSG -- kernel upgrades completed, site set back to 'online'.
    2)  11/11: BNL -- ~150 jobs failed with stage-in errors -- issue was an off-line storage server -- resolved.  RT 14585.
    3)  11/12: BNL -- US ATLAS conditions oracle cluster db maintenance, originally scheduled for 11/12/09, was postponed until
    Monday, November 16th, and eventually to the 21st of December.
    4)  11/13: ~500 failed jobs at BU with local site mover errors.  The log extract included "no space left on device."  From Saul:
    We got short of disk space in the process of moving our DATADISK.  It should be fixed now.  eLog 6926.
    5)  11/13: At the beginning of the shift BNL and AGLT2 had no activated jobs, but plenty of assigned ones.  Issue with the BNL_ATLAS_DDM queue was eventually resolved.  See extensive mail thread for details.
    6)  11/14: Jobs at AGLT2 were gradually draining out.  From Bob:
    Running job count at aglt2 began to drop at 17:40pm.  I subsequently found a crashed "ypbind", and restarted it at 20:15.  All times EST.  Grid services are once again authenticating, however, we expect a number 
    of dead/crashed jobs to show up from this time period.
    7)  11/15: srm storage filled up at UTD-HEP.  Some issues running the "proddisk-cleanse.py" script.  Being worked on.  Site set 'off-line'.  RT 14708.
    8)  11/17 early a.m.: AGLT2 -- transfer errors, jobs were failing with " Put error: Copy command returned error code 256 and output: httpg://head01.aglt2.org:8443/srm/managerv2: CGSI-gSOAP: Could not open connection!  Resolved -- from Shawn:
    The /var partition on the dCache headnode was full. This was apparently due to excessive logging into the postgres DB. Some space has been freed and both postgres and dcache services restarted on head01.aglt2.org.
    9)  11/17: LFC migration to BNL completed for tier 3 site IllinoisHEP.  Test jobs submitted, but they seem to still be using the old LFC information.  Wensheng updated the ToA, and the jobs have now finished successfully.  
    Will set the site to 'on-line' once the output file transfers complete.
    
    Follow-ups from earlier reports:
    (i)  UAT -- postmortem will be held November 19, 2:00pm CET.
    (ii) A new test instance of the RT server at BNL was announced by Jason (message to the usual mail lists).  Try it out at: https://rt.racf.bnl.gov/rt3/
    
  • this meeting:
    Yuri's summary from the weekly ADCoS meeting:
    http://indico.cern.ch/materialDisplay.py?contribId=0&materialId=0&confId=74938
    
    1)  11/18: LFC migration completed for tier 3 site IllinoisHEP -- test jobs successful -- back to 'online'.
    2)  11/19: SLAC -- maintenance outage.  Test jobs completed successfully -- back to 'online'.
    3)  11/19: PERF-EGAMMA token added at SWT2_CPB.
    4)  11/19: Brief network outage at NET2 -- from John:
    Early Tuesday morning, we had a campus network incident that caused a short outage and triggered a Lustre bug that would lock up machines.  
    We're seeing the wave of `Lost heartbeat' errors for the jobs that were running on the crashed nodes coming through now.  That was an isolated incident, 
    and the Lustre bugfix has been identified so we can avoid this in the future, too.
    5)  11/20: AGLT2 -- file transfer errors: "FTS State [Failed] FTS Retries [1] Reason [DESTINATION error during TRANSFER_PREPARATION phase: [USER_ERROR] 
    failed to contact on remote SRM [httpg://head01.aglt2.org:8443/srm/managerv2]. Givin' up after 3 tries]."  From Bob:
    The pnfsManager was offline. I just restarted it. Watching the situation.  ggus 53453, 53465, eLog 7137.
    6)  11/20: US sites running an older version of dq2 ss had to apply a patch to the file /usr/lib/python2.3/site-packages/dq2/siteservices/fetch.py  (thanks Hiro)
    7)  11/20 afternoon: same issue with lack of pilots that was observed the previous week -- from Torre:
    Pilot status update is slow so jobs which actually have slots still show as queued, inhibiting new pilot submissions. And not all sites benefit from CERN pilot 
    backfill because some are on voatlas61 which still has problems submitting pilots to BNL. As of 4pm eastern (10min from now) all sites will be backfilled from voatlas60 
    at CERN which is working fine 
    (this is how the problem was 'fixed' last week).
    8)  11/22: "Lost heartbeat" job failures at BNL due to a power outage (storms) that occurred on 11/20.  ggus 53538.
    9)  11/21-22: dCache issue at MWT2_UC -- from Charles:
    There was a service disruption for MWT2_UC dcache, beginning yesterday (11-21). This has been resolved.  A system disk filled yesterday on the dCache admin node, 
    leading to system-wide failures. Restarting the affected admin host was not sufficient. After restarting the entire dCache system, we are back in operation. DQ2 logs show successful transfers - 
    dCache logs are also showing successful transfers in the other direction as well.  RT 14772.
    10)  11/23: Charles / Stephane discovered an issue with incorrect ACLs in the MWT2 LFC.  Resolved (Charles sent around instructions other sites can use to check their LFCs on 11/24; a generic check sketch appears after the follow-ups below).
    11)  11/23: RAID controller failure in one of the dCache pools at IU -- resolved.  eLog 7243.
    12)  11/23-24: UTD-HEP -- LFC migration to BNL completed -- Alden made the needed updates to schedconfigdb -- test jobs submitted.
    13)   11/24: Central deletion of data in the USERDISK space token created problems with dCache at AGLT2 -- from Hiro:
    It is now found conclusively that the deletion program has crashed AGLT2 dCache/Chimera. The problem is caused by the bulk request of 50 files in a single request. 
    I have now changed the maximum value for AGLT2 to be 10 instead of 50. Although it is not affecting other sites, if your site wants to reduce this value to be smaller, please let me know.
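    The change above amounts to capping the number of SURLs per bulk SRM deletion request, with a smaller cap for AGLT2. A minimal sketch of that batching idea in Python, assuming a hypothetical srm_bulk_delete helper (this is not the actual deletion-service code):
    
        # Per-site batching of bulk deletions; names here are illustrative.
        DEFAULT_BATCH_SIZE = 50          # previous bulk-request size
        SITE_BATCH_SIZE = {"AGLT2": 10}  # reduced after the dCache/Chimera crash
        
        def chunks(items, size):
            """Yield successive slices of at most 'size' items."""
            for i in range(0, len(items), size):
                yield items[i:i + size]
        
        def delete_surls(site, surls, srm_bulk_delete):
            """Delete 'surls' at 'site' in batches; srm_bulk_delete stands in for
            whatever client call actually issues the SRM deletion request."""
            size = SITE_BATCH_SIZE.get(site, DEFAULT_BATCH_SIZE)
            for batch in chunks(surls, size):
                srm_bulk_delete(site, batch)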
    
    Follow-ups from earlier reports:
    (i)  UAT -- postmortem was held on November 19, 2:00pm CET.  Agenda:
    http://indico.cern.ch/conferenceDisplay.py?confId=74076
    (ii) A new test instance of the RT server at BNL was announced by Jason (message to the usual mail lists).  Try it out at: https://rt.racf.bnl.gov/rt3/  -- 
    update: migration to new RT system completed on 11/24.
    (iii) BNL -- US ATLAS conditions oracle cluster db maintenance, originally scheduled for 11/12/09, was postponed until
    Monday, November 16th, and eventually to the 21st of December. 
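    Regarding item 10 above: Charles' exact LFC-checking instructions are not reproduced here, but a generic check might look like the sketch below, which simply dumps the ACLs of a few LFC directories with the standard lfc-getacl client so unexpected entries stand out. The LFC host and paths are placeholders.
    
        # Hedged sketch (not Charles' actual procedure): dump ACLs of selected
        # LFC directories using the lfc-getacl command-line client.
        import os
        import subprocess
        
        os.environ.setdefault("LFC_HOST", "lfc.example.org")  # placeholder endpoint
        
        paths_to_check = [
            "/grid/atlas/dq2",  # illustrative; use the directories your site services write to
        ]
        
        for path in paths_to_check:
            print("=== %s ===" % path)
            subprocess.call(["lfc-getacl", path])  # prints one ACL entry per line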
     

Analysis queues (Nurcan)

  • Reference:
  • last meeting:
  • this meeting:
    • One user in US trying to run on first real data: data09_900GeV.00140541.physics_MinBias.merge.AOD.f175_m273. A job on a local file is successful.
    • UAT postmortem on Nov. 19th: http://indicobeta.cern.ch/conferenceDisplay.py?confId=74076
    • Analysis of errors from Saul on a subsample of 5000 failures: http://www-hep.uta.edu/~nurcan/UATcloseout/
      • Two main errors: Athena crash (43.6%) and staging input file failed (43.1%). The Athena crashes mostly refer to user job problems (forgetting to set trigger info to false, release 15.1.x not supporting xrootd at SLAC and SWT2, etc.). The stage-in problems mostly refer to jobs at BNL (storage server problem) and MWT2 (dCache bugs, file-locking bug in pcache).
    • "Ran out of memory" failures are from one user at BNL long queue and AGLT2 as seen in the subsample. I have contacted with user as to a possible memory leak in user's analysis code.
    • DAST team has started training new shifters this month; 3 people in NA time zone, 2 people in EU time zone. 2 more people will train starting in December.

DDM Operations (Hiro)

Conditions data access from Tier 2, Tier 3 (Fred, John DeStefano)

  • Reference
  • last week
    • Fred is attempting to stress test servers at BNL. Problems getting POOL conditions files available at BNL's HOTDISK.
    • Entries are not being generated in the correct format. Might be related to using an old release of dq2-client tools.
    • May need to postpone the test until after Christmas.
  • this week
    • https://www.racf.bnl.gov/docs/services/frontier/meetings/minutes/20091124/view
    • New squid server at BNL, frontier03.usatlas.bnl.gov, separate from the launch pad.
    • ToA listings are in place now.
    • PFC generation now working. Xin: integrated Alessandro's script for use on OSG sites. Working at BNL - now working on the Tier 2s, sending installation jobs via Panda. AGLT2 needs to update the OSG wn-client. xrootd sites don't have the same setup.
    • Presentations tomorrow and Monday
    • Frontier & Oracle testing - hammering servers - sending blocks of 3250 jobs. Very good results with frontier, millions of hits on the squid with only 60K hits in oracle (thus protected). Repeating tests but directly accessing oracle - couldn't get above 300 jobs. Now doing about 650 jobs simultaneous jobs. John and Carlos looking at dCache and oracle load. Impressive - only one failed job. Each job takes 4GB of raw data and makes histos (reasonable pattern).
    • Yesterday saw 6 GB/s throughput from dCache pool. Today only 4 GB/s, though oracle heavily loaded 80% cpu but holding well; 20 MB/s peak on oracle nodes. Protecting oracle from throughput as well as queries. Utilization of oracle when using frontier was maybe 5% on each node. Impressive.

Throughput Initiative (Shawn)

  • NetworkMonitoring
  • https://www.usatlas.bnl.gov/dq2/throughput
  • last week(s):
    • Throughput test for BNL a week ago. Pushed 1.4 GB/s out of BNL to the Tier 2s: MWT2, AGLT2, WT2.
    • Passed milestones for BNL, UC, SLAC DONE
    • Need to test NET2, re-do AGLT2.
  • this week:
    	USATLAS Throughput Meeting Notes -- November 24, 2009
         =====================================================
    
    Attending:  Shawn, Dave, Horst, Hiro, 
    Excused: Jason, Karthik
    
    1) perfSONAR status:  Got a report from Jason/I2:
    
    " - OU: installed the 3.1.1 disks and did no see any major problems. 
     - UM/Illinois: working with Andy and Aaron to debug a database/data storage issue with BWCTL and OWAMP data.  Bug appears to be related to reverse DNS vs IP storage in the database, this is not critical, but will be addressed after the holidays.
    
    If anyone has any questions I will be happy to answer via email."
    
    Karthik reported that OU did see the disk error on ps1.ochep.ou.edu (latency node).  That system was reinstalled and reconfigured and things should be running OK now on both nodes.
    
    Illinois is still seeing the throughput controller going to "Not Running".  In contact with Aaron about this.   
    
    
    2) Throughput planning (tests/milestones):
    Checksumming will soon be enabled.  We should plan to repeat prior milestones after checksumming is in place.  Also need to check/verify "transaction" throughput (small files ~1 MB; measure how many succeed in a given time); an illustrative sketch appears after these notes.  Hiro will work on determining the parameters for a "transaction" test.  Channels are now in place for Tier-3s (each will shortly have its own channel).  Tier-3 tests should be of appropriate scale but should also measure throughput and transactions.  Hiro's monitor is already managing this and will add Tier-3s as requested.
    
    3) Site reports
    
    	a) BNL - No issues on throughput.  Monitor updated to show the FTS transfer DB.  Errors are visible (at some level) but need work (not online yet).  Logs will also be visible.  Lifetime is ~1 week.  Changed network configuration; the internal network was significantly reworked.  The inter-switch limit of 30 Gbps was eliminated...not sure what the new limit is.  Storage access of 6-7 GB/sec now observed. 
         b) AGLT2  -  Part of winning SC09 bandwidth
         c) OU - No report (Karthik).
    	d) Illinois - No report.  Eventually will contact Hiro about tests.
    
    We plan to meet again next week at the usual time.  Please send along any corrections or updates to the list.  Thanks,
    
    Shawn
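
    An illustrative sketch of the small-file "transaction" test described in item 2 above (not Hiro's actual framework; the transfer command and destination URL are placeholders):
    
        # Create N ~1 MB files, push each with a placeholder transfer command, and
        # report how many succeed within a fixed time window.
        import os
        import subprocess
        import tempfile
        import time
        
        NUM_FILES = 100
        FILE_SIZE = 1024 * 1024   # ~1 MB per file
        TIME_WINDOW = 600         # seconds allowed for the whole test
        DEST_URL = "gsiftp://se.example.org/scratch/txn-test/"  # illustrative destination
        
        def run_transaction_test():
            successes = 0
            start = time.time()
            workdir = tempfile.mkdtemp(prefix="txn-test-")
            for i in range(NUM_FILES):
                if time.time() - start > TIME_WINDOW:
                    break
                src = os.path.join(workdir, "file-%03d.dat" % i)
                with open(src, "wb") as f:
                    f.write(os.urandom(FILE_SIZE))
                rc = subprocess.call(["globus-url-copy", "file://" + src,
                                      DEST_URL + os.path.basename(src)])
                if rc == 0:
                    successes += 1
            elapsed = time.time() - start
            print("%d/%d transfers succeeded in %.0f s" % (successes, NUM_FILES, elapsed))
        
        if __name__ == "__main__":
            run_transaction_test()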

Site news and issues (all sites)

  • T1:
    • last week(s): Network upgrade yesterday - Shigeki Misawa - based on Force10 and Foundry Networks. Started all services from zero to an entire Tier 1 facility in ~3 hours. Some unexpected issues identified (involving unintended package updates). Completely new network, lots of cabling! Five racks of Dell nodes, Dell on-site. Networking going to the new data center. Ordering 1.5 PB of disk, Nexsan disk arrays (FC-connected via PCI to the Thors; the Nexsan controller is powerful). Fully configured S2A990? 2 TB drives providing 2 PB of storage - evaluating. Wei notes that at WT2, Solaris 10 update 7 is seeing interrupts going to only one CPU; not seen at BNL, where CPUs are load-balanced. Wei: should be partially solved with update 8. Now an additional 10G to CERN (20G total now). Finding 1.5 Gbps capacity at times.
    • this week: 960 new cores now commissioned (now being used for Fred's jobs). Production/analysis split to be discussed. Evaluating DDN storage, to be delivered: 2.4 PB raw (1.7 usable); also some small Nexsan arrays to be added to the Thors. Commissioned 3 SL8500 tape libraries, to be shared with RHIC.

  • AGLT2:
    • last week: Taken delivery of full complement of compute nodes and disk servers at both UM and MSU. Plan to run w/ 12 jobs / machine. 24GB mem plus hyperthreading. See https://hep.pa.msu.edu/twiki/bin/view/AGLT2/HepSpecResults.
    • this week: See issues discussed above. Still transitioning to SL5; cleaning up configuration to clear out old history.

  • NET2:
    • last week(s): BDII problem currently - someone is trying to copy a dataset to CERN's scratch disk. Will need to publish SRM information correctly in OSG.
    • this week: Errors from transfers to the CERN scratch space token - required several fixes. Johannes' jobs were running out of storage on worker nodes; fixed. No other outstanding problems; observed users accessing the new data.

  • Aside: Kaushik notes that three metrics are being used by ATLAS to determine whether sites can participate in analysis: Hz (10), efficiency, and total events processed per 24 hours. There is on-going discussion in the ICB (Michael and Jim will be in this meeting). The main point is that ATLAS management should not be making decisions regarding data placement and the use of resources.

  • MWT2:
    • last week(s): Main issue is stabilizing dCache - some issues with pool selection since the upgrade.
    • this week: Consultation with dCache team regarding cost function calculation and load balancing among gridftp doors in latest dCache release. LFC ACL incident; fixed. Procurement proceeding.

  • SWT2 (UTA):
    • last week: Will do the LFC migration later this afternoon. Purchasing proceeding.
    • this week: LFC upgraded; fixed BDII issue; applied SS update; SRM restart to fix reporting bug; purchases coming in

  • SWT2 (OU):
    • last week: Looking at a network bandwidth asymmetry. 80 TB being purchased; ~200 cores. Another 100 TB also on order.
    • this week: Dell has the storage order.

  • WT2:
    • last week(s): relocating the ATLAS releases to a new server; xrootd. Publishing to BDII - working. Site maintenance tomorrow: Linux kernel security patches. Might do the LFC upgrade.
    • this week: completed the LFC migration; 160 TB usable as of last Friday; another 160 TB in January.

Carryover issues (any updates?)

OSG 1.2 deployment (Rob, Xin)

  • last week:
    • BNL updated
  • this week:
    • Any new updates?

OIM issue (Xin)

  • last week:
    • Registration information change for bm-xroot in OIM - Wei will follow up
    • SRM V2 tag - Brian says nothing to do but watch for the change at the end of the month.
  • this week:

Release installation, validation (Xin)

The issue of validating presence, completeness of releases on sites.
  • last meeting
  • this meeting:

HTTP interface to LFC (Charles)

VDT Bestman, Bestman-Xrootd

  • See BestMan page for more instructions & references
  • last week(s)
    • Have discussed adding Adler32 checksums to xrootd. Alex is developing something to calculate this on the fly and expects to release it very soon. Want to supply this to the gridftp server (a streaming illustration follows this list).
    • Need to communicate w/ CERN regarding how this will work with FTS.
  • this week
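
  For illustration only (not Alex's xrootd implementation): an Adler32 checksum can be computed on the fly by updating zlib's adler32 incrementally as the data streams through:

    import zlib
    
    def adler32_of_stream(stream, chunk_size=1024 * 1024):
        """Return the Adler32 of a file-like object as the usual 8-hex-digit string."""
        value = 1  # Adler32 starts at 1, per the zlib convention
        while True:
            chunk = stream.read(chunk_size)
            if not chunk:
                break
            value = zlib.adler32(chunk, value)
        return "%08x" % (value & 0xffffffff)
    
    # Example:
    # with open("some_file.root", "rb") as f:
    #     print(adler32_of_stream(f))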

Local Site Mover

Gratia transfer probes @ Tier 2 sites

Hot topic: SL5 migration

  • last weeks:
    • ADC ops action items, http://indico.cern.ch/getFile.py/access?resId=0&materialId=2&confId=66075
    • Kaushik: we have the green light to do this from ATLAS; however there are some validation jobs still going on and there are some problems to solve. If anyone wants to migrate, go ahead, but not pushing right now. Want to have plenty of time before data comes (means next month or two at the latest). Wait until reprocessing is done - anywhere between 2-7 weeks from now, for both SL5 and OSG 1.2.
    • Consensus: start mid-September for both SL5 and OSG 1.2
    • Shawn: considering rolling part of the AGLT2 infrastructure to SL5 - should they not do this? Probably okay - Michael. Would get some good information. Sites: use this time to sort out migration issues.
    • Milestone: by mid-October all sites should be migrated.
    • What to do about validation? Xin notes that compat libs are needed
    • Consult UpgradeSL5
  • this week

WLCG Capacity Reporting (Karthik)

  • last discussion(s):
    • Note - if you have more than one CE, the availability will take the "OR".
    • Make sure installed capacity is no greater than the pledge.
    • Storage capacity is given to the GIP by one of two information providers (one for dCache, one for Posix-like filesystems) - requires OSG 1.0.4 or later. Note - not important for WLCG, it's not passed on. Karthik notes we have two ATLAS sites that are reporting zero. This is a bit tricky.
    • Have not yet seen a draft report.
    • Double check that the accounting name doesn't get erased. There was a bug in OIM - should be fixed, but check.
    • Reporting comes from two sources: OIM and the GIP from the sites
    • Here is a snapshot of the most recent report for ATLAS sites:
      --------------------------------------------------------------------------------------------------------
      This is a report of Installed computing and storage capacity at sites.
      For more details about installed capacity and its calculation refer to the installed capacity document at
      https://twiki.grid.iu.edu/twiki/pub/Operations/BdiiInstalledCapacityValidation/WLCG_GlueSchemaUsage-1.8.pdf
      --------------------------------------------------------------------------------------------------------
      * Report date: Tue Sep 29 14:40:07
      * ICC: Calculated installed computing capacity in KSI2K
      * OSC: Calculated online storage capacity in GB
      * UL: Upper Limit; LL: Lower Limit. Note: These values are authoritative and are derived from OIMv2 through MyOSG. That does not
      necessarily mean they are correct values. The T2 co-ordinators are responsible for updating those values in OIM and ensuring they
      are correct.
      * %Diff: % Difference between the calculated values and the UL/LL
             -ve %Diff value: Calculated value < Lower limit
             +ve %Diff value: Calculated value > Upper limit
      ~ Indicates possible issues with numbers for a particular site
      -----------------------------------------------------------------------------------------------------------------------------
      #  | SITE                 | ICC        | LL          | UL          | %Diff      | OSC         | LL      | UL      | %Diff   |
      -----------------------------------------------------------------------------------------------------------------------------
                                                            ATLAS sites
      1  | AGLT2                |      5,150 |       4,677 |       4,677 |          9 |    645,022 | 542,000 | 542,000 |      15 |
      2  | ~ AGLT2_CE_2         |        165 |         136 |         136 |         17 |     10,999 |       0 |       0 |     100 |
      3  | ~ BNL_ATLAS_1        |      6,926 |           0 |           0 |        100 |  4,771,823 |       0 |       0 |     100 |
      4  | ~ BNL_ATLAS_2        |      6,926 |           0 |         500 |         92 |  4,771,823 |       0 |       0 |     100 |
      5  | ~ BU_ATLAS_Tier2     |      1,615 |       1,910 |       1,910 |        -18 |        511 | 400,000 | 400,000 | -78,177 |
      6  | ~ MWT2_IU            |        928 |       3,276 |       3,276 |       -252 |          0 | 179,000 | 179,000 |    -100 |
      7  | ~ MWT2_UC            |          0 |       3,276 |       3,276 |       -100 |          0 | 179,000 | 179,000 |    -100 |
      8  | ~ OU_OCHEP_SWT2      |        611 |         464 |         464 |         24 |     11,128 |  16,000 | 120,000 |     -43 |
      9  | ~ SWT2_CPB           |      1,389 |       1,383 |       1,383 |          0 |      5,953 | 235,000 | 235,000 |  -3,847 |
      10 | ~ UTA_SWT2           |        493 |         493 |         493 |          0 |     13,752 |  15,000 |  15,000 |      -9 |
      11 | ~ WT2                |      1,377 |         820 |       1,202 |         12 |          0 |       0 |       0 |       0 |
      -----------------------------------------------------------------------------------------------------------------------------
      
    • Karthik will clarify some issues with Brian
    • Will work site-by-site to get the numbers reporting correctly
    • What about storage information in config ini file?
  • this meeting

AOB

  • last week
    • Wednesday, November 25 - we probably should have a meeting on that day (day before Thanksgiving).
  • this week


-- RobertGardner - 24 Nov 2009
