


Minutes of the Facilities Integration Program meeting, Dec 16, 2009
  • Previous meetings and background : IntegrationProgram
  • Coordinates: Wednesdays, 1:00pm Eastern
    • (605) 715-4900, Access code: 735188; Dial *6 to mute/un-mute.


  • Meeting attendees: Justin, Fred, Rob, Aaron, Nate, Michael, Wei, Shawn, Jim, Booker Bense, Karthik, Nurcan, Armen, Mark, Kaushik, Bob, Charles, John B
  • Apologies: Horst

Integration program update (Rob, Michael)

  • SiteCertificationP11 - FY10Q1
  • Special meetings
    • Tuesday (9am CDT): Frontier/Squid
    • Tuesday (9:30am CDT): Facility working group on analysis queue performance: FacilityWGAP suspended for now
    • Tuesday (12 noon CDT) : Data management
    • Tuesday (2pm CDT): Throughput meetings
  • Upcoming related meetings:
  • US ATLAS persistent chat room http://integrationcloud.campfirenow.com/ (requires account, email Rob), guest (open): http://integrationcloud.campfirenow.com/1391f
  • Program notes:
    • last week(s)
    • this week
      • Opportunistic storage for Dzero - from the OSG production call. They want it at more OSG-US ATLAS sites than use it today, asking for 0.5 to 1 TB. We're not sure what configuring this entails. A request is also coming from Brian Bockelman, but with few details. There are of course authorization and authentication issues that would need to be configured. Mark: UTA has given 10-15 slots on the old cluster with little impact. We need someone from US ATLAS leading the effort to support D0, working through the configuration issues, etc. Mark will follow up with Joel.
      • LHC shutdown a few hours ago - no more operations until February.
      • Operations call this morning - reprocessing operations on December 22, but the Tier 2s will not be used.
      • Interventions should be completed within the January timeframe.

Tier 3 Integration Program (Doug Benjamin & Rik Yoshida)

  • last week(s):
    • no report
  • this week:
    • Storage element subscription to Tier 3 completed. Hiro: it's working.
    • SE problem at ANL - runaway processes - will monitor
    • Panda submission to Tier 3's - Torre and Doug were going to work on this.
    • T3-OSG meeting - security issues discussed. Have some preliminary ideas.
    • Hiro: T3 cleanup - a program is being developed to ship a dump of the T3 LFC to each T3; Charles will adapt ccc.py to use this dump (an SQL database) - see the sketch after this list.
    • Justin: subscriptions working fine at SMU
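    • A minimal sketch of the ccc.py-style consistency check mentioned above, for illustration only: it assumes the shipped LFC dump is an SQLite file with a table "replicas" and a column "sfn" holding full file paths (those names are assumptions, not the actual dump schema), and compares the dump against a walk of the local storage area.

      # Sketch of an LFC-dump vs. local-storage consistency check.
      # Assumed (not from the minutes): SQLite dump with table "replicas",
      # column "sfn" holding full paths under STORAGE_ROOT.
      import os
      import sqlite3

      STORAGE_ROOT = "/xrootd/atlas"   # hypothetical local storage mount point
      DUMP_FILE = "lfc_dump_T3.db"     # hypothetical name of the shipped dump

      def lfc_entries(dump_file):
          """Return the set of file paths recorded in the LFC dump."""
          conn = sqlite3.connect(dump_file)
          try:
              return set(row[0] for row in conn.execute("SELECT sfn FROM replicas"))
          finally:
              conn.close()

      def disk_entries(root):
          """Return the set of file paths actually present on local storage."""
          found = set()
          for dirpath, _dirs, filenames in os.walk(root):
              for name in filenames:
                  found.add(os.path.join(dirpath, name))
          return found

      if __name__ == "__main__":
          in_lfc = lfc_entries(DUMP_FILE)
          on_disk = disk_entries(STORAGE_ROOT)
          print("LFC ghosts (in LFC, not on disk):", len(in_lfc - on_disk))
          print("dark files (on disk, not in LFC):", len(on_disk - in_lfc))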

Operations overview: Production (Kaushik)

Data Management & Storage Validation (Kaushik)

Shifters report (Mark)

  • Reference
  • last meeting:
    Yuri's summary from the weekly ADCoS meeting:
    1)  12/2-3: Another instance of a db release file in the LFC for UTD-HEP, but no longer on disk.  Fixed by Wensheng (thanks!).  RT 14843.  (One more instance of this issue on 12/6 as well.)
    2)  12/3: From Charles at UC:
    We had an apparent power interruption at UC last night at around 2AM CST. Expect some "lost heartbeat" errors from jobs that were running at the time.
    3)  12/3: BNL: From Michael:
    Due to a configuration issue associated with the dccp client, some jobs at BNL failed.  The problem was resolved in the meantime.  (~4k failed jobs.)  eLog 7687.
    4)  12/3: IU_OSG -- Jobs were failing with the error "Put error: lfc-mkdir failed: LFC_HOST iut2-grid5.iu.edu cannot create....  Could not secure the connection |Log put error: lfc-mkdir failed."  From Aaron at MWT2:
    This has been resolved by a restart of proxies at IU_OSG.  RT 14849.
    5)  12/5-7: Power problems at AGLT2 -- from Bob:
    On Saturday night (~11:40pm EST) there was a power hit at Michigan State that took out a number of worker nodes.  It also apparently took out a central air conditioner.  On Sunday night (~11:20pm) the loss of that central air caught up with a major switch room on the MSU campus,
    and took down the network switch equipment for 2 hours, completely isolating more than half of our dCache disk servers from the systems that remained up at the University of Michigan.  Three of these did not restore properly when network connectivity was
    re-established and were manually restarted early this morning; total down time for them was about 8 hours.  All jobs running at MSU at the time were lost.  We had other issues this afternoon with network instability that may have blown our running job load,
    but should now be back on track.  All of these blown jobs should eventually show up with lost heartbeat.
    6)  12/7: SLAC -- The ADCoS shifter reported T1-T2 transfer errors.  ggus 53942.  This issue was resolved by restarting the SRM service.
    7)  12/7: BNL DQ2 site services s/w upgraded to the newest production version (Hiro).
    8)  12/7: AGLT2_PRODDISK to BNL-OSG2_MCDISK transfer errors.  From Shawn:
    We have two storage nodes with dCache service problems.  I believe a simple restart should fix it.  ggus 53915, eLog 7819.
    9)  12/8: Power outage at BNL completed:
    The partial power outage at the RACF that affected a portion of the Linux Farm cluster on Tuesday, Dec. 8 is now over. All affected systems (ATLAS, BRAHMS, LSST, PHENIX and STAR) have been restored and are available to the Condor batch system again.
    Follow-ups from earlier reports:
    (i) BNL -- US ATLAS conditions oracle cluster db maintenance, originally scheduled for 11/12/09, was postponed until Monday, November 16th, and eventually to the 21st of December.
    (ii) BNL -- cyber-security port scans, originally scheduled for December 2/3, have been rescheduled for December 21/22.
  • this meeting:
     Yuri's summary from the weekly ADCoS meeting:
    1)  12/9: Panda server modified to use new db accounts.  Temporarily created a problem with attempts to modify the status of sites via the usual 'curl' interface.  Fixed by Graeme.
    2)  12/9: Some sites noted an increase in the number of pilots waiting in their queues.  Possibly due to (from Torre):
    The autopilot setup on voatlas60 is the same as it's been for a couple of weeks, but condor there has been tuned up and it seems it is now more effective at getting pilots to the US queues.  The motivation for CERN submissions is to have a centrally managed submit point for everyone that provides redundancy for regional submission, so I think we should adapt ourselves to pilots coming from CERN as well as BNL.
    Whatever the nqueue setting for a queue is, each submitter will maintain that nqueue independently, so two equally successful submitters will result in ~double the pilot flow.  Hence I would suggest reducing nqueue (not necessarily by a factor of 2) such that the pilot flow is reasonable again.
    3)  12/10: Job failures at MWT2_IU & IU_OSG with stage-in/out errors -- from Charles:
    dCache service at IU was interrupted for some maintenance which took longer than expected. We're back online now. If job recovery is enabled for MWT2_IU (which I believe is the case) these output files should be recoverable.  RT 14890, eLog 7892.
    4)  12/10 p.m. - 12/11 a.m.: A couple of storage server outages at BNL -- resolved.  eLog 7910.
    5)  12/15: Pilot update from Paul (v41c):
    * Local site mover is now using the --guid option.  Requested by Charles.
    * Correction for the appdir used by CERN-UNVALID, since the previous pilot version caused problems there (pilot v40b used until now).  $SITEROOT was used to build the path to the release instead of schedconfig.appdir.  CERN-PROD and CERN-RELEASE were not affected since $SITEROOT and appdir both point to the .../release area.
    * Pilot options -g and -m can now be used to specify locations and destinations of input and output files in combination with the mv site mover (compatible with NorduGrid).  Requested by Predrag Buncic for the CERNVM project.
    * Empty copyprefix substrings replaced with dummy value. Initially caused problems at UTD-HEP due to misconfiguration in schedconfig.
    * STATUSCODE file now created in all getJob scenarios. Requested by Peter Love.
    * Value of ATLAS_POOLCOND_PATH dumped in pilot log. Requested by Rod.
    * The xrdcp site mover (written by Eric for use at ANALY_LYON) has been updated to also work at ANALY_CERN.
    * Note: There will be at least one more minor pilot release before Christmas.
    6)  12/14: From Bob at AGLT2:
    At approximately 4:50am EST today, cluster activity at AGLT2 began to ramp down.  We discovered processes were hung on dCache admin nodes and probably on a few disk servers as well.  At 10:35am cluster activity resumed to normal after services were restarted.  
    We expect this will throw errors in running jobs during this time period.
    7)  12/15: Job failures at OU with stage-in errors.  Coincided with a pilot update, which exposed some needed updates to the schedconfigdb entries for the site.  Alden made the updates to schedconfigdb; Paul is working on a pilot modification which should be ready in the next day or so.
    Site set to 'off-line'.  RT #14912.  
    12/16 a.m. -- problem now appears to be solved, OU set back to 'on-line'.
    Follow-ups from earlier reports:
    (i) BNL -- US ATLAS conditions oracle cluster db maintenance, originally scheduled for 11/12/09, was postponed until Monday, November 16th, and eventually to the 21st of December.
    (ii) BNL -- cyber-security port scans, originally scheduled for December 2/3, have been rescheduled for December 21/22.
     • Excessive pilots observed at some sites - there is a second pilot submitter instance (at CERN). Look at the nqueue setting; it may need to be tweaked down (see the arithmetic sketch below).
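     • To make the nqueue arithmetic from Torre's note concrete, a small back-of-the-envelope sketch (the numbers are illustrative, not actual site settings): with S submitters each maintaining nqueue pilots, a site sees roughly S x nqueue queued pilots, so the per-submitter nqueue should be scaled down to keep the total near the old target.

       # Rough pilot-flow arithmetic for multiple autopilot submitters (illustrative).
       def suggested_nqueue(old_nqueue, n_submitters):
           """Per-submitter nqueue that keeps the total pilot flow roughly constant."""
           return max(1, old_nqueue // n_submitters)

       old_nqueue = 40      # illustrative value, not an actual queue setting
       submitters = 2       # BNL plus the new CERN submit host
       print("queued pilots if left unchanged:", submitters * old_nqueue)
       print("suggested per-submitter nqueue :", suggested_nqueue(old_nqueue, submitters))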

Analysis queues (Nurcan)

  • Reference:
  • last meeting:
    • One user in US trying to run on first real data: data09_900GeV.00140541.physics_MinBias.merge.AOD.f175_m273. A job on a local file is successful.
    • UAT postmortem on Nov. 19th: http://indicobeta.cern.ch/conferenceDisplay.py?confId=74076
    • Analysis of errors from Saul on a subsample of 5000 failures: http://www-hep.uta.edu/~nurcan/UATcloseout/
      • Two main errors: Athena crash (43.6%) and staging input file failed (43.1%). The Athena crashes mostly refer to user job problems (forgetting to set trigger info to false, release 15.1.x does not support xrootd at SLAC and SWT2, etc.). The stage-in problems mostly refer to BNL (storage server problem) and MWT2 (dCache bugs, file-locking bug in pcache) jobs.
    • "Ran out of memory" failures are from one user at the BNL long queue and AGLT2, as seen in the subsample. I have contacted the user about a possible memory leak in the user's analysis code.
    • DAST team has started training new shifters this month; 3 people in NA time zone, 2 people in EU time zone. 2 more people will train starting in December.
  • this meeting:
    • User activity has already slowed down this week; the expected jump with real data arriving didn't materialize. The next three weeks should be clear for upgrades.
    • No major problems with data access (yet). Sometimes the release isn't installed at the site - why was the job scheduled there? Should we put release matching in? (See the sketch after this list.)
    • Problems accessing the conditions database. Rod has been responding to some of them. Recent releases seem to solve the problems.
    • User support during the break - mostly one shifter on duty. Next week all shifts are in the North American time zone; the following week it will be mainly an EU-zone person, with an NA-zone person only for Thursday-Friday.
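    • On the release-matching question above, a minimal sketch of the idea, purely illustrative (the site names, installed-release data and function are hypothetical, not the actual PanDA brokerage code): candidate analysis sites would simply be filtered on whether the requested release is installed.

      # Illustrative release-matching filter (hypothetical data, not PanDA code).
      installed_releases = {
          "ANALY_SITE_A": {"15.5.1", "15.6.1"},
          "ANALY_SITE_B": {"15.5.1"},
          "ANALY_SITE_C": set(),
      }

      def sites_with_release(requested, releases_by_site):
          """Return only the sites that report the requested release as installed."""
          return [site for site, rels in releases_by_site.items() if requested in rels]

      print(sites_with_release("15.6.1", installed_releases))   # -> ['ANALY_SITE_A']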

DDM Operations (Hiro)

  • Reference
  • last meeting(s):
    • Will be testing FTS 2.2 this week.
    • Problems at AGLT2 - the deletion program killed the namespace service - bulk operations limited in Chimera? We need to follow up on this.
    • ACLs are now all fixed at this point; see Charles email regarding instructions for fixing
    • LFC ghosts at AGLT2 being investigated by Bob and Charles
    • DBRelease files should now be at HOTDISK only
    • Pilot - a new version now handles the file correctly
    • Wei reports DBRelease files in DATADISK - Nov 22 timestamp. Need to follow up (Wei, Hiro to follow up with Stephane). Also notes that deletion doesn't seem to be working.
    • There is some interference between test transfers and data distribution so some sites are flagged as problematic
    • user area files are being deleted - if not let Hiro know.
  • this meeting:
    • looks okay overall - very efficient for the last week.
    • There was a bug in the pilot code that would register files incorrectly in the LFC - expected to be fixed in a pilot update. More critical for T3s.
    • Discussing with the DDM developers how to speed up callbacks.
    • FTS checksum checking - testing version 2.2 at BNL; it needs version 2.2.2, not 2.2.0. Still waiting for the production version to arrive; will postpone the throughput test with checksums until that is done.
    • Should start monitoring SAM tests. We need to get a hold of the ATLAS availability calculation.
    • There is a package required in the OSG software - Michael is working with Alessandro De G to
    • Should we upgrade DQ2 site services for the Tier 2s? NE, MW, SW. Hiro will update the DQ2 twiki.

Conditions data access from Tier 2, Tier 3 (Fred, John DeStefano)

  • Reference
  • last week
    • https://www.racf.bnl.gov/docs/services/frontier/meetings/minutes/20091124/view
    • New squid server at BNL, separate from the launchpad: frontier03.usatlas.bnl.gov (a connectivity-check sketch follows at the end of this section)
    • TOA listings in place now
    • PFC generation now working. Xin: integrated Alessandro's script for use on OSG sites. Working at BNL - now working on the Tier 2s, sending the installation job via Panda. AGLT2 needs to update the OSG wn-client. The xrootd sites don't have the same setup.
    • Presentations tomorrow and Monday
    • Frontier & Oracle testing - hammering the servers, sending blocks of 3250 jobs. Very good results with Frontier: millions of hits on the squid with only 60K hits on Oracle (which is thus protected). Repeating the tests but accessing Oracle directly - couldn't get above 300 jobs; now running about 650 simultaneous jobs. John and Carlos are looking at dCache and Oracle load. Impressive - only one failed job. Each job takes 4 GB of raw data and makes histograms (a reasonable pattern).
    • Yesterday saw 6 GB/s throughput from the dCache pools. Today only 4 GB/s, though Oracle is heavily loaded at 80% CPU but holding up well; 20 MB/s peak on the Oracle nodes. Protecting Oracle from throughput as well as queries. Utilization of Oracle when using Frontier was maybe 5% on each node. Impressive.
  • this week
    • Fred testing 650 job submissions - remote access to newest version of the frontier launch pad.
    • Has tested all US sites - all passed DONE
    • Xin: when sites are updated to OSG 1.2, make sure wn-client is updated as well.
    • Wei: some HC tests using these conditions-access jobs are hitting the release server heavily. Fred will follow up.
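    • A minimal connectivity-check sketch for the new BNL squid mentioned above: it only verifies that an HTTP request can be routed through the proxy. The proxy port (3128, the squid default) and the test URL are assumptions for illustration, not values taken from these minutes.

      # Check that a Frontier squid proxy accepts and forwards plain HTTP requests.
      # The port (3128) and the test URL are assumptions for illustration only.
      import urllib.request

      SQUID = "http://frontier03.usatlas.bnl.gov:3128"   # proxy under test (port assumed)
      TEST_URL = "http://www.usatlas.bnl.gov/"           # any reachable HTTP URL

      opener = urllib.request.build_opener(urllib.request.ProxyHandler({"http": SQUID}))
      try:
          response = opener.open(TEST_URL, timeout=10)
          print("proxy OK, HTTP status", response.getcode())
      except Exception as exc:
          print("proxy check failed:", exc)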

Throughput Initiative (Shawn)

  • NetworkMonitoring
  • https://www.usatlas.bnl.gov/dq2/throughput
  • last week(s):
  • this week:
    USATLAS Throughput Meeting Notes
    		December 15, 2009
    Attending:  Shawn, Andy, Aaron, Charles, Sarah, Dave, Hiro, Jeff, Mark, Karthik
    Excused: Horst
    (Note there are some "action" items embedded below as well as some un-answered questions for sites that missed this call)
    1) perfSONAR status at sites (merged with "Site reports")
        a) AGLT2 perfSONAR - Noted 8 of 12 sites show "No" in the Bi-directional column.  Maybe this is a firewall issue at some sites?  Jeff reported perfSONAR will need a minor fix to allow specifying port ranges for firewall configs (next bugfix release).  Both service nodes running OK.  Not easily seeing others in the USATLAS or LHC communities of interest (sometimes they show up, sometimes not).
        b) MWT2_IU - Issues: disk filled on the latency box (log rotation failed... now fixed).  Throughput not showing bi-directional tests.  Lookup service not showing all sites.  Need to add in the other USATLAS Tier-2s.
        c) MWT2_UC - Issues: the snmp_ma service on the UC boxes is "down".  Similar issue to IU with visible hosts in the "USATLAS" community.  Both service boxes operational.  Need to add in the other USATLAS Tier-2s.  Losses between SWT2/SMU and UC; losses to SWT2 are periodic and bursty, to SMU more uniform.  Throughput results asymmetric for many destinations: UC<->BNL, UC<->OU, UC<->AGLT2?
        d) NET2_BU - No report.  Question about the Harvard site: will it run a set of perfSONAR instances?  Are all Tier-2s being tested against for both throughput and latency?
        e) SWT2_OU - Issues: All services running OK.  The ping service was running on the throughput node; disabled this week.  (Andy) Likely cause was a PID file with another PID running under an old/unused OWAMP.  Throughput tests to BU, BNL, AGLT2_UM, AGLT2_MSU, SWT2-UTA.  Don't have MWT2_IU or MWT2_UC; will add the missing hosts.  Some missing tests during the interval.  Latency tests only to MWT2_IU, SWT2_UTA; will add the rest.  Big asymmetry: BNL->OU ~30 Mbps, OU->BNL 560 Mbps.
        f) SWT2_UTA - Issues: All services running OK.  At v3.1.1.  No tests scheduled yet; need to look into this.  Will add all missing sites.  Mark asked (via email) about another topic: tuning recommendations related to /etc/sysctl.conf on sites?  (A sketch of checking the usual TCP buffer settings follows at the end of this section.)
        g) SLAC - No report.  Question about perfSONAR status at SLAC: what is the prognosis for getting these systems running?
        h) BNL - No report.  Need John Bigrow to verify that all Tier-2 sites are being tested against for latency and throughput.
        i) Report from Illinois (Dave) - Lost a hard disk on the single perfSONAR instance and had to reinstall/reconfigure.  All USATLAS sites available for test configuration.  Just today found that throughput testing was down.  Lots of errors in /var/log/syslog related to an Internet2 site.  A reboot temporarily fixed the issue (throughput restarted) but the Internet2 node was still causing errors; the fix was to remove that Internet2 node from scheduled testing.  Dave also pointed out that the "first node" vs "second node" ordering is simply alphabetic!
    2) Throughput milestones for next quarter.  Will be the focus of the next meeting (January 12, 2010).  Transaction tests and redoing throughput tests with FTS checksumming enabled should be scheduled in January during downtime.  We should have until the 3rd week of February to do the various tests/milestones.
    3) Site reports - merged into topic 1
    AOB:  Hiro reported no checksum support in FTS 2.2 at BNL (production version); an upgrade in January to a checksum-enabled version is likely.  Shawn will distribute a copy of the current perfSONAR spreadsheet.  Sites are asked to verify their info.  All Tier-2 sites should ensure they are testing against all listed Tier-2 sites.
    NOTE: Spreadsheet attached to this email.  Updated BNL node roles (thanks Dave!)...please VERIFY your site details including Node names/roles and versions.  All Tier-2 sites should also add missing Tier-2 sites to their scheduled perfSONAR latency/throughput tests.
    Next meeting January 12, 2010.  Next year meetings will be bi-weekly on the second and fourth Tuesday of each month.
    Season's Greetings and Happy Holidays to all.
    • Each T2 must test against all other T2s
    • Check spreadsheet for correctness
    • Asymmetries between certain pairs - need more data
    • Jan 12 next meeting - will start bi-weekly.
    • Will start a transaction-type test (large number of small files; checksumming needed)
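    • On the /etc/sysctl.conf question raised in item 1f above, a small sketch that reads the current kernel TCP buffer settings from /proc and compares them with example targets. The target values are commonly quoted starting points for long-distance 10 GbE transfers, not official USATLAS recommendations.

      # Compare current TCP buffer sysctls against illustrative tuning targets.
      # Targets are common wide-area 10 GbE starting points, not official values.
      TARGETS = {
          "net.core.rmem_max": "67108864",
          "net.core.wmem_max": "67108864",
          "net.ipv4.tcp_rmem": "4096 87380 67108864",
          "net.ipv4.tcp_wmem": "4096 65536 67108864",
      }

      def read_sysctl(name):
          """Return the current value of a sysctl by reading /proc/sys."""
          path = "/proc/sys/" + name.replace(".", "/")
          with open(path) as f:
              return " ".join(f.read().split())

      for key, target in TARGETS.items():
          current = read_sysctl(key)
          note = "" if current == target else "   <-- differs from suggested value"
          print("%-20s current: %-26s suggested: %s%s" % (key, current, target, note))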

Site news and issues (all sites)

  • T1:
    • last week(s): 960 new cores now commissioned (now being used for Fred's jobs). Production/analysis split to be discussed. Evaluating DDN storage, to be delivered 2.4 PB raw (1.7 useable), also some small Nexan arrays to be added to the Thors. Commissioned 3 SL8500 tape libraries, to be shared with RHIC.
    • this week: One of the production Panda sites is being used for high-lumi pileup, high-memory jobs (3 GB/core). Stability issues with the Thor/Thumpers - some problems with high packet rates on link-aggregated NICs. The 2 PB disk purchase is ongoing. Another Force10 switch with 60 Gbps inter-switch links.

  • AGLT2:
    • last week: See issues discussed above. Still transitioning to SL5; cleaning up the configuration to clear out old history.
    • this week: Running well - an issue where dccp copies seemed to hang required a reboot of the dCache head node. Would like to do some upgrades of the storage nodes on Friday. Trying out a Rocks 5 build for updating nodes.

  • NET2:
    • last week(s): Errors from transfers to the CERN scratch space token required several fixes. Johannes' jobs were running out of storage on worker nodes; fixed. No other outstanding problems; observed users accessing the new data.
    • this week: Working with local users so they can access pool conditions data at HU. Separate install queue for software kits.

  • MWT2:
    • last week(s): Consultation with dCache team regarding cost function calculation and load balancing among gridftp doors in latest dCache release. LFC ACL incident; fixed. Procurement proceeding.
    • this week: Updating the Myricom driver as part of troubleshooting.

  • SWT2 (UTA):
    • last week: LFC upgraded; fixed BDII issue; applied SS update; SRM restart to fix reporting bug; purchases coming in.
    • this week: Major upgrade at UTA_SWT2 - replaced storage system, new compute nodes - all in place. Reinstalling OSG. SRM, xroot all up. Hopefully up in a day or two.

  • SWT2 (OU):
    • last week: Looking at a network bandwidth asymmetry. 80 TB being purchased; ~200 cores. Another 100 TB also on order.
    • this week: Equipment for storage is being delivered. Will be taking a big downtime to do upgrades, OSG 1.2, SL 5, etc.

  • WT2:
    • last week(s): completed the LFC migration; 160 TB usable as of last Friday; 160 TB in January
    • this week: All is well. Some SL5 migration still ongoing. A number of older machines from BaBar have suddenly become available. Working on an xrootd-on-Solaris bug.

Carryover issues (any updates?)

OSG 1.2 deployment (Rob, Xin)

  • last week:
    • BNL updated
  • this week:
    • Any new updates?

OIM issue (Xin)

  • last week:
    • Registration information change for bm-xroot in OIM - Wei will follow-up
    • SRM V2 tag - Brian says nothing to do but watch for the change at the end of the month.
  • this week:

Release installation, validation (Xin)

The issue of validating presence, completeness of releases on sites.
  • last meeting
  • this meeting:

HTTP interface to LFC (Charles)

VDT Bestman, Bestman-Xrootd

  • See BestMan page for more instructions & references
  • last week(s)
    • Have discussed adding an Adler32 checksum to xrootd. Alex is developing something to calculate this on the fly and expects to release it very soon. Want to supply this to the gridftp server. (A small checksum sketch follows at the end of this section.)
    • Need to communicate w/ CERN regarding how this will work with FTS.
  • this week
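    • A minimal sketch of the Adler32 file checksum discussed under "last week(s)", using Python's zlib. The 8-character zero-padded hex formatting follows the convention commonly used by grid storage tools; the exact format xrootd and FTS will expect should be confirmed with the developers.

      # Compute an Adler32 checksum of a file in chunks using zlib.
      import zlib

      def adler32_of_file(path, chunk_size=1024 * 1024):
          """Return the Adler32 checksum of a file as an 8-char zero-padded hex string."""
          value = 1                      # zlib's defined Adler32 starting value
          with open(path, "rb") as f:
              while True:
                  chunk = f.read(chunk_size)
                  if not chunk:
                      break
                  value = zlib.adler32(chunk, value)
          return "%08x" % (value & 0xFFFFFFFF)

      if __name__ == "__main__":
          import sys
          print(adler32_of_file(sys.argv[1]))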

Local Site Mover

  • Specification: LocalSiteMover
  • code
  • this week if updates:
    • BNL has an lsm-get implemented and is just finishing the test cases [Pedro]. (A minimal sketch follows below.)
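    • A minimal lsm-get sketch, following the general shape of the LocalSiteMover specification (source and destination arguments, exit code 0 on success). The dccp copy command here is a placeholder; the actual BNL implementation and the exact argument list should be taken from the LocalSiteMover page.

      # Skeleton of an lsm-get style wrapper: copy source -> destination, exit 0 on success.
      # The copy tool (dccp) and the argument handling are placeholders for illustration.
      import os
      import subprocess
      import sys

      def lsm_get(source, destination):
          """Copy a file from site storage to a local path; return 0 on success."""
          dest_dir = os.path.dirname(destination)
          if dest_dir and not os.path.isdir(dest_dir):
              os.makedirs(dest_dir)
          rc = subprocess.call(["dccp", source, destination])
          if rc != 0:
              return rc
          return 0 if os.path.isfile(destination) else 1

      if __name__ == "__main__":
          if len(sys.argv) < 3:
              sys.stderr.write("usage: lsm-get <source> <destination>\n")
              sys.exit(2)
          sys.exit(lsm_get(sys.argv[1], sys.argv[2]))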

Gratia transfer probes @ Tier 2 sites

Hot topic: SL5 migration

  • last weeks:
    • ACD ops action items, http://indico.cern.ch/getFile.py/access?resId=0&materialId=2&confId=66075
    • Kaushik: we have the green light to do this from ATLAS; however there are some validation jobs still going on and there are some problems to solve. If anyone wants to migrate, go ahead, but not pushing right now. Want to have plenty of time before data comes (means next month or two at the latest). Wait until reprocessing is done - anywhere between 2-7 weeks from now, for both SL5 and OSG 1.2.
    • Consensus: start mid-September for both SL5 and OSG 1.2
    • Shawn: considering rolling part of the AGLT2 infrastructure to SL5 - should they not do this? Probably okay - Michael. Would get some good information. Sites: use this time to sort out migration issues.
    • Milestone: by mid-October all sites should be migrated.
    • What to do about validation? Xin notes that compat libs are needed
    • Consult UpgradeSL5
  • this week

WLCG Capacity Reporting (Karthik)

  • last discussion(s):
    • Note - if you have more than one CE, the availability will take the "OR".
    • Make sure installed capacity is no greater than the pledge.
    • Storage capacity is given to the GIP by one of two information providers (one for dCache, one for POSIX-like filesystems) - requires OSG 1.0.4 or later. Note - not important for WLCG, it's not passed on. Karthik notes we have two ATLAS sites that are reporting zero. This is a bit tricky.
    • Have not seen yet a draft report.
    • Double check that the accounting name doesn't get erased. There was a bug in OIM - it should be fixed, but check.
    • Reporting comes from two sources: OIM and the GIP at the sites
    • Here is a snapshot of the most recent report for ATLAS sites:
      This is a report of Installed computing and storage capacity at sites.
      For more details about installed capacity and its calculation refer to the installed capacity document at
      * Report date: Tue Sep 29 14:40:07
      * ICC: Calculated installed computing capacity in KSI2K
      * OSC: Calculated online storage capacity in GB
      * UL: Upper Limit; LL: Lower Limit. Note: These values are authoritative and are derived from OIMv2 through MyOSG. That does not
      necessarily mean they are correct values. The T2 co-ordinators are responsible for updating those values in OIM and ensuring they
      are correct.
       * %Diff: % Difference between the calculated values and the UL/LL (a small sketch of this calculation follows at the end of this section)
             -ve %Diff value: Calculated value < Lower limit
             +ve %Diff value: Calculated value > Upper limit
      ~ Indicates possible issues with numbers for a particular site
      #  | SITE                 | ICC        | LL          | UL          | %Diff      | OSC         | LL      | UL      | %Diff   |
                                                            ATLAS sites
      1  | AGLT2                |      5,150 |       4,677 |       4,677 |          9 |    645,022 | 542,000 | 542,000 |      15 |
      2  | ~ AGLT2_CE_2         |        165 |         136 |         136 |         17 |     10,999 |       0 |       0 |     100 |
      3  | ~ BNL_ATLAS_1        |      6,926 |           0 |           0 |        100 |  4,771,823 |       0 |       0 |     100 |
      4  | ~ BNL_ATLAS_2        |      6,926 |           0 |         500 |         92 |  4,771,823 |       0 |       0 |     100 |
      5  | ~ BU_ATLAS_Tier2     |      1,615 |       1,910 |       1,910 |        -18 |        511 | 400,000 | 400,000 | -78,177 |
      6  | ~ MWT2_IU            |        928 |       3,276 |       3,276 |       -252 |          0 | 179,000 | 179,000 |    -100 |
      7  | ~ MWT2_UC            |          0 |       3,276 |       3,276 |       -100 |          0 | 179,000 | 179,000 |    -100 |
      8  | ~ OU_OCHEP_SWT2      |        611 |         464 |         464 |         24 |     11,128 |  16,000 | 120,000 |     -43 |
      9  | ~ SWT2_CPB           |      1,389 |       1,383 |       1,383 |          0 |      5,953 | 235,000 | 235,000 |  -3,847 |
      10 | ~ UTA_SWT2           |        493 |         493 |         493 |          0 |     13,752 |  15,000 |  15,000 |      -9 |
      11 | ~ WT2                |      1,377 |         820 |       1,202 |         12 |          0 |       0 |       0 |       0 |
    • Karthik will clarify some issues with Brian
    • Will work site-by-site to get the numbers reporting correctly
    • What about storage information in config ini file?
  • this meeting
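    • A small sketch of the %Diff calculation as implied by the column notes and the table values above (e.g. AGLT2 ICC: (5150-4677)/5150*100 ~ 9): the difference is taken relative to the calculated value, against the upper limit when above it and the lower limit when below it. This is an inference from the report, not the official script.

      # Reproduce the %Diff column: calculated capacity vs. the OIM upper/lower limits,
      # expressed relative to the calculated value (inferred from the table, not official).
      def pct_diff(calculated, lower, upper):
          """Percent difference of the calculated capacity with respect to the OIM limits."""
          if calculated == 0:
              return -100 if lower > 0 else 0   # mirrors the rows that report zero
          if calculated > upper:
              return int(100.0 * (calculated - upper) / calculated)
          if calculated < lower:
              return int(100.0 * (calculated - lower) / calculated)
          return 0

      print(pct_diff(5150, 4677, 4677))   # AGLT2 ICC -> 9
      print(pct_diff(1615, 1910, 1910))   # BU ICC    -> -18
      print(pct_diff(611, 464, 464))      # OU ICC    -> 24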



-- RobertGardner - 15 Dec 2009
