
MinutesMay25

Introduction

Minutes of the Facilities Integration Program meeting, May 25, 2011
  • Previous meetings and background : IntegrationProgram
  • Coordinates: Wednesdays, 1:00pm Eastern
  • Our Skype-capable conference line (press 6 to mute); please announce yourself in a quiet moment after you connect:
    • USA Toll-Free: (877)336-1839
    • USA Caller Paid/International Toll : (636)651-0008
    • ACCESS CODE: 3444755

Attending

  • Meeting attendees: Rob, Fred, Dave, John DeStefano, John Brunelle, Patrick, Jason, Horst, Bob, Saul, Charles, Wei, Aaron, Sarah, Tom, Karthik, Wensheng, Hiro, Xin, Alden, Armen, Mark, Kaushik, Philippe
  • Apologies: Doug, Michael, Shawn, Nate, AK

Integration program update (Rob, Michael)

  • Special meetings
    • Tuesday (12 noon CDT, weekly - convened by Kaushik) : Data management
    • Tuesday (2pm CDT, bi-weekly - convened by Shawn): Throughput meetings
  • Upcoming related meetings:
  • For reference:
  • Program notes:
    • last week(s)
      • Integration program from this quarter, FY11Q3
      • Recap of May 12 LHCONE summit meeting in Washington DC
        • http://www.internet2.edu/science/LHCONE.html
        • Discuss some details during the Networking section below
        • Michael - this activity is motivated by the fact that the LHC computing models are being re-thought, with implications for network traffic. Is the existing infrastructure adequate? The effort is headed by David Foster, head of LHCOPN (Tier 0-Tier 1). There are architecture designs available, and there is a lot of engagement with the user community.
      • NEW US ATLAS Facilities meeting on virtual machine provisioning and configuration management, FacilitiesMeetingConfigVM
        • First day - Includes hands-on type demonstrations, presentations, discussions
        • Digest overnight, continue discussion, look for commonalities, look for what others are doing, move the facility forward with less effort
        • Please contribute to this - mid-June, Doodle poll available
      • There is an OSG site administrators' meeting being planned for August.
      • Opportunistic usage - there has not been much load from HCC as of yet. Have invited a second VO - Engage (a collection of VOs).
      • LHC status - there was machine development prior to the technical stop. Ramping up now, expect collisions tonight, and more interest in physics analysis. Plan interventions carefully.
    • this week
      • US ATLAS Facilities meeting on virtual machine provisioning and configuration management, FacilitiesMeetingConfigVM
      • Dates fixed June 15, 16, 2011
      • Will be collecting informal presentations/contributions
      • BNL rooms TBD, site access

OSG Opportunistic Access (Rob)

last week(s)
  • SupportingHCC NEW - VO portfolio for HCC
  • Results through May 24: http://red-web.unl.edu/gratia/xml/facility_success_cumulative_smry?span=86400&facility=AGLT2*|ATLAS|MWT2&probe=.*&resource-type=Batch&vo=hcc&user=.*&starttime=2011-05-01+00%3A00%3A00&exclude-facility=NONE|Generic|Obsolete&exclude-user=NONE&endtime=2011-05-24+23%3A59%3A59&exclude-vo=unknown|other
  • BU - just haven't got to it yet; not sure about ...
  • UTA_SWT2 - plan to enable after cluster is ready
  • WT2 - waiting for Dan Fraser. Need to follow-up.
this week
  • HCC - no significant issues
  • HCC - SLAC will meet this week
  • Engage - next week

Operations overview: Production and Analysis (Kaushik)

  • Production reference:
  • last meeting(s):
    • Now have an analysis backlog - full and I/O heavy. PRODDISK areas are filling; watch them.
    • mc10c still going, plus reprocessing, and there are some new requests; pile-up production is going on as well.
    • Awaiting the mc11 G4 task.
    • New site services are coming online - interest in DATADISK. (T2s presently can pull data from BNL only, and are therefore not getting some analysis jobs.)
    • Once this restriction is removed, T2s will be able to get data from outside the US.
    • Starting tomorrow, we might see more analysis in the US.
  • this week:
    • Expect to be quite full for a while to come.
    • Aborted tasks due to calibration mistake. New mc11 started today - one task in the US
    • mc10b - finished, but there were issues in the FR and IT clouds doing reconstruction (none done at T2s due to high I/O requirements). MWT2 was loaned to FR and IT to help meet the backlog.
    • May even loan capacity to the German T1 - to meet physics demand.
    • More MC in the pipeline.
    • Last night: production dropped off because nqueue was set to zero. Xin reports an email exchange with Rod and others. What happened? Alden: a database updater problem in the pilot controller - NULL values were attempted, which are not allowed; a protection had been put in place and has since been backed out. (A minimal guard sketch follows this list.)
    • More user analysis coming
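    • A minimal sketch (hypothetical; not the actual pilot-controller code) of the kind of NULL protection described above, using sqlite3 as a stand-in for the production database and made-up table/column names:

      import sqlite3

      # Illustrative schema only; the real pilot-controller table and columns differ.
      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE queue_status (site TEXT NOT NULL, nqueue INTEGER NOT NULL)")
      conn.execute("INSERT INTO queue_status VALUES ('EXAMPLE_SITE', 10)")

      def safe_update_nqueue(conn, site, nqueue):
          # Refuse to write NULL/None into a NOT NULL column instead of failing mid-update.
          if site is None or nqueue is None:
              raise ValueError("refusing update: NULL value for site=%r nqueue=%r" % (site, nqueue))
          conn.execute("UPDATE queue_status SET nqueue = ? WHERE site = ?", (nqueue, site))
          conn.commit()

      safe_update_nqueue(conn, "EXAMPLE_SITE", 0)         # fine
      try:
          safe_update_nqueue(conn, "EXAMPLE_SITE", None)  # the failure mode described above
      except ValueError as err:
          print(err)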

Data Management and Storage Validation (Armen)

Shifters report (Mark)

  • Reference
  • last meeting: Operations summary:
    Yuri's summary from the weekly ADCoS meeting:
    http://www-hep.uta.edu/~sosebee/ADCoS/ADCoS-status-summary-5_16_2011.html
    
    1)  5/13: From Saul & John at NET2: 1000-2000 failing jobs with get errors at HU_ATLAS_Tier2 due to a network problem.
    We're working on it and will reduce the number of running jobs in the meantime.  ggus 70598 was opened during this period (lsm timeout errors).  Problem 
    now resolved - ggus ticket closed on 5/16.  eLog 25367.
    2)  5/14: The queue BNL_ATLAS_2 was set on-line temporarily to enable the processing of some high priority jobs requiring > 2 GB of memory.  (This queue is 
    normally off-line.)  eLog 25387.
    3)  5/14: MWT2_UC - problem with a storage server (failed disk).  Queues set off-line while the disk was being replaced.  Problem fixed, queues back on-line 
    as of ~2:30 p.m. CST.
    4)  5/16: SLAC - failed functional test transfers ("gridftp_copy_wait: Connection timed out").  From Wei: The URL copy timeout parameter in the FTS channel 
    STAR-SERV04SRM (to SLAC) is set to 800 sec, much shorter than almost all other channels. This is the cause of all the failures I checked. I will set this parameter to 
    something longer.  ggus 70641 closed, eLog 25455.  (A small rate calculation illustrating the timeout follows this list.)
    5)  5/17: Previously announced ADCR database intervention canceled.  See eLog 25463 (and the message thread therein).
    6)  5/17: SWT2_CPB maintenance outage for cluster software updates, reposition a couple of racks, etc.  Expect to complete by late afternoon/ early evening 
    5/18.  eLog 25474, https://savannah.cern.ch/support/index.php?121013.
    7)  5/17: AGLT2_USERDISK to MAIGRID_LOCALGROUPDISK file transfer failures ("globus_ftp_client: Connection timed out").  Appears to be a network 
    routing problem between the sites.  ggus 70671 in-progress, eLog 25480.
    8)  5/18 early a.m.: ADCR database hardware problem (disk failure).  For now db admins have switched over to a back-up instance of the database.  
    See: https://atlas-logbook.cern.ch/elog/ATLAS+Computer+Operations+Logbook/25499.
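    A back-of-the-envelope check on the 800-second URL-copy timeout mentioned in item 4 above (file sizes here are
    illustrative, not taken from the actual failed transfers): the sustained rate needed to finish inside the timeout
    grows linearly with file size, so large files are the first to hit it.

        # Minimum sustained rate needed to finish within the FTS urlcopy timeout.
        timeout_s = 800.0
        for size_gb in (1, 2, 4, 8):
            size_mb = size_gb * 1024.0
            print("%d GB file needs >= %.1f MB/s to finish in %.0f s" % (size_gb, size_mb / timeout_s, timeout_s))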
    
    Follow-ups from earlier reports:
    
    (i)  4/8: NERSC - file transfer errors.  See ggus 69526 (in-progress), eLog 24176.
    Update 4/19: some progress has been made on understanding the issue(s) - will close this ticket once it appears everything is working correctly.
    (ii)  4/8: OU_OSCER_ATLAS - still see intermittent job failures with segfault errors.  Site was set off-line 4/11 due to a spike in the failure rate.  Discussed in: 
    https://savannah.cern.ch/support/?120307 (site exclusion), ggus 69558 / RT 19757, eLog 24133/92, https://savannah.cern.ch/bugs/index.php?79656.
    Update 5/16: No conclusive understanding of the seg fault job failures.  Decided to set the site back on-line (5/16) to see if the problem persists.  Awaiting 
    new results (so far no jobs have run at the site).
    (iii)  5/2: UTD_HOTDISK file transfer errors ("failed to contact on remote SRM [httpg://fester.utdallas.edu:8446/srm/v2/server]").  From Joe: Hardware failure on 
    our system disk. Currently running with a spare having out of date certificates. Our sys-admin is working on the problem.  ggus 70196 in-progress, eLog 24971.
    Update 5/10: Site admin reported that UTD was ready for new test jobs, but they failed with "Required CMTCONFIG (i686-slc5-gcc43-opt) incompatible with 
    that of local system (local cmtconfig not set)" (evgen jobs) and missing input dataset (G4sim).  Under investigation.   https://savannah.cern.ch/support/?120588, 
    eLog 25250.
    Update 5/14: Issue with some problematic atlas s/w releases has been resolved.  ggus 70196 closed, queues set back on-line.  eLog 25377.
    (iv)  5/4: OU_OCHEP_SWT2_PRODDISK - file transfer failures due to checksum errors ("[INTERNAL_ERROR] Destination file/user checksum mismatch]").  
    Horst & Hiro are investigating.  https://savannah.cern.ch/bugs/index.php?81834, eLog 25039.
    Update 5/13: No more failures after 2011-05-05 06:08:45 - Savannah ticket closed.
    (v)  5/10 a.m. - from Saul at NET2: We had about 500 jobs fail at BU_ATLAS_Tier2o last night due to a bad node.   It's now off-line.  Later in the evening, from 
    John: We're swapping around some internal disk behind atlasproddisk, and we're draining the queues so that we can make the final switchover tomorrow 
    morning after the sites have quiesced.  Panda queues set to 'brokeroff'.
    Update 5/17: queues back on-line.
    (vi)  5/11: WISC DDM failures.  Blacklisted in DDM: https://savannah.cern.ch/support/index.php?120901.  ggus 70467.  Issue is a cooling system problem in a 
    data center.  (Also, there seem to be some older, still open Savannah tickets related to DDM errors at the site?)
    Update 5/12: From Wen - The cooling problem is fixed. Now the data servers are back.  ggus 70467 closed.
    Update 5/14: Cooling problem recurred.  Issue resolved, ggus 70599 closed.
    
    • ADCR DB outage - affecting PanDA. Moved to a backup instance. Recovering now.
    • Pilot update
    • OU OSCER site - working with Horst. Seg fault errors that looked like a site issue. No conclusive understanding yet (now there's a brokerage issue).
    • Many tickets closed
  • this meeting: Operations summary:
    Yuri's summary from the weekly ADCoS meeting (this week provided by Jarka Schovancova):
    http://indico.cern.ch/getFile.py/access?contribId=1&resId=0&materialId=slides&confId=140582
    
    1)  In general several older Savannah DDM tickets were resolved and/or closed.  Thanks.
    2)  5/19: SMU_LOCALGROUPDISK - DDM failures with "error during TRANSFER_FINALIZATION file/user checksum mismatch."  Justin at SMU thinks this 
    issue has been resolved.  Awaiting confirmation so ggus 70737 can be closed.  eLog 25537.
    3)  5/19: WISC_LOCALGROUPDISK - DDM failures with "[GRID_FTP_ERROR] globus_ftp_client: the server responded with an error 500."  Wen reported 
    the problem has been fixed (configured the BestMan server to obtain checksum values).  Site was blacklisted in DDM while the issue was being addressed - 
    since removed (https://savannah.cern.ch/support/?121061).  ggus 70734 closed, eLog 25664.
    4)  5/23: New pilot release from Paul (version SULU 47d).  Details here:
    http://www-hep.uta.edu/~sosebee/ADCoS/pilot-version_SULU_47d.html
    5)  5/23: AGLT2 network incident - from Shawn: We had a dCache storage node reorder its NICs, breaking the bonding configuration.  This has been fixed 
    now. To prevent a recurrence we assigned HWADDR in the relevant /etc/sysconfig/network-scripts/ifcfg-ethX files (an illustrative sketch follows this list).  eLog 25692.
    6)  5/24: NET2 - DDM transfer errors. Saul reported that the underlying issue was a networking problem that caused a gatekeeper to become overloaded. 
    Thinks the issue is now resolved. https://gus.fzk.de/ws/ticket_info.php?ticket=70844, eLog 25722.  
    Savannah site exclusion: https://savannah.cern.ch/support/?121125.
    7)  5/24: Charles announced an updated version of the pandamover-cleanup.py script.  See:
    http://repo.mwt2.org/viewvc/admin-scripts/lfc/pandamover-cleanup.py, and the talk by Tadashi regarding updated procedures for pandamover cleaning: 
    https://indico.cern.ch/conferenceDisplay.py?confId=140214.
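    An illustrative sketch of the NIC-pinning fix from item 5 above: write HWADDR= entries into the RHEL-style
    ifcfg files so the ethX names stay bound to specific MAC addresses across reboots. The paths are the standard
    /etc/sysconfig ones; the script itself is an assumption for illustration, not AGLT2's actual procedure.

        import glob
        import os
        import re

        SYSCONFIG = "/etc/sysconfig/network-scripts"
        HW_LINE = re.compile(r"^HWADDR=.*$", re.M)

        def current_mac(iface):
            # MAC address the kernel currently reports for this interface.
            with open("/sys/class/net/%s/address" % iface) as f:
                return f.read().strip().upper()

        def pin_hwaddr(dry_run=True):
            # Add or refresh HWADDR= in each ifcfg-ethX so the ethX name stays tied
            # to the same physical NIC (the fix described in item 5).
            for cfg in sorted(glob.glob(os.path.join(SYSCONFIG, "ifcfg-eth*"))):
                iface = os.path.basename(cfg).replace("ifcfg-", "")
                mac = current_mac(iface)
                with open(cfg) as f:
                    text = f.read()
                if HW_LINE.search(text):
                    text = HW_LINE.sub("HWADDR=%s" % mac, text)
                else:
                    text = text.rstrip("\n") + "\nHWADDR=%s\n" % mac
                print("%s -> HWADDR=%s" % (cfg, mac))
                if not dry_run:
                    with open(cfg, "w") as f:
                        f.write(text)

        if __name__ == "__main__":
            pin_hwaddr(dry_run=True)  # print only; set dry_run=False to rewrite the files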
    
    Follow-ups from earlier reports:
    (i)  4/8: NERSC - file transfer errors.  See ggus 69526 (in-progress), eLog 24176.
    Update 4/19: some progress has been made on understanding the issue(s) - will close this ticket once it appears everything is working correctly.
    Update 5/17: long discussion thread in the ggus ticket - it was marked as 'solved' on this date.  (Recent transfers had succeeded.)
    (ii)  4/8: OU_OSCER_ATLAS - still see intermittent job failures with segfault errors.  Site was set off-line 4/11 due to a spike in the failure rate.  Discussed in: 
    https://savannah.cern.ch/support/?120307 (site exclusion), ggus 69558 / RT 19757, eLog 24133/92, https://savannah.cern.ch/bugs/index.php?79656.
    Update 5/16: No conclusive understanding of the seg fault job failures.  Decided to set the site back on-line (5/16) to see if the problem persists.  Awaiting 
    new results (so far no jobs have run at the site).
    Update 5/19: Initial set of jobs at OU_OSCER have completed successfully, so ggus 69558 & RT 19757 were closed,  eLog 25555.  
    https://savannah.cern.ch/support/?120307 was closed.  Will continue to monitor the status of new jobs.
    (iii)  5/17: SWT2_CPB maintenance outage for cluster software updates, reposition a couple of racks, etc.  Expect to complete by late afternoon/ early evening 
    5/18.  eLog 25474, https://savannah.cern.ch/support/index.php?121013.
    Update 5/18: Outage is over - test jobs completed successfully.  Queues back to on-line.  eLog 25553.  http://savannah.cern.ch/support/?121013 closed.
    (iv)  5/17: AGLT2_USERDISK to MAIGRID_LOCALGROUPDISK file transfer failures ("globus_ftp_client: Connection timed out").  Appears to be a network 
    routing problem between the sites.  ggus 70671 in-progress, eLog 25480.
    Update 5/24: NGI_DE helpdesk personnel are working on the problem.  ggus ticket appended with additional info.
    
    • Seems as though a number of older tickets have been cleared out in the past week.
    • New pilot release this past week
    • OU_OSCER - not quite resolved
    • AGLT2 ticket resolved.

DDM Operations (Hiro)

Throughput and Networking (Shawn)

  • NetworkMonitoring
  • https://www.usatlas.bnl.gov/dq2/throughput
  • Now there is FTS logging to the DQ2 log page at: http://www.usatlas.bnl.gov/dq2log/dq2log (type in 'fts' and 'id' in the box and search).
  • last week:
    • No throughput group meeting this week.
    • LHCONE - Jason Z
      • notes being finalized
      • 25 in person, 30 remote. Good participation and discussion of data transfer problems.
      • Connectivity issues in US and Europe were discussed.
      • June 13 - Tier 2 meeting at LHCOPN meeting in DC
      • There will be some US ATLAS participation
    • Shawn - the concept is essentially traffic separation, traffic engineering, and cost savings.
    • How best to do this? Which infrastructure is appropriate?
    • There is an LHCONE architecture document available.
    • Michael - there is an intention to keep the LHCOPN and LHCONE infrastructures separate
  • this week:
    • Philippe's overview.
    • T2-T2 bandwidth issue, OU to IU, still not understood. There was a blip where it shot up, but the behavior is still not understood.
    • Updated network diagrams. Illinois is done.
    • On-going perfsonar maintenance.
    • Overall perfSONAR status - Nagios does report problems; typically the lookup service or a worker is involved.
    • Tomasz working on a portable dashboard, independent of Nagios; the scheduler is completed and working well. Next: a web interface for configuration.
    • Issue: the standalone viewer and the dashboard sometimes differ; related to timeouts? Separate and flag these cases. (Nagios tests until success.)
    • 10G perfSONAR host planning. Oddities seen with 1G to 10G hosts.
    • Internal BW work at AGLT2 - pcache and lsm testing, working well

HTPC configuration for AthenaMP testing (Horst, Dave)

last week
  • Horst - still working on queue config for ITB, getting ready for submission. Doug's jobs to OSCER keep failing with seg faults.
  • Dave - all ready at Illinois. Doug has submitted a lot of test jobs okay. The 16.6.5.5 version had a bug producing corrupted ESD files. Doug on vacation. Progressing.
this week
  • Dave: Doug Smith back so lots of activity with 16.6.5 successful at Illinois. 20 jobs using 16.6.6 - all ran well, with all options. So lots of progress.
  • Horst - CVMFS + Pilots setup
  • Suggestion - look for IO performance measures.

CVMFS

See TestingCVMFS

last week:

  • At MWT2 - in production; setting up monitoring of squid. And at Illinois - switched to production server as advised by John. Both analysis and production.
  • Ordering should be BNL, then CERN.
  • John DeStefano - looking to put together a test to put more load on CVMFS, not yet thoroughly tested.
  • cvmfs_talk will tell you the server in use and the proxy (see the sketch after this list).
  • There still is some traffic on the testbed instance.
  • Doug - the final system is in test mode at CERN-PH. It's fully installed and tested, in the final configuration.
    • Needs a discussion with SIT and ADC migration team
    • CERN IT has box for conditions database.
    • End sites will not see the switchover.
    • Local configuration will need to change slightly - the line that says which repositories to mount: atlas.cern.ch, atlas-condb.cern.ch. A symlink within the repository points to the conditions data.
    • The structure is described in Doug's talk at the last software week.
    • Nearly all releases are in the "new area"
    • Transition plan - next week or two
    • Cloud squads polled
    • Site configuration change announced
    • ADC decision
    • Notification to clouds with the precise config change and dates
  • New configurations are at the stratum server at CERN - so probably should limit testing. Illinois will work.
  • Conditions data test as well
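  • A hedged illustration of the two points above (the repository list in the local CVMFS configuration, and asking the client which server and proxy it is using). The cvmfs_talk sub-commands shown are those of current CVMFS clients; the exact syntax on the 2011-era client may differ, so treat the details as assumptions.

      import subprocess

      # Repositories named above; on a standard client the mounted set is controlled by the
      # CVMFS_REPOSITORIES line in /etc/cvmfs/default.local (assumed layout), e.g.
      #   CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch
      REPOS = ["atlas.cern.ch", "atlas-condb.cern.ch"]

      def cvmfs_query(repo, words):
          # "host info" / "proxy info" are cvmfs_talk sub-commands on current clients;
          # the 2011-era client may use slightly different wording.
          return subprocess.check_output(["cvmfs_talk", "-i", repo] + words).decode()

      if __name__ == "__main__":
          for repo in REPOS:
              print("== %s ==" % repo)
              print(cvmfs_query(repo, ["host", "info"]))    # which Stratum server is in use
              print(cvmfs_query(repo, ["proxy", "info"]))   # which squid proxy is in use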
this week:

Federated Xrootd at sites: Tier 3 (Doug), Tier 2 (Charles)

last week(s):
  • Charles continuing to investigate performance in the xroot client across wide area
  • Looking at the performance tuning.
  • In xrootd it's more complicated to control the parameters; this requires a later version of ROOT.
  • Wei - the N2N conversion module is working at SLAC.
  • Doug:
    • global redirector - Hiro will bring it up within the week; new hardware in a couple weeks.
    • There is a complicated redirector config file - sent by Wei.
    • An xrootd configuration in which the redirector can also act as a proxy; the proxy machine talks to the global redirector. Hiro: third-party transfer? Doug: Andy says it is not in the mainline xrootd source.
    • Andy has made some changes to fix init.d and xrootd init. There have been several bug fixes of late.
    • We're at 3.0.3.
    • New xrootdfs fixes 3.0.4rc1.
    • Andy also has fixes for locking proxy.
    • Asked Andy and Dirk about decision for xrootd yum repo
    • OSG will provide libraries for gridftp server; Tanya has been following all the email regarding the release caches.
    • There are two "official" repos for xrootd
  • Wei: the 3.0.4rc1 RPM is now available from Lukaz. The RPMs are on a website at CERN.
  • Hiro has provided a dq2 plugin called xrdcp - it allows copying between xrootd servers according to the global namespace (a sketch follows this list).
    • Testing at BNL Tier3
    • Will contribute to dq2 repo at CERN
    • Doug is asking for a sub-release; Hiro
  • Doug and Andy - working on the security bits. Gerry Ganis and Brian Bockelman have put in GSI authentication. Andy will write a plugin that can be tested. Servers have a server certificate with "atlas-xrootd" in the service name.
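  • A minimal sketch of what such a copy looks like from the command-line side, assuming the dq2 plugin effectively wraps a standard xrdcp call: because the namespace is global, only the host part of the URL differs between source and destination. Host names and the path are placeholders.

      import subprocess

      # Placeholder endpoints and a placeholder global-namespace path.
      SRC  = "root://source-se.example.org//atlas/dq2/mc10_7TeV/EXAMPLE/file.root"
      DEST = "root://dest-se.example.org//atlas/dq2/mc10_7TeV/EXAMPLE/file.root"

      def xrd_copy(src, dest):
          # Standard xrdcp client invocation (assumption: the dq2 plugin does something similar).
          subprocess.check_call(["xrdcp", src, dest])

      if __name__ == "__main__":
          xrd_copy(SRC, DEST)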
this week:
  • Charles - have been getting xrootd segfaults in the N2N service, infrequently under heavy load.
  • Optimization of WAN client
  • Yves Kemp visit at UC - comparing nodes, also working on wide area storage in the dCache context. NFS v4.1 seems to be doing quite well.

Tier 3 Integration Program (Doug Benjamin)

Tier 3 References:
  • The link to ATLAS T3 working groups Twikis are here
  • T3g Setup guide is here
  • Users' guide to T3g is here
  • US ATLAS Tier3 RT Tickets

last week(s):

  • None - covered above

this week:

Tier 3GS site reports (Doug Benjamin, Joe, AK, Taeksu)

last week:
  • AK - has been working with Jason Zurawski - will be providing a report.
  • Michael is developing a US ATLAS policy for Tier3GS
  • CVMFS has been installed at Bellarmine by Horst

this week:

Site news and issues (all sites)

  • T1:
    • last week:
      • Chimera migration tools under study
      • Networking changes to avoid multiple hops
      • Hiro's plugin for direct xrootd transfers
      • More scalable access to mass data - discussion of a BlueArc pNFS-based solution ongoing; this has been delayed until August.
      • Order for 150 Westmere-based worker nodes via UCI - unexplained delays.
    • this week:
      • Hiro - all is well, SS update.

  • AGLT2:
    • last week(s):
      • Met with MWT2 to discuss LSM and pcache
      • Revisiting testing of direct-access methods
      • Plan to deploy CVMFS with the new Rocks build, likely complete today; rolling re-builds will begin later in the week.
      • Met w/ Dell - possible future options with SSDs. 3 TB disks, on portal in June; future systems, timescale very long though. New Athon systems in August ($/compute good)
      • NexSAN - tests completed, performance not as good. An older SATABeast was used; a 60-disk 4U unit was tested. Size issue - it does not fit into existing racks; density is good.
      • Rocks update - Bob - the upgrade to Condor 7.6 hit issues with hierarchical accounting; the negotiator process would crash under certain conditions. Settled on a build, ready to go. 5 racks at MSU rebuilt; will be put into the Condor pool.
      • Running at half capacity.
      • Tom - working on builds - had a network "disaster" caused by a single bad NIC. A low level of packet loss related to the NIC pre-failing.
    • this week:
      • all WN's re-built
      • CVMFS - ready to turn on site-wide
      • lsm-pcache updated (a few little things found, /pnfs nacc mount needed)
      • dcap - round robin issue evident for calib sites.
      • Want to update dCache to 1.9.12-3; it's now the golden release. Downtime? Wait for a couple of weeks (for the PLHC results to go out).

  • NET2:
    • last week(s):
      • IO progress - 1.6 GB/s between BU and HU, filling the 10G link. Setting up a second link. Seeing 750 MB/s.
      • Now ready to ramp up analysis at HU, 500 jobs (staging in presently).
      • Smooth operating in the past week, but lots on the agenda. Checksum reporting issue.
    • this week:
      • Internal IO ramp-up progress still on-going
      • Found a lot of "get" issues; investigating

  • MWT2:
    • last week:
      • UC: dcache update to address Chimera crashes
      • Sarah - MWT2 queue development
      • Illinois - CVMFS, HTPC
    • this week:
      • UC: no recurrence of Chimera crashes since dcache upgrade to 1.9.5-26
      • Sarah - MWT2 queue development, Condor preemption - successful pilots
      • Illinois - CVMFS testing: testing the new repository from Doug; had to put in a few softlinks, and jobs are now running successfully; testing access to conditions data; there are problems with the pilot; Xin notes that in two weeks Alessandro will have a completed ...; participating in HTPC testing.

  • SWT2 (UTA):
    • last week:
      • Outage - software upgrade on CPB, some rack re-arrangement, CE tweaks. Should be back up next week.
    • this week:
      • Issue with a data server Monday night - resolved.

  • SWT2 (OU):
    • last week:
      • All is well. Pilots on MP queue.
    • this week:
      • Waiting for MP jobs from Doug.

  • WT2:
    • last week(s):
      • All is fine.
    • this week:
      • 38 R410s online this morning. Will update

Carryover issues (any updates?)

Python + LFC bindings, clients (Charles)

last week(s):
  • wlcg-client-lite to be deprecated
  • Still waiting on VDT for wlcg-client, wn-client
  • Question - could CVMFS be used to distribute software more broadly? This would require some serious study.
this week:
  • We've had an update from Alain Roy/VDT - delays because of personnel availability, but progress on build is being made, expect more concrete news soon.

WLCG accounting (Karthik)

last week:
this week:

AOB

last week:
this week:
  • None.


-- RobertGardner - 24 May 2011
