
MinutesJune1

Introduction

Minutes of the Facilities Integration Program meeting, June 1, 2011
  • Previous meetings and background : IntegrationProgram
  • Coordinates: Wednesdays, 1:00pm Eastern
  • Our Skype-capable conference line (press 6 to mute); announce yourself in a quiet moment after you connect:
    • USA Toll-Free: (877)336-1839
    • USA Caller Paid/International Toll : (636)651-0008
    • ACCESS CODE: 3444755

Attending

  • Meeting attendees: Fred, Rob, John DeStefano, Sarah, Shawn, Dave, Patrick, Jason, Alden, Bob, Charles, Armen, Mark, AK, Xin, Nate, Hiro, Saul, Michael, Wensheng, Torre, Wei, Tom, John Brunelle
  • Apologies: Karthik, Horst, Kaushik, ...

Integration program update (Rob, Michael)

  • Special meetings
    • Tuesday (12 noon CDT, weekly - convened by Kaushik) : Data management
    • Tuesday (2pm CDT, bi-weekly - convened by Shawn): Throughput meetings
  • Upcoming related meetings:
  • For reference:
  • Program notes:
    • last week(s)
      • Integration program from this quarter, FY11Q3
      • US ATLAS Facilities meeting on virtual machine provisioning and configuration management, FacilitiesMeetingConfigVM
        • Dates fixed June 15, 16, 2011
        • Will be collecting informal presentations/contributions
        • BNL rooms TBD, site access
    • this week
      • Discussion about upcoming meeting on virtual machines and config management, FacilitiesMeetingConfigVM
      • See the agenda; the format will be interactive, with topics improvised as they are discussed
      • Note that the mode of operation has changed for multi-cloud production: input files will come from remote Tier 1s. There is a performance issue - transfers are fairly slow, causing sites to drain. We have not optimized the network links; this needs to be addressed with ADC. We do expect this to change with LHCONE, but that won't be for a while.
      • Hiro has attempted optimization of FTS settings, but this has not helped.
      • Sarah notes that we are draining anyway; there is a problem delivering pilots. Is the timefloor setting set?
      • Need to investigate any issues with the pilot rate, but also the transfer rates back to the destination cloud. Hiro notes that small-file transfers are dominated by the setup overhead (see the sketch after this list). Transfers to Lyon in particular are problematic.
      • Need some monitoring plots to show this back to ADC. Hiro has some of these.
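
To illustrate the small-file point above, here is a minimal back-of-the-envelope sketch; the overhead, per-stream rate, and link capacity are illustrative assumptions, not values taken from Hiro's monitoring.

```python
# Naive model: each file transfer pays a fixed setup cost (SRM negotiation,
# GridFTP session, checksum) before any data moves, so small files spend most
# of their wall time in overhead. Running more files concurrently recovers
# throughput until the shared link saturates. All numbers are assumptions.

def effective_rate_MBps(file_size_MB, setup_s, per_stream_MBps,
                        n_concurrent=1, link_cap_MBps=1250.0):
    """Aggregate rate (MB/s) for a steady stream of equally sized files."""
    per_file_time = setup_s + file_size_MB / per_stream_MBps   # seconds per file
    aggregate = n_concurrent * file_size_MB / per_file_time
    return min(aggregate, link_cap_MBps)

for size in (1, 10, 100, 1000):                                # file size in MB
    one = effective_rate_MBps(size, setup_s=10.0, per_stream_MBps=50.0)
    many = effective_rate_MBps(size, setup_s=10.0, per_stream_MBps=50.0,
                               n_concurrent=20)
    print(f"{size:5d} MB files: {one:7.2f} MB/s with 1 in flight, "
          f"{many:7.2f} MB/s with 20")
```

With these assumed numbers a stream of 1 MB files moves at well under 1 MB/s per transfer slot, which is consistent with the observation that raising the FTS concurrency, rather than per-file tuning, is what helps.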

OSG Opportunistic Access (Rob)

last week(s)
  • HCC - no significant issues
  • HCC - SLAC will meet this week to discuss security requirements
  • Engage - next week possibly
this week
  • HCC @ SLAC: security requirements discussion thread last week - articulation of Glidein-WMS requirements from Burt; meeting today.
  • HCC @ UTA: enabled.
  • HCC @ NE: not yet.
  • Engage - setting up conf. call to discuss support issues and requirements

Operations overview: Production and Analysis (Kaushik)

  • Production reference:
  • last meeting(s):
    • Expect to be quite full for a while to come.
    • Aborted tasks due to calibration mistake. New mc11 started today - one task in the US
    • mc10b - finished, but there were issues in the FR and IT clouds with reconstruction (none done at T2s due to the high I/O requirements). MWT2 was loaned to FR and IT to work off the backlog.
    • May even be loaned to the German T1 as well, to meet physics demand.
    • More MC in the pipeline.
    • Last night: production dropped off because nqueue was set to zero. Xin reports an email exchange with Rod and others. What happened? Alden: a database updater problem in the pilot controller attempted to write NULL values, which are not allowed; a protection was put in place and the change was backed out (a sketch of this kind of guard follows this list).
    • More user analysis coming
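
A minimal sketch of the kind of guard described above, assuming the updater builds its UPDATE statements from a dictionary of fields; the table and column names are illustrative, not the actual pilot-controller schema.

```python
# Hypothetical guard: drop unset (None) fields before building an UPDATE, so
# columns the database declares NOT NULL (such as a queue depth) are never
# overwritten with NULL. Uses DB-API parameter placeholders.

def build_update(table, fields, key_column, key_value):
    """Return (sql, params) that updates only the fields with real values."""
    clean = {col: val for col, val in fields.items() if val is not None}
    if not clean:
        return None, ()                      # nothing safe to update
    assignments = ", ".join(f"{col} = %s" for col in clean)
    sql = f"UPDATE {table} SET {assignments} WHERE {key_column} = %s"
    return sql, tuple(clean.values()) + (key_value,)

# Example: the None-valued field is skipped instead of nulling the column.
sql, params = build_update("schedconfig",
                           {"nqueue": 200, "comment_": None},
                           "siteid", "SOME_SITE")
print(sql)      # UPDATE schedconfig SET nqueue = %s WHERE siteid = %s
print(params)   # (200, 'SOME_SITE')
```
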
  • this week:
    • Mark believes a number of factors are contributing to MWT2 not staying full.
    • There was a brokerage issue last weekend related to the reported available space (Rod Walker found the issue) that prevented jobs from running in the US cloud; a change in schedconfig fixed the problem (sketched below).
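
A minimal sketch of the kind of free-space test involved in brokerage; the threshold, units, and field names are assumptions for illustration, not the actual PanDA brokerage code or the schedconfig value that was wrong.

```python
# Hypothetical brokerage-style check: a site is skipped when its reported
# available space is below the required margin. If the reported value (or its
# units) is wrong, every queue in a cloud can fail the test and the cloud
# drains, as happened over the weekend. All names and numbers are illustrative.

def brokerable(site, min_free_gb=200):
    free_gb = site.get("space_available_gb")
    if free_gb is None:
        return False                          # no value reported: skip the site
    return free_gb >= min_free_gb

sites = [
    {"name": "GOODSITE", "space_available_gb": 15000},
    {"name": "BADVALUE", "space_available_gb": 0.015},   # e.g. TB misread as GB
]
for s in sites:
    print(s["name"], "brokerable" if brokerable(s) else "skipped")
```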

Data Management and Storage Validation (Armen)

Shifters report (Mark)

  • Reference
  • last meeting: Operations summary:
    Yuri's summary from the weekly ADCoS meeting (this week provided by Jarka Schovancova):
    http://indico.cern.ch/getFile.py/access?contribId=1&resId=0&materialId=slides&confId=140582
    
    1)  In general several older Savannah DDM tickets were resolved and/or closed.  Thanks.
    2)  5/19: SMU_LOCALGROUPDISK - DDM failures with "error during TRANSFER_FINALIZATION file/user checksum mismatch."  Justin at SMU thinks this 
    issue has been resolved.  Awaiting confirmation so ggus 70737 can be closed.  eLog 25537.
    3)  5/19: WISC_LOCALGROUPDISK - DDM failures with [GRID_FTP_ERROR] globus_ftp_client : the server responded with an error500."  Wen reported 
    the problem has been fixed (configured the BestMan server to obtain checksum values).  Site was blacklisted in DDM while the issue was being addressed - 
    since removed (https://savannah.cern.ch/support/?121061).  ggus 70734 closed, eLog 25664.
    4)  5/23: New pilot release from Paul (version SULU 47d).  Details here:
    http://www-hep.uta.edu/~sosebee/ADCoS/pilot-version_SULU_47d.html
    5)  5/23: AGLT2 network incident - from Shawn: We had a dCache storage node reorder its NICs, breaking the bonding configuration.  This has been fixed 
    now. To prevent a re-occurrence we assigned HWADDR in the relevant /etc/sysconfig/network-scripts/ifcfg-ethX files.  eLog 25692.
    6)  5/24: NET2 - DDM transfer errors. Saul reported that the underlying issue was a networking problem that caused a gatekeeper to become overloaded. 
    Thinks the issue is now resolved. https://gus.fzk.de/ws/ticket_info.php?ticket=70844, eLog 25722.  
    Savannah site exclusion: https://savannah.cern.ch/support/?121125.
    7)  5/24: Charles announced an updated version of the pandamover-cleanup.py script.  See:
    http://repo.mwt2.org/viewvc/admin-scripts/lfc/pandamover-cleanup.py, and the talk by Tadashi regarding updated procedures for pandamover cleaning: 
    https://indico.cern.ch/conferenceDisplay.py?confId=140214.
    
    Follow-ups from earlier reports:
    (i)  4/8: NERSC - file transfer errors.  See ggus 69526 (in-progress), eLog 24176.
    Update 4/19: some progress has been made on understanding the issue(s) - will close this ticket once it appears everything is working correctly.
    Update 5/17: long discussion thread in the ggus ticket - it was marked as 'solved' on this date.  (Recent transfers had succeeded.)
    (ii)  4/8: OU_OSCER_ATLAS - still see intermittent job failures with segfault errors.  Site was set off-line 4/11 due to a spike in the failure rate.  Discussed in: 
    https://savannah.cern.ch/support/?120307 (site exclusion), ggus 69558 / RT 19757, eLog 24133/92, https://savannah.cern.ch/bugs/index.php?79656.
    Update 5/16: No conclusive understanding of the seg fault job failures.  Decided to set the site back on-line (5/16) to see if the problem persists.  Awaiting 
    new results (so far no jobs have run at the site).
    Update 5/19: Initial set of jobs at OU_OSCER have completed successfully, so ggus 69558 & RT 19757 were closed,  eLog 25555.  
    https://savannah.cern.ch/support/?120307 was closed.  Will continue to monitor the status of new jobs.
    (iii)  5/17: SWT2_CPB maintenance outage for cluster software updates, reposition a couple of racks, etc.  Expect to complete by late afternoon/ early evening 
    5/18.  eLog 25474, https://savannah.cern.ch/support/index.php?121013.
    Update 5/18: Outage is over - test jobs completed successfully.  Queues back to on-line.  eLog 25553.  http://savannah.cern.ch/support/?121013 closed.
    (iv)  5/17: AGLT2_USERDISK to MAIGRID_LOCALGROUPDISK file transfer failures ("globus_ftp_client: Connection timed out").  Appears to be a network 
    routing problem between the sites.  ggus 70671 in-progress, eLog 25480.
    Update 5/24: NGI_DE helpdesk personnel are working on the problem.  ggus ticket appended with additional info.
    
    • Seems as though a number of older tickets have been cleared out in the past week.
    • New pilot release this past week
    • OU_OSCER - not quite resolved
    • AGLT2 ticket resolved.
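
Regarding item 5 in the summary above (the AGLT2 NIC reorder), a minimal sketch of that kind of HWADDR pinning is shown below; the paths follow the RHEL/SL network-scripts convention, and this is an illustrative script, not the one actually used at AGLT2.

```python
#!/usr/bin/env python
# Illustrative only: pin each ethX name to its current MAC by writing a
# HWADDR= line into the matching /etc/sysconfig/network-scripts/ifcfg-ethX
# file, so a driver-level NIC reorder cannot silently break the bonding
# configuration. Run as root on a node whose interface order is known-good.

import glob
import os
import re

SCRIPTS_DIR = "/etc/sysconfig/network-scripts"

def current_mac(dev):
    """Read the interface's MAC address from sysfs."""
    with open(f"/sys/class/net/{dev}/address") as fh:
        return fh.read().strip().upper()

for cfg in sorted(glob.glob(os.path.join(SCRIPTS_DIR, "ifcfg-eth*"))):
    dev = os.path.basename(cfg)[len("ifcfg-"):]
    if not os.path.isdir(f"/sys/class/net/{dev}"):
        continue                       # config exists but interface is absent
    mac = current_mac(dev)
    with open(cfg) as fh:
        text = fh.read()
    if re.search(r"^HWADDR=", text, flags=re.MULTILINE):
        text = re.sub(r"^HWADDR=.*$", f"HWADDR={mac}", text, flags=re.MULTILINE)
    else:
        text = text.rstrip("\n") + f"\nHWADDR={mac}\n"
    with open(cfg, "w") as fh:
        fh.write(text)
    print(f"{dev}: pinned to {mac}")
```
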
  • this meeting: Operations summary:
    Yuri's summary from the weekly ADCoS meeting (this week provided by Jarka Schovancova):
    http://www-hep.uta.edu/~sosebee/ADCoS/ADCoS-status-summary-5_16_2011.html
    
    1)  5/26: NET2 - file transfer errors to DATADISK.  (Issue related to the performance of checksum calculations, Bestman crashes, etc.)  See discussion 
    thread in https://ggus.eu/ws/ticket_info.php?ticket=70973, eLog 25826.
    2)  5/27: New pilot version from Paul (SULU 47e), produced to help with a production problem at LYON.  This had the effect of generating thousands of 
    errors at two FR cloud sites (see for example https://ggus.eu/ws/ticket_info.php?ticket=71032).  Problem under investigation.
    3)  5/28: Job brokerage was broken in the US & IT clouds.  Issue was a disk space check against an incorrect value.  Problem resolved.
    4)  5/29: MWT2_UC - job failures with transfer timeout errors.  From Rob: Not a site problem - caused by low concurrency settings for FTS instances at FR, 
    CERN for transfers from MWT2 endpoints.  ggus 71036 closed, eLog 25993.
    5)  5/31: ADCR database maintenance (switch db services back to original hardware - see eLog 25529 and thread therein for original issue).  Affected 
    services: ADCR_DQ2, ADCR_DQ2_LOCATION, ADCR_DQ2_TRACER, ADCR_PANDA, ADCR_PANDAMON, ADCR_PRODSYS, ADCR_AMI.  
    Duration ~one hour.  Work completed as of ~4:00 a.m. CST.  eLog 25949/50.
    6)  5/30-5/31: OU_OCHEP_SWT2 file transfer failures (two issues: (i) incorrect checksums, (ii) files with zero bytes size).  Horst reported that the issue 
    is resolved.  https://rt.racf.bnl.gov/rt/Ticket/Display.html?id=20106 closed, eLog 25943.
    7)  5/31: From Sarah at MWT2_IU: We have a storage pool off-line with disk issues at MWT2_IU.  We have paused the scheduler to prevent new jobs from 
    starting while it is down, and are working to bring it back online. We may see some transfers fail for files on the pool while it is off-line.
    8)  5/31: UTD-HEP set off-line at request of site admin (cleaning dark data from the storage).  eLog 25944.
    9)  6/1: Start of TAG reprocessing campaign (p-tag: p586).  From Jonas Strandberg: This will be a light-weight campaign starting from the merged AODs 
    and producing just the TAG and the FASTMON as output which are both very small.
    
    Follow-ups from earlier reports:
    (i)  5/17: AGLT2_USERDISK to MAIGRID_LOCALGROUPDISK file transfer failures ("globus_ftp_client: Connection timed out").  Appears to be a network 
    routing problem between the sites.  ggus 70671 in-progress, eLog 25480.
    Update 5/24: NGI_DE helpdesk personnel are working on the problem.  ggus ticket appended with additional info.
    Update 5/31 from Shawn: I am marking this as resolved but the solution seems to be that the remote site only has commercial network peering and will 
    be unable to connect to AGLT2 and WestGrid because of this. Not sure if the systems involved have been configured to limit their interactions to reachable 
    sites.  ggus 70671 closed,  eLog 25905.
    (ii)  5/19: SMU_LOCALGROUPDISK - DDM failures with "error during TRANSFER_FINALIZATION file/user checksum mismatch."  Justin at SMU thinks this 
    issue has been resolved.  Awaiting confirmation so ggus 70737 can be closed.  eLog 25537.
    Update 5/27: resolution of the problem confirmed - ggus 70737 closed.
    (iii)  5/24: NET2 - DDM transfer errors. Saul reported that the underlying issue was a networking problem that caused a gatekeeper to become overloaded. 
    Thinks the issue is now resolved. https://gus.fzk.de/ws/ticket_info.php?ticket=70844, eLog 25722.  Savannah site exclusion: 
    https://savannah.cern.ch/support/?121125.
    

DDM Operations (Hiro)

Throughput and Networking (Shawn)

  • NetworkMonitoring
  • https://www.usatlas.bnl.gov/dq2/throughput
  • Now there is FTS logging to the DQ2 log page at: http://www.usatlas.bnl.gov/dq2log/dq2log (type in 'fts' and 'id' in the box and search).
  • last week:
    • Philippe's overview.
    • T2-T2 bandwidth issue, OU to IU, still not understood. There was a blip where it shot up, but the cause remains unclear.
    • Updated network diagrams. Illinois is done.
    • On-going perfsonar maintenance.
    • Overall perfSONAR status - Nagios does report problems; typically the lookup service or a worker process.
    • Tomasz is working on a portable dashboard, independent of Nagios; the scheduler is completed and working well. Next: a web interface for configuration.
    • Issue: the standalone viewer and the dashboard sometimes disagree; possibly related to timeouts? Separate and flag these cases. (Nagios retries tests until success.)
    • 10G perfSONAR host planning. Oddities when mixing 1G and 10G hosts.
    • Internal BW work at AGLT2 - pcache and lsm testing, working well
  • this week:
    • Off week
    • Installation of network monitoring infrastructure in the Italian cloud. Depends on Alessandro's time.

CVMFS

See TestingCVMFS

last week:

this week:

  • Xin has checked with Alessandro regarding the final form of the CVMFS repository - it is still being tested.
  • The LCG VO_ATLAS_SW_DIR variable will be used/assumed by the pilot.
  • The pilot wrapper needs to change to recognize the LCG environment (see the sketch after this list). Suggests testing at Illinois.
  • Time frame from Alessandro - depends on testing the repo
  • Jose is writing a new pilot wrapper for OSG sites
  • Dave - will serve as a test site; notes that the pilot and the new layout are not in sync at the moment.
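
A minimal sketch of the kind of detection the wrapper needs, assuming the standard CVMFS mount point and OSG application area; these paths and the fallback order are assumptions for illustration, not the wrapper Jose is writing.

```python
# Illustrative environment detection for a pilot wrapper: on LCG sites
# VO_ATLAS_SW_DIR is already exported; on OSG sites the wrapper would have to
# set it itself, e.g. to the CVMFS repository, before launching the pilot.

import os

def atlas_sw_dir(cvmfs_repo="/cvmfs/atlas.cern.ch/repo/sw"):
    # LCG convention: the site exports VO_ATLAS_SW_DIR for the worker node.
    sw_dir = os.environ.get("VO_ATLAS_SW_DIR")
    if sw_dir and os.path.isdir(sw_dir):
        return sw_dir
    # OSG site with CVMFS mounted: point the variable at the repository.
    if os.path.isdir(cvmfs_repo):
        os.environ["VO_ATLAS_SW_DIR"] = cvmfs_repo
        return cvmfs_repo
    # Last resort: a locally installed release area under $OSG_APP (assumed layout).
    osg_app = os.environ.get("OSG_APP")
    if osg_app:
        candidate = os.path.join(osg_app, "atlas_app", "atlas_rel")
        if os.path.isdir(candidate):
            os.environ["VO_ATLAS_SW_DIR"] = candidate
            return candidate
    raise RuntimeError("no ATLAS software area found on this worker node")

if __name__ == "__main__":
    print(atlas_sw_dir())
```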

Federated Xrootd at sites: Tier 3 (Doug), Tier 2 (Charles)

last week(s):
  • Charles - has been getting infrequent xrootd segfaults in the N2N service under heavy load
  • Optimization of WAN client
  • Yves Kemp visited UC - comparing nodes, and also working on wide-area storage in the dCache context. NFS v4.1 seems to be doing quite well.
this week:

Tier 3 Integration Program (Doug Benjamin)

Tier 3 References:
  • Links to the ATLAS T3 working group TWikis are here
  • T3g Setup guide is here
  • Users' guide to T3g is here
  • US ATLAS Tier3 RT Tickets

last week(s):

  • Doug not here
this week:

Tier 3GS site reports (Doug Benjamin, Joe, AK, Taeksu)

last week:
  • None reported

this week:

Site news and issues (all sites)

  • T1:
    • last week:
      • Hiro - all is well, SS update.
    • this week:

  • AGLT2:
    • last week(s):
      • all WNs re-built
      • CVMFS - ready to turn on site-wide
      • lsm-pcache updated (a few little things found, /pnfs nacc mount needed)
      • dcap - round robin issue evident for calib sites.
      • Want to update dCache to 1.9.12-3, which is now golden; downtime? Wait a couple of weeks (until the PLHC results go out).
    • this week:

  • NET2:
    • last week(s):
      • Internal I/O ramp-up still on-going
      • Found a lot of "get" issues; investigating
    • this week:

  • MWT2:
    • last week:
      • UC: no recurrence of Chimera crashes since dcache upgrade to 1.9.5-26
      • Sarah - MWT2 queue development, Condor preemption - successful pilots
      • Illinois - CVMFS testing: testing the new repository from Doug; had to put in a few softlinks to get things running successfully; testing access to conditions data; there are problems with the pilot; Xin notes that in two weeks Alessandro will have a completed repository; participating in HTPC testing
    • this week:
      • testing srvadmin 6.5.0 x86_64 RPMs from Dell, as well as 6.3.0-0001 firmware on the PERC6E/I cards in our R710s, to reduce sense errors

  • SWT2 (UTA):
    • last week:
      • Issue with a data server Monday night - resolved.
    • this week:

  • SWT2 (OU):
    • last week:
      • Waiting for MP jobs from Doug.
    • this week:

  • WT2:
    • last week(s):
      • 38 R410s online this morning. Will update
    • this week:

Carryover issues (any updates?)

Python + LFC bindings, clients (Charles)

last week(s):
  • We've had an update from Alain Roy/VDT - delays because of personnel availability, but progress on the build is being made; expect more concrete news soon.
this week:

WLCG accounting (Karthik)

last week:

this week:

HTPC configuration for AthenaMP testing (Horst, Dave)

last week
  • Dave: Doug Smith is back, so lots of activity; 16.6.5 ran successfully at Illinois. 20 jobs using 16.6.6 all ran well, with all options. Lots of progress.
  • Horst - CVMFS + Pilots setup
  • Suggestion - look for IO performance measures.
this week

AOB

last week:

this week:
  • None.


-- RobertGardner - 31 May 2011
