
MinutesFedXrootdOct28

Coordinates

  • Bi-weekly US ATLAS Federated Xrootd meeting
  • Friday, 2-3pm Eastern
  • USA Toll-Free: (877)336-1839
  • USA Caller Paid/International Toll : (636)651-0008
  • ACCESS CODE: 3444755

  • Attending: Rob, Patrick, Wei, Doug, Ofer, Hiro, Andy
  • Apologies: Horst

FAX Status Dashboard

https://uct3-xrdp.uchicago.edu:8443/rsv/

Background

Meeting business

  • Twiki documentation locations
    • Some people have difficulty accessing certain CERN twiki pages; the cause is unknown. Suggestion: host the documentation on the BNL twiki (http, not https), with a link from the CERN twiki to BNL.
  • Meeting time change?
    • The initial doodle poll suggested the Friday slot. There is some interest in using the unused Wednesday slot. Will run another doodle poll.

Xrootd release 3.1.0 deployment

this meeting:
  • Question: when and how do we decide that a release has passed validation and can be deployed? 3.1.0 is the first "mature" release, so sites are encouraged to deploy it for the proxy function; Tier 2s should be more careful. Wei has tested the proxy functionality and has started deploying to WT2 (deployed on one Solaris node, working).
    • Comment by Wei: Xrootd releases come out with some functional validation by stakeholders and large sites, but there is no formal release validation process.
  • RPM updates overwrite /etc/init.d/{xrootd,cmsd}, which contain the LFC environment setup. That setup should go into /etc/sysconfig/xrootd, which survives RPM updates; see the sketch after this list. Patrick will test it.
  • Rob: ready to deploy at the UC T3 proxy. Wei: deployed at the SLAC Tier 2 proxy.
  • Doug gets xrdcp segfaults often. Will supply a core file to Andy.
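A minimal sketch of what such a site-local environment file might contain, assuming the init scripts source /etc/sysconfig/xrootd; all variable values and paths below are illustrative placeholders, not the actual site settings:

# /etc/sysconfig/xrootd -- hypothetical site-local environment, sourced by the
# init scripts and left untouched by RPM updates. All values are placeholders.
export LFC_HOST=lfc.example.org
export LD_LIBRARY_PATH=/opt/lcg/lib:$LD_LIBRARY_PATH
export X509_USER_PROXY=/var/run/xrootd/x509_proxy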

dq2-ls-global and dq2-list-files-global

last meeting:
  • want dq2 client tools that can list files in a dataset in GFN (or local redirector); and check against their existence in FAX or local site.
this meeting:
  • Hiro's poor-man's version can be found at http://www.usatlas.bnl.gov/~hiroito/xrootd/dq2/; it works with containers.
  • Doug: dq2-ls-global waits very long when there are missing files (incomplete datasets); not acceptable in real use. Hiro/Wei: multi-thread the dq2-l*-global tools, or use xprep before checking existence. A sketch of the per-file existence check follows this list.

  • Hiro: will build xprep into dq2-ls-global.
  • Doug: requests from dq2-get/xprep that are still in the queue shouldn't be marked as non-existent. Hiro: looking for a way to consolidate the sites' xprep queue info so that dq2-ls-global can check it. Will discuss details over e-mail.
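As a rough illustration of the per-file existence check such a tool performs, here is a hedged shell sketch using the 3.x-era xrd admin client against the global redirector; the redirector host, the GFN list file, and the matched output string are assumptions:

# Hypothetical: check each GFN from a dataset listing against the global
# redirector. Host, list file, and the grep pattern are placeholders.
REDIRECTOR=glrd.usatlas.org:1094
while read -r gfn; do
    if xrd "$REDIRECTOR" existfile "$gfn" 2>/dev/null | grep -q 'exists'; then
        echo "FOUND   $gfn"
    else
        echo "MISSING $gfn"
    fi
done < gfn-list.txt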

ANALY queue

last meeting:
  • Setting up a panda analy queue to run tests against the federation, therefore using the GFN from within the pilot/lsm, and testing both direct access and stage-in with HC testing.
  • Rob will follow-up.
  • Look up the DBRelease file from the federation.

this meeting:

  • Rob ran interactive test jobs against glrd, MWT2, Illinois, AGLT2 and BNL. The first try against glrd was slow (probably redirected to BNL); not surprisingly, BNL itself is slow. A subsequent test against glrd was faster (probably redirected to other sites).
  • To run in a Panda queue, Dan van der Ster suggested using prun with --pfnList to supply a list of files for the jobs (the list coming from dq2-list-files-global), but there may still be a dependency on the site having the datasets, even though reading points to glrd. Doug: that may not be the case. Rob will try; a sketch of the invocation follows this list.
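A hedged sketch of that invocation, assuming the panda-client prun of the time; the dataset, job script, and output dataset names are placeholders:

# Hypothetical: build a file list from the federation and hand it to prun.
# All names below are placeholders; %IN is prun's input-file substitution.
dq2-list-files-global user.someone.mydataset/ > pfns.txt
prun --exec "python readfiles.py %IN" --pfnList pfns.txt \
     --outDS user.someone.fax_test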

D3PD example

last meeting:

  • Get Shuwei's top D3PD example into HC (Doug?).
  • Doug will follow up in two weeks to see about getting this into HC, and the workbook updated. We need to drive this with real examples, with updated D3PDs, so the examples need updating for Rel 17.

this meeting:

  • Doug: the goal is to get this into an HC test, with sites able to replace the input datasets. It will be used by sites to compare the performance of reading from local and remote storage. Will follow up.

N2N

last week(s)
  • See further https://twiki.cern.ch/twiki/bin/viewauth/Atlas/AtlasXrootdSystems
  • Discussed having an N2N2 interface to allow opaque info to be passed to it. We will not have N2N2 due to architectural difficulty.
  • We discussed making GFN symlinks, but some sites think the amount of data they have is too large, and maintenance is also an issue.
  • Discussed an external mapping of GFN to GUID, in a dedicated DB or a table in LFC. Nobody likes it.
  • Discussed embedding the GUID into the GFN as opaque-like info.
  • Decided to continue improving the current N2N and leave GUID as a future option. Chicago can keep the source of N2N in CVS for now; send updates to Rob. Wei can compile.

  • Doug's use-case: looking up files that exist at BNL but that N2N can't find. Hiro: the code needs a slight change; will do.
  • This probably only happens at BNL; it has to do with the way panda writes outputs to BNL.
  • Advise Doug to test exclusively against T2s, where lookups shouldn't fail.
    • Comment by Doug: isn't this advice really counter-productive? If we are federating storage across all T1 and T2 storage, then we should not pick only cases that we know will succeed; that is a false test. As I said before, I picked an analysis dataset typically used by users (the Top group, anyway).
    • Comment by Wei: agree with Doug. However, the issue is understood and Hiro is working on improving it (time permitting). So we advise Doug to move on and look at the new issues that will pop up.

this meeting:

  • Hiro will look at the BNL-specific modification in the N2N code.
  • When the new dq2-ls-global is ready (with xprep to MYXROOTD), Doug can test failure rates at glrd and various sites.

Integrated checksumming

last week:
  • Wei: with 3.1, checksumming works for the Xrootd proxy even when N2N is in use. Tested at SLAC at both the T2 and T3. Should be straightforward for Posix sites; see the sketch after this list.

  • Not sure about dCache sites; they probably need a plugin, i.e., a callout to obtain the checksum from the dCache system. Andy and Hiro will go through this at CERN.
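For the Posix-site case, a hedged sketch of the server-side piece, assuming the xrootd.chksum directive of that era; the script path is a placeholder, and the script must print the named digest for the given PFN:

# Hypothetical config fragment: have the server compute adler32 checksums via
# a site-provided script (placeholder path).
xrootd.chksum adler32 /opt/xrootd/etc/xrd_adler32.sh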

this meeting:

  • Wei: direct reading and dq2-get (in whatever mode) don't need checksums from remote sites.
  • On-hold

Proxy server

last week:
  • Reading from the native proxy triggers stage-in (reading from the global redirector does not).
  • You want to export the path as nostage for the proxy. This will be in 3.1; see the sketch after this list.
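A hedged sketch of what such a proxy-side export might look like; the exported path is a placeholder:

# Hypothetical proxy config fragment: export the namespace read-only and
# nostage, so reads through the proxy never trigger FRM stage-in.
all.export /atlas nostage r/o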

this meeting:

  • xrdcp segfault: see the 3.1.0 deployment notes above.
  • No other issues.

sss module development

last week(s):
  • Issues in "sss": private/public address NICs; a double free() in sss (Wei got a core dump).
  • The multiple-home/NIC issue is not limited to the "sss" module, but as a broader issue it may only be addressed after the 3.1.0 release.
  • "sss" allows xrootdfs or the proxy to tell the server the actual users, so it can enable authentication/authorization even when XrootdFS is in use; a config sketch follows this list.

  • The dual-NIC issue was fixed by Andy.
  • xrootdfs sometimes crashes at the SLAC T3, but it is not clear what triggers it. There is a guess involving the xrd client library timing out idle connections; mitigations based on that guess went into xrootdfs. It has been in use at SLAC for a while without crashing (maybe just lucky).
  • There might be a problem with the "unix" security module (it does not use the thread-safe versions of getuid/getgid); it will be fixed in 3.1.rc2.
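A hedged sketch of enabling sss on the server side; the library and keytab paths are placeholders, and the keytab itself would be created and distributed with the xrdsssadmin tool:

# Hypothetical config fragment: load the security library and enable the
# simple-shared-secret protocol with a pre-generated keytab (placeholder path).
xrootd.seclib /opt/xrootd/lib/libXrdSec.so
sec.protocol sss -s /etc/xrootd/sss.keytab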

this meeting:

  • The dual-NIC issue isn't completely fixed. Doug has a workaround. Wei will discuss further with Andy.

xprep warnings

last week:
  • Does xprep (and dq2-get -xprep) give a warning if the site's xrootd cluster is not configured for xprep? At the least, we need to give sites enough warning that they don't miss this issue during configuration.
  • The protocol is best-effort.
  • Where should the warning go? You run xprep against the local redirector; if the local cluster isn't configured, then what? The user isn't known.
  • Andy will look into this. In the config file this is exported.
  • Wei will track.

  • A message back to the user indicating whether xprep is configured would be useful. This appears in the cmsd log on the manager host.
  • Can also check for .failed files; again, an admin action, not generally for users.
  • Note that the data server has an internal FRM queue; by default, two requests are processed at a time.
  • Can dq2-ls be used against a local storage system to check for the existence of files without consistency checking?
  • Wei found only 3 failures out of hundreds of dq2-get -xprep tests.
  • Note that one could modify the stage-in script to add retries and easily achieve a 100% success rate.
  • We want to be able to do a dq2-ls and get the namespace back, but that's not possible now without triggering other actions (downloading, consistency checking).
  • Is the dq2 client calculating checksums (i.e., as part of the verification step of dq2-ls)? Wei: I can provide a mechanism to put these into the extended attributes in release 3.1. Andy provides a Posix-like API to extract them, called by XrootdFS, with the same attribute names. Will test this with dq2-ls; a sketch follows this list.
  • Proper checksum evaluation from the FRM will require 3.1.
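As a hedged illustration of the Posix-side extraction, assuming the checksum lands in a user-namespace extended attribute reachable through the XrootdFS mount (the attribute name below is a guess, not the actual one):

# Hypothetical: read a checksum stored as an extended attribute of a file in
# the mounted XrootdFS namespace. Attribute name and path are placeholders.
getfattr -n user.checksum.adler32 /xrootdfs/atlas/some/file.root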

this meeting:

  • maybe dq2-ls-global with MYXROOTD is a solution?

Logging, alerts

last week:
  • Notifying the user of the completeness or failure of dq2-get -xprep: it seems we favor letting users check whether files are ready via a dq2-ls against the local file system/storage. As Doug pointed out, the global file name dq2-ls produces isn't quite identical to the GFN we expected (and that dq2-get produced?). In this case, who is in the best position to push for a fix (with ADC)?
  • A nice-to-have; Andy will investigate how this might be done. Post-3.1, likely.
  • Could be implemented in dq2-ls (Hiro).

  • This is similar to the discussion above.
  • Angelos' dq2-ls method requires a mounted FUSE filesystem.
  • Hiro: could do a 'remote' mode that points at the redirector. Could also use the preload library; see the sketch after this list.
  • Hiro will talk with Angelos to add this as a possible option to dq2.
  • Need a version of dq2-ls that doesn't do the consistency verification; Doug is following up. Hiro could develop something quickly.
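A hedged sketch of the preload-library approach, which lets unmodified POSIX tools address the redirector directly; the library path and URL are placeholders:

# Hypothetical: point ordinary tools at the federation via the Xrootd POSIX
# preload library. Library location and redirector URL are placeholders.
export LD_PRELOAD=/opt/xrootd/lib/libXrdPosixPreload.so
ls 'root://glrd.usatlas.org:1094//atlas/dq2/user/'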

this meeting:

  • maybe dq2-ls-global with MYXROOTD is a solution?

FRM script standardization

last week:
  • Standardize the FRM scripts, including authorization, GUID passing, checksum validation and retries.
  • A few flavors are possible.
  • Set up a twiki page just for this.

  • This brings up again the question of checking the completion of xprep commands. Failures do leave a .failed file. Are there tools to check the FRM queues? Can we provide one?
  • Andy suggests setting up a webpage to monitor the FRM queues, based on the frm_admin command. Hiro will be looking into this.
  • A prototype of doing this (host names, library/binary paths, config file and instance name are placeholders to adapt per site):
for host in dataserver1 dataserver2 dataserver3; do
    ssh "$host" '
        export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/xrootd/lib
        export PATH=$PATH:/opt/xrootd/bin
        frm_admin -c /etc/xrootd/xrootd.cfg -n your_instance query xfrq stage lfn qwt
    '
done | sort -k2 -n -r

this meeting:

cmsd + dCache/xrootd door

last meeting:
  • An updated cmsd that will work with the native dCache/xrootd door (Andy?).
  • A caching mechanism that lets the lookup done by the cmsd N2N plugin be usable by the xrootd door (either the dCache or the Xrootd version) (Andy/Hiro/Wei/?).
  • Redirect to the xrootd-dCache door, which will do the lookup and cache it in memcached. The cmsd will need the N2N plugin. N2N must write to something the dCache sites can read.
  • Hiro will look into this; not on the critical path.

  • On-hold.

this meeting:

  • On-hold.

Authorization plugin

last meeting:

  • A "authorization" plugin for the dCache/xrootd door which uses the cached GFN->LFN information to correctly respond to GFN requests (Hiro/Shawn/?)

  • On-hold.

this meeting:

Sharing Configurations

last meeting:

this meeting:

Monitoring

last meeting:

this meeting:

Ganglia monitoring information

last meeting:

  • Note from Artem: Hello Robert, we have managed to make some progress since our previous talk. We built RPMs; here is a link to the repo: http://t3mon-build.cern.ch/t3mon/, which contains our rebuilt versions of ganglia and gweb. The Ganglia people have released ganglia 3.2 and the new ganglia web (gweb); all our material was rechecked and works with this new software. It is better to install ganglia from our repo; instructions are here: https://svnweb.cern.ch/trac/t3mon/wiki/T3MONHome. About xrootd: we have created a daemonized version of the xrootd-summary-to-ganglia script. It is available at the moment at https://svnweb.cern.ch/trac/t3mon/wiki/xRootdAndGanglia; it sends the xrootd summary metrics (http://xrootd.slac.stanford.edu/doc/prod/xrd_monitoring.htm#_Toc235610398) to the ganglia web interface. We also have an application that works with the xrootd summary stream, but at the moment we are not sure how best to present the fetched data. We collect user activity and accessed files there, all within the site. Last week we installed one more xrd development cluster, and we are going to test whether it is possible to get, and then split, information about file transfers between sites and within one site. WBR, Artem.
  • Deployed at BNL; works.
  • Has anyone tried this out in the past week? It would be good to try it before software week to provide feedback. A sketch of the server-side summary-reporting config follows this list.
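For reference, a hedged sketch of the server-side directive that emits the summary stream such a collector consumes; the destination host:port and interval are placeholders:

# Hypothetical xrootd config fragment: send summary monitoring reports to a
# collector (placeholder host:port) every 60 seconds.
xrd.report t3mon-collector.example.org:8649 every 60s all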

this meeting:

AOB


-- RobertGardner - 27 Oct 2011
