
MinutesFeb9

Introduction

Minutes of the Facilities Integration Program meeting, Feb 9, 2011
  • Previous meetings and background : IntegrationProgram
  • Coordinates: Wednesdays, 1:00pm Eastern
    • 866-740-1260, Access code: 7027475

Audio Details: Dial-in Number:
U.S. & Canada: 866.740.1260
U.S. Toll: 303.248.0285
Access Code: 7027475; Chair passcode: 8734
Registration Link: https://cc.readytalk.com/r/bd2w3deu2kkg

Attending

  • Meeting attendees: John, Dave, Karthik, Aaron, Rob, Nate, Kaushik, AJ, Fred, Saul, Sarah, Alex U, Shawn, Patrick, Tom, Rik, Alden, Bob, Hiro, Mark, Wei, Horst, Doug
  • Apologies: Michael, Joe I

Integration program update (Rob, Michael)

  • IntegrationPhase16 NEW
  • Special meetings
    • Tuesday (12 noon CDT) : Data management
    • Tuesday (2pm CDT): Throughput meetings
  • Upcoming related meetings:
  • Program notes:
    • last week(s)
      • Next face-to-face facilities meeting co-located with OSG All Hands meeting (March 7-11, Harvard Medical School, Boston), http://ahm.sbgrid.org/. US ATLAS agenda will be here.
      • Starting up CVMFS evaluation in production Tier 2 settings: SWT2_OU, MWT2 participating, AGLT2 and NET2 (possible). Instructions to be developed here: TestingCVMFS.
      • Updates to SiteCertificationP16
      • A new OSG release will be coming out shortly that will require a site update (see the OSG 1.2.17 task under this week)
    • this week
      • Capacity summary coming
      • Actionable tasks in the integration program - the OSG 1.2.17 update, and TestingCVMFS (new installation notes provided by Nate Y - thanks); a minimal CVMFS client-configuration sketch follows this list
      • Expect to run out of production jobs later in the week - good time for downtimes (email usatlas-t2-l@lists.bnl.gov).
      • News from the ADC retreat in Napoli; Kors' summary slides are at this week's ADC Weekly
        • In particular, look at the summary talks from Friday. We had a lot of discussion about how to manage the huge data volume expected in 2011 and 2012.
        • We are thinking about an extreme PD2P, where very little data will be pre-placed and kept on disk (one copy of RAW and two copies of final derived datasets, ATLAS-wide). The rest of the storage will be temporary cache space, centrally managed. Without this, we will run out of space a few months after the LHC starts (note: the Tier 1s are already completely full with 2010 data).
        • Important user analysis change: looping jobs will be killed after 3 hours (not 12 hours). Dan reported that ~20% CPU was being wasted on looping jobs.
        • LFC consolidation, PRODDISK at T1, data migrations and deletions... please look at the summary talks.
      • Looping jobs (evidently inactive jobs) may be mostly prun.
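
For orientation, a minimal CVMFS client configuration of the kind the TestingCVMFS notes cover might look like the sketch below. This is a hedged illustration only, not the TestingCVMFS recipe: the squid proxy hostname and the cache/quota values are assumptions, and the authoritative settings are those in the TestingCVMFS instructions.

    # /etc/cvmfs/default.local -- illustrative sketch only
    CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch
    CVMFS_HTTP_PROXY="http://squid.example.edu:3128"   # local site squid (assumed hostname)
    CVMFS_QUOTA_LIMIT=20000                            # local cache quota in MB (illustrative)
    CVMFS_CACHE_BASE=/var/cache/cvmfs2                 # local cache directory

    # quick sanity check after restarting the cvmfs/autofs services:
    ls /cvmfs/atlas.cern.ch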

Tier 3 Integration Program (Doug Benjamin, Rik Yoshida)

Tier 3 References:
  • The link to ATLAS T3 working groups Twikis are here
  • T3g Setup guide is here
  • Users' guide to T3g is here

General Tier 3 issues

last week(s):
  • Rik reports 47 possible sites: 18 are functional, 6 are setting up, 7 have received hardware, 1 is in the planning stage; the majority are using the wiki to set up their sites.
this week:
  • Continuing to update the list of Tier 3s - web page with status
  • Doug was at Napoli
  • Xrootd-OSG-ATLAS meeting yesterday - progress from VDT on rpm packaging, gridftp-plugin, basic configuration

Tier 3 production site issues

  • Bellamine University (AK):
    • last week(s):
      • Met with IT director about the packet shaper, working to get a bypass.
    • this week:
      • IT director is working with a consulting firm to find a solution; the transfer hangs have been eliminated.
      • Horst - running transfer tests, no hangs; about 5 MB/s in each direction, throttled slightly.
      • Hope to get to 20 MB/s in the near term.

  • UTD (Joe Izen)
    • last week(s):
      • No LFC errors this week, for the first time
      • In production most of the week
      • Caught 11 lost-heartbeat jobs overnight. Currently offline due to the power blackouts in Texas.
    • this week

Operations overview: Production and Analysis (Kaushik)

Data Management & Storage Validation (Kaushik)

Shifters report (Mark)

  • Reference
  • last meeting: Operations summary:
    Yuri's summary from the weekly ADCoS meeting:
    http://www-hep.uta.edu/~sosebee/ADCoS/ADCoS-status-summary-1_31_11.html
    
    1)  1/26: data transfer errors from SLACXRD_USERDISK to MWT2_UC_LOCALGROUPDISK ("source file doesn't exist").  From Wei: I think you can close this ticket. There are only a few missing files and they do not exist at WT2. I don't know why FTS was asked to transfer them (maybe they were there when the request was submitted?). Repeated transfer requests created lots of failures simply because the files don't exist.  ggus 66613 closed, eLog 21535.
    2)  1/27: all U.S. sites received an RT & ggus ticket regarding the issue "WLCG sites not publishing GlueSiteOtherInfo=GRID=WLCG value."  Consolidated into a single goc ticket, https://ticket.grid.iu.edu/goc/viewer?id=9871.  Will be resolved in a new OSG release currently being tested in the ITB.
    3)  1/27: from Bob at AGLT2 - At 1pm EST AGLT2 had a dCache issue.  The number of available postgres connections had been dropped from 1000 to 300 during a pgtune a few days ago, and this was not noticed until the failure occurred.  Unfortunately, this caused a LOT of job failures during the last 3 hours.
    Later that evening / next morning:
    We had some sort of "event" on our gatekeeper around 11pm last night. Ultimately, condor was shot, and our job load was lost. I have disabled auto-pilots this morning to both AGLT2 and ANALY_AGLT2 while we investigate the cause.  Indications of hitting an open-file limit on the system were found, and we need to understand the cause.  Queues were set off-line.  Later Friday afternoon, from Bob: We increased several sysctl parameters on gate01 dealing with the total number of available file handles.  Issues resolved, queues set back on-line.  eLog 21583.  (A sketch of the relevant sysctl settings follows this summary.)
    4)  1/30: AGLT2 - job (stage-out: "Internal name space timeout lcg_cp: Invalid argument") & file transfer errors ("failed to contact on remote SRM [httpg://head01.aglt2.org:8443/srm/managerv2]").  From Shawn: This morning around 8 AM Eastern time our postgresql server for the dCache namespace (Chimera) filled its partition with logging info (over 10 GB in the last 24 hours). This was traced to multiple attempts to re-register a few files over and over.  We have cleaned up space on the partition and modified the logging to be "terse" so this won't happen as easily in the future.  ggus 66794 in-progress, eLog 21616.
    5)  2/1: Maintenance outage at AGLT2 - from Bob: The outage will include all of Condor, as well as a dCache outage and upgrade.
    Update 2/1 late afternoon: outage extended in OIM to 10 p.m. EST.  Later, early a.m. 2/2: work completed, test jobs were successful, queues set back on-line.  eLog 21696.
    6)  2/2: UTD-HEP set off-line at request of site admin.  Rolling blackouts in the D-FW area (unfortunately).  eLog 21702.
    7)  2/2: WISC_DATADISK - failing functional tests with file transfer errors like " Can't mkdir: /atlas/xrootd/atlasdatadisk/step09]."  ggus 66897 in-progress, eLog 21695.
    
    Follow-ups from earlier reports:
    (i)  12/17, 12/20:  ANALY_SWT2_CPB was auto-blacklisted twice.  Hammercloud test jobs were failing due to the fact that a required db release file was not yet transferred to the site when the first jobs started up.  Once the transfer completed the test jobs began to complete successfully.  Discussion underway about how to address this issue.
    (ii)  12/21: NERSC file transfer errors - "failed to contact on remote SRM [httpg://pdsfdtn1.nersc.gov:62443/srm/v2/server]."  ggus 65617 in-progress, eLog 20810.
    Update 1/30 from a shifter: No more problems seen - closing this ticket (ggus 65617).
    (iii)  1/9: AGLT2 - low-level of job failures with the error "Put error: lfc_creatg failed with (2704, Bad magic number)."  Site is investigating.
    (iv)  1/14: AGLT2_PHYS-SM - file transfer failures due to "[USER_ERROR] source file doesn't exist."  ggus 66150 in-progress, https://savannah.cern.ch/bugs/?77036.  Also https://savannah.cern.ch/bugs/index.php?77139.
    1/25: Update from Shawn:
    I have declared the 48 files as "missing" to the consistency service. See https://twiki.cern.ch/twiki/bin/view/Atlas/DDMOperationProcedures#In_case_some_files_were_confirme and you can track the "repair" at http://bourricot.cern.ch/dq2/consistency/
    Let me know if there are further issues.
    (v)  1/19: UTD-HEP - job failures with missing input file errors - for example: "19 Jan 07:07:10|Mover.py | !!FAILED!!2999!! Failed to transfer HITS.170554._000123.pool.root.2: 1103 (No such file or directory)."  ggus 66284, eLog 21346.
    Update 1/27: from the site admin: These errors seem to have been resolved by the LFC cleaning -- closing the ticket.  eLog 21612.
    (vi)  1/19: BNL - user reported a problem while attempting to download files from the site - for example: "httpg://dcsrm.usatlas.bnl.gov:8443/srm/managerv2: CGSI-gSOAP running on t301.hep.tau.ac.il reports Error reading token data header: Connection closed."  ggus 66298.  From Hiro:
    There is a known issue for users with the Israel CA having problems accessing BNL and MWT2. This is being actively investigated right now. Until it is completely resolved, users are advised to submit a DaTRI request to transfer their datasets to some other sites (LOCALGROUPDISK area) for downloading.
    (vii)  1/21: SLACXRD file transfer errors - "failed to contact on remote SRM [httpg://osgserv04.slac.stanford.edu:8443/srm/v2/server]."  Issue was reported to be fixed by Wei, but the errors reappeared later the same day, so the ticket (ggus 66346) was re-opened.  eLog 21409.
    Update 1/30 from a shifter: No more errors in the last 12 hours, 400 successful transfers, maybe migration comes to an end.  ggus 66346 closed, eLog 21611.
    (viii)  1/21: File transfer errors from AGLT2 to MWT2_UC_LOCALGROUPDISK with source errors like "FTS State [Failed] FTS Retries [1] Reason [SOURCE error during TRANSFER_PREPARATION phase:[USER_ERROR] source file doesn't exist]."  https://savannah.cern.ch/bugs/index.php?77251, eLog 21440.
    (ix)  1/24: AGLT2 job & file transfer errors - "[SOURCE error during TRANSFER_PREPARATION phase: [HTTP_TIMEOUT] failed to contact on remote SRM [httpg://head01.aglt2.org:8443/srm/managerv2]. Givin' up after 3 tries]."  ggus 66450 in-progress, eLog 21488.  Update from Bob at AGLT2:
    Just restarted dcache services on head01. rsv srmcp-readwrite had been red. Hopefully that will clear the issue.  Since the queues at the site 
    (analy_, prod) had been set offline (ADC site exclusion ticket: https://savannah.cern.ch/support/?118828) test jobs were submitted, and they completed successfully (eLog 21497).  Are we ready to close this ticket?
    Update 1/26: The site team restarted dcache services on head01 (rsv srmcp-readwrite had been red). Test jobs completed OK.  ggus 66450 closed, eLog 21526.
    (x)  1/25: SLACXRD_DATADISK file transfer errors - "[Failed] FTS Retries [1] Reason [DESTINATION error during TRANSFER_PREPARATION phase: [NO_SPACE_LEFT] No space found with at least 2895934054 bytes of unusedSize]."  http://savannah.cern.ch/bugs/?77346.
    Update 1/26 from Wei: this can be ignored. I was moving data among storage nodes and was filling the quota fast.
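
    As a reference for the gatekeeper open-file-limit problem in item 3 above, the kind of sysctl change described might look like the sketch below. The actual parameters and values used on gate01 are not recorded in these minutes; the numbers here are purely illustrative.

        # inspect the current system-wide open-file limit and usage
        sysctl fs.file-max
        cat /proc/sys/fs/file-nr          # allocated / unused / maximum handles
        # raise the limit (illustrative value) and make it persistent
        sysctl -w fs.file-max=1000000
        echo "fs.file-max = 1000000" >> /etc/sysctl.conf
        # per-process limits ("nofile" in /etc/security/limits.conf) may also need raising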
     
  • this meeting: Operations summary:
    Yuri's summary from the weekly ADCoS meeting:
    http://indico.cern.ch/getFile.py/access?contribId=0&resId=0&materialId=0&confId=125577
    
    1)  2/3: AGLT2 - job failures (stage-out errors) & DDM transfer failures.  From Shawn: Last night I was working on getting the new head02 set up to be as similar as possible to the old head02.
    I installed yum-autoupdate as part of the process.  This morning it upgraded postgres90 from 9.0.2-1 to 9.0.2-2.  The problem is that the version on head02 is custom built.  This caused
    postgresql to shut down around 6:50 AM.  I reverted, put the exclude into /etc/yum.conf (a sketch of such an exclude appears at the end of this report), and got things running again.
    Also, there was a brief network outage which resulted in many "lost heartbeat" errors.  Everything resolved by ~noon CST.  eLog 21718.
    2)  2/3: MWT2_UC - job failures with lost heartbeat & stage-in errors.  From Nate at MWT2: We had a network outage at IU which caused those lost heartbeats. The nodes are still 
    down until someone there can replace the switch.  eLog 21729.
    3)  2/3: US sites HU_ATLAS_Tier2, UTA_SWT2, SWT2_CPB - job failures due to a problem with atlas release 16.6.0.1.  Xin reinstalled the s/w, issue resolved.  ggus 66992-94, 
    RT 19389-91 tickets closed, eLog 21732-34.
    4)  2/4-2/5: BNL-OSG2_DATADISK transfer errors such as "failed to contact on remote SRM [httpg://dcsrm.usatlas.bnl.gov:8443/srm/managerv2]. Givin' up after 3 tries]."  Issue was 
    due to excessive load on the dCache pnfs server - now resolved.  ggus 67005 closed, eLog 21745.
    5)  2/5: The number of running production jobs in the U.S. cloud temporarily decreased - from Michael: The reason for the reduced number of running jobs was that a file system on one of
    the Condor-G submit hosts filled up earlier today. An alarm was triggered and Xin started cleaning up the filesystem a couple of hours ago. You will see the US cloud at full
    capacity shortly.  eLog 21797.
    6)  2/5-2/6: SWT2-CPB-MCDISK file transfer failures.  Issue understood and resolved - from Patrick: The SRM failed when the partition containing bestman filled up due to logging.  
    The logs were removed and the srm restarted.  ggus 67070 / RT 19394 closed, eLog 21902.
    7)  2/6: MWT2_UC - job failures with the error "Can't find [AtlasProduction_16_0_3_6_i686_slc5_gcc43_opt]."  Xin was eventually able to install this cache (initially had a problem 
    accessing the CE due to a load spike) - issue resolved.  ggus 67074 closed, eLog 21856.
    8)  2/7: IllinoisHEP lost heartbeat job failures.  From Dave at Illinois: These were caused by a problem on our NFS server early this morning.  The problem was fixed, but only 
    after the currently running jobs all failed.  ggus 67121 closed, eLog 21907.
    9)  2/8: NET2_DATADISK - failing functional tests with "failed to contact on remote SRM" errors.  Issue resolved - from Saul: Fixed (bestman needed a restart when we updated 
    our host certificate).  ggus  67145 closed, eLog 21912.
    10)  2/8: OU_OCHEP_SWT2_DATADISK failing functional tests with "failed to contact on remote SRM" errors.  Horst couldn't find an issue on the OU end, and subsequent 
    transfers were succeeding.  ggus 67146 closed, eLog 21913.
    11)  2/8: FTS errors for transfers to a couple of U.S. cloud sites.  The messages indicated a full disk on the FTS host: "ERROR MSG: [FTS] FTS State [Failed] FTS Retries [1] Reason [AGENT
    error during TRANSFER_SERVICE phase: [INTERNAL_ERROR] cannot create archive repository: No space left on device]."  Issue resolved by Hiro.  ggus 67132 closed, eLog 21905.
    
    Follow-ups from earlier reports:
    (i)  12/17, 12/20:  ANALY_SWT2_CPB was auto-blacklisted twice.  Hammercloud test jobs were failing due to the fact that a required db release file was not yet transferred to the site 
    when the first jobs started up.  Once the transfer completed the test jobs began to complete successfully.  Discussion underway about how to address this issue.
    (ii)  1/9: AGLT2 - low-level of job failures with the error "Put error: lfc_creatg failed with (2704, Bad magic number)."  Site is investigating.
    (iii)  1/14: AGLT2_PHYS-SM - file transfer failures due to "[USER_ERROR] source file doesn't exist."  ggus 66150 in-progress, https://savannah.cern.ch/bugs/?77036.  
    Also https://savannah.cern.ch/bugs/index.php?77139.
    1/25: Update from Shawn:
    I have declared the 48 files as "missing" to the consistency service. See https://twiki.cern.ch/twiki/bin/view/Atlas/DDMOperationProcedures#In_case_some_files_were_confirme and 
    you can track the "repair" at http://bourricot.cern.ch/dq2/consistency/.  Let me know if there are further issues.
    Update 1/28: files were declared 'recovered' - Savannah 77036 closed.  (77139 dealt with the same issue.)  ggus 66150 in-progress.
    (iv)  1/19: BNL - user reported a problem while attempting to download files from the site - for example: "httpg://dcsrm.usatlas.bnl.gov:8443/srm/managerv2: CGSI-gSOAP running 
    on t301.hep.tau.ac.il reports Error reading token data header: Connection closed."  ggus 66298.  From Hiro: There is a known issue for users with the Israel CA having problems accessing
    BNL and MWT2. This is being actively investigated right now. Until it is completely resolved, users are advised to submit a DaTRI request to transfer their datasets to some other
    sites (LOCALGROUPDISK area) for downloading.
    (v)  1/21: File transfer errors from AGLT2 to MWT2_UC_LOCALGROUPDISK with source errors like "FTS State [Failed] FTS Retries [1] Reason [SOURCE error during
    TRANSFER_PREPARATION phase:[USER_ERROR] source file doesn't exist]."  https://savannah.cern.ch/bugs/index.php?77251, eLog 21440.
    (vi)  1/27: all U.S. sites received an RT & ggus ticket regarding the issue "WLCG sites not publishing GlueSiteOtherInfo=GRID=WLCG value."  Consolidated into a single goc ticket, 
    https://ticket.grid.iu.edu/goc/viewer?id=9871.  Will be resolved in a new OSG release currently being tested in the ITB.
    (vii)  1/30: AGLT2 - job (stage-out: "Internal name space timeout lcg_cp: Invalid argument") & file transfer errors ("failed to contact on remote SRM [httpg://head01.aglt2.org:8443/srm/managerv2]").  
    From Shawn: This morning around 8 AM Eastern time our postgresql server for the dCache namespace (Chimera) filled its partition with logging info (over 10 GB in the last 24 hours). This was 
    traced to multiple attempts to re-register a few files over and over.  We have cleaned up space on the partition and modified the logging to be "terse" so this won't happen as easily in the future.  
    ggus 66794 in-progress, eLog 21616.
    Update 2/3: issue resolved by reducing the level of postgresql logging.  ggus 66794 closed, eLog 21717.
    (viii)  2/2: UTD-HEP set off-line at the request of the site admin.  Rolling blackouts in the D-FW area (unfortunately).  eLog 21702.
    Update 2/8: site recovered from power issues - test jobs completed successfully - set back on-line.  eLog 21901,
    https://savannah.cern.ch/support/index.php?119022.
    (ix)  2/2: WISC_DATADISK - failing functional tests with file transfer errors like " Can't mkdir: /atlas/xrootd/atlasdatadisk/step09]."  ggus 66897 in-progress, eLog 21695.
    Update 2/4: Site admin reported issue was resolved.  No more errors, ggus 66897 closed.
    
    
    • Automatic release installation is not yet deployed everywhere. It would be nice to get all sites consistently using the same system.
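
    As context for item 1 above (yum-autoupdate replacing the custom-built postgres90 on head02), pinning a locally built package is usually done with an exclude line in /etc/yum.conf. A minimal, hedged sketch; the package glob is an assumption and not necessarily what was used at AGLT2:

        # /etc/yum.conf
        [main]
        ...
        exclude=postgresql90*    # keep automatic updates from replacing the custom build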

DDM Operations (Hiro)

Throughput and Networking (Shawn)

  • NetworkMonitoring
  • https://www.usatlas.bnl.gov/dq2/throughput
  • Now there is FTS logging to the DQ2 log page at: http://www.usatlas.bnl.gov/dq2log/dq2log (type in 'fts' and 'id' in the box and search).
  • last week:
    • Off week; Tom is adding the latency matrix monitoring
    • DYNES - the proposals all looked strong - will likely accept all of them, with clarifications. There will be a BOF at the Joint Techs meeting next week at Clemson covering milestones and schedule; it will be available remotely.
  • this week:
    • Last week's meeting was skipped - the perfSONAR performance matrix was sent around. Sites are requested to please follow up.
    • LHCOPN meeting tomorrow in Lyon - a need for better monitoring; Jason will send summary notes.
    • DYNES - there will be a phased deployment: first the PI and co-PI sites, then 10 sites at a time, etc. There was a meeting and an announcement at Joint Techs last week. The plan is to deploy all sites in the instrument by the end of the year; there may be a separate call for additional participants. Everyone who applied has been provisionally accepted.

Global Xrootd: Tier 3 (Doug), Tier 2 (Charles)

last week(s):
  • Charles - working on HC tests running over federated xrootd. Also working on the xrd-lfc module - it requires a voms proxy; looking at lfc-dli, but performance is poor and it is being deprecated. Can we use a server certificate?
  • Wei - discussed with Andy regarding the checksum issue - may require architectural change.
  • HC tests are running pointed at the UC local xrootd redirector (through ANALY_MWT2_X). A few job failures (9/200) are being tracked down. The event rate is not as good as with dcap; may need more tuning of the xrootd client. Hiro will set this up at BNL.
  • Local tests of dcap versus xrootd show apparently a factor of 2 improvement.
this week:
  • Development release in dq2 for the physical path
  • rpms from OSG - adler32 bug fixed; will work on testing re-installation

Site news and issues (all sites)

  • T1:
    • last week(s): Comprehensive intervention at the Tier 1 - Saturday to Monday - various services were upgraded. Doubled bandwidth between the storage farm and worker nodes - up to 160 Gbps. Several grid components upgraded - moved to SL5 on some nodes. CE upgraded. A number of failure modes discussed with NEXAN - new firmware for the disk arrays, to improve identification of faulty disks; this will further improve availability and performance. Hiro - on storage management, dCache upgraded to 1.9.5-23 and pnfs to 3.1.18; postgres upgraded to 9.0.2, and the backend disk area changed (hardware swap to get more spindles). Hot standby for postgres. All dCache changes went okay. LFC upgraded to 1.8.0-1, a significant upgrade. Should discuss with OSG packaging this version.
    • this week: Will be adding expansion chassis for the IBM storage servers tomorrow. Due to the maintenance work on the storage servers to add more capacity, some of the storage will go offline for a short period tomorrow (Feb 10th). Since only a small fraction of the storage servers is affected, there is no scheduled downtime associated with this activity. However, it is expected that users (/production/DDM) will experience sporadic connectivity problems, particularly for reading. The impact on writing should be minimal (if any). Hiro: these are NEXAN shelves. Hiro is busy testing storage - finding some strange pre-fetching performance (ZFS).

  • AGLT2:
    • last week: Downtime yesterday - changes to our LAN at MSU, including the routers connecting to the regional network. Work ongoing: dCache upgrade, and Condor (more stable negotiator). Router work on-going at UM. Expect to be back later.
    • this week: Downtime next Monday to finish up tasks in advance of data taking. New SAS SSDs to arrive. Networking on VMWare systems suspect - requires full shutdown rather than simple restart. Also finding some packet loss to/from Condor-VM job manager, perhaps due to incomplete spanning-tree configuration (to be fixed on Monday). Working with Condor team to build some robustness.

  • NET2:
    • last week(s): Meeting with Dell on Monday to finalize purchases, including Dell network equipment and some new worker nodes. An install issue is being worked on with Xin. Continue to work on detailed planning for the big move to Holyoke.
    • this week: pcache issue on the BU side. Release problem at HU with 16.0.2, 16.0.3. Will purchase Tier-3 equipment from Dell (for ATLAS and CMS). Will ramp up analysis production at HU - will require a 10G NIC at BU. There were some fiber channel problems - investigating.

  • MWT2:
    • last week(s): Continuing to install 88 worker nodes; will install new libdcap. Will install CVMFS.
    • this week: Working on connectivity/network issue with new R410s.

  • SWT2 (UTA):
    • last week: Working on Tier 3; working on federated xrootd monitoring. All running fine otherwise.
    • this week: Iced-in last week. Will work on mapping issue as discussed above. Will take a downtime in the next week.

  • SWT2 (OU):
    • last week: All is fine. Scheduling installation of the R410s, which may require electrical work. Bestman2 has been tested extensively over the past months.
    • this week: Shutdown last week, and today. Working on mapping issue.

  • WT2:
    • last week(s): Working with Dell on new purchase, want low-end 5400 rpm drives for "tape"-disk.
    • this week: All is well. Setting up a PROOF cluster; the hardware is set up - 7 nodes, each with 16 cores, 24 GB memory, and 12x2TB disks. In April there will be power outages at SLAC. Considering the 6248 to provide gigabit; may use the 8024F for aggregation.

Carryover issues ( any updates?)

Release installation, validation (Xin)

The issue of the validation process, completeness of releases at sites, etc. Note: https://atlas-install.roma1.infn.it/atlas_install/ - site admins can subscribe, and get notified of release installation & validation activity at their site.

  • last report(s)
    • AGLT2 now running Alessandro's system - now in automatic installation mode. Will do other sites after the holiday.
    • MWT2_UC is using the new system, but only for 16-series releases. If it works well, this will be extended.
    • Next site - BU - once BDII publication issue resolved, will return to this.
    • WT2, IU, SWT2 - depending on Alessandro's availability.
  • this meeting:

AOB

  • last week
  • this week


-- RobertGardner - 08 Feb 2011
