
InitialLFCConsiderations

Questions and Issues

From Hiro:
  • Should DQ2/DDM be used for Tier3 facilities (e.g. support issues, CERN)?
    • How does a T3 get files, and from where? Via dq2-get? From anywhere?
    • What is the catalog for a T3 that has decent storage? That is likely, since fairly inexpensive single 1TB SATA drives are already available (with more to come); a T3 can easily build a few ~10TB storage servers that are cheap yet offer reasonable performance, for PROOF sessions for example.
  • ROOTD files for PROOF analysis jobs
    • LFC supports rootd files like any other file, just as LRC does, but the LCG client does not. Who is going to fix/deal with this: 1) the LCG client, 2) the DQ2 client, or 3) the LFC client?
    • To give you a concrete example:
      • Suppose LRC has a file with lfn=test1.root and the following PFNs: srm://dcsrm.usatlas.bnl.gov/pnfs/usatlas.bnl.gov/test1.root and root://acas0420.usatlas.bnl.gov/test/test1.root. I changed the LRC interface to return only the SRM endpoint by default (essentially the 3rd choice above, since I can change that code), so DQ2 clients did not have to change at all for most people. To get rootd server files (for the PROOF farm), users have to pass a special flag to the web interface. If we move to LFC, someone has to implement the same behavior, and depending on where the change is made, other components may have to change as well. (See the sketch after this list.)
  • Does the lcg client work with a full path for SRMv2 (for BeStMan)? I need to test this. If it does not work, again, who will fix it, and at which stage?
  • For T2s, what is our deployment model for LFC: centralized or decentralized? With which backend, Oracle or MySQL?
  • For T3s, what catalog are they going to use? If US ATLAS uses a centralized model, do T3s use the BNL LFC too? (In that case LFC has to work with multiple front ends, and BNL needs a really solid Oracle backend.) Or do T3s use something else? Or, if we use a decentralized model, do they run their own catalog or use that of an associated T2?
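
To make the rootd/SRM question above concrete, here is a minimal sketch of what a client-side fix (choice 2 or 3) might look like, using the lfc Python bindings that ship with the LFC client. The default-to-SRM policy mirrors the current LRC behavior; the LFC path in the usage comment is hypothetical.

    import lfc

    def get_pfns(lfn_path, protocol="srm"):
        # lfc_getreplica returns (return_code, replica_list); each replica
        # object carries its PFN in the .sfn attribute.
        rc, replicas = lfc.lfc_getreplica(lfn_path, "", "")
        if rc != 0 or replicas is None:
            raise RuntimeError("LFC lookup failed for %s" % lfn_path)
        # Keep only PFNs whose URL scheme matches the requested protocol,
        # reproducing the LRC default of returning SRM endpoints unless
        # the caller explicitly asks for rootd replicas.
        return [r.sfn for r in replicas if r.sfn.startswith(protocol + "://")]

    # Hypothetical LFC path, for illustration only:
    #   get_pfns("/grid/atlas/test/test1.root")          -> srm://... PFNs
    #   get_pfns("/grid/atlas/test/test1.root", "root")  -> root://... PFNs

Putting such a filter in the DQ2 client (choice 2) would keep the catalog itself generic; putting it behind the LFC interface (choice 3) would, as with LRC, let most users keep their tools unchanged.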

Migration status (Hiro)

  • Downtime would be minimal. Here is the complete process (a sketch of the final catch-up step follows this list).
    1. As soon as the "production" (not "test") database is up, it will be copied to a different machine and the official migration will start. Meanwhile, LRC will still be used for production.
    2. Once all files are copied from this machine (in less than a week), a new copy of the catalog will be made and used for the rest of the migration. Meanwhile, LRC will still be used for production, and LRC downtime for the next day (within 24 hours) will be announced.
    3. Once this second round of copying is done, very few files will be left to migrate. LRC will be shut down, the final migration will be done (in a very short time), and LFC will be brought up.
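
As a sketch of the final catch-up in step 3, assuming the LRC MySQL backend records a modification timestamp per entry; the table and column names below are hypothetical, as is the register_in_lfc helper.

    import MySQLdb  # Python bindings for the LRC's MySQL backend

    def migrate_new_entries(since, register_in_lfc):
        # Pull every LRC entry modified after the `since` timestamp and
        # hand it to a caller-supplied function that registers it in LFC.
        conn = MySQLdb.connect(host="lrc-db.example", db="lrc")  # hypothetical
        cur = conn.cursor()
        # Hypothetical schema: an `entries` table with lfn, pfn, guid
        # columns and a `modified` timestamp.
        cur.execute("SELECT lfn, pfn, guid FROM entries WHERE modified > %s",
                    (since,))
        for lfn, pfn, guid in cur.fetchall():
            register_in_lfc(lfn, pfn, guid)
        conn.close()

Because only entries newer than the second-round copy need this pass, it should fit comfortably inside the short downtime window described above.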

Status: June 11 (John)

  • Production hardware has been assigned, and LFC with an Oracle back-end has been installed, so the nitty-gritty details of starting the service are worked out.
  • Hiro's LRC-to-LFC script has been running since yesterday. The number of clients has been increased until the machine load reached 5. We've confirmed that the threads in the LFC server can run on different CPUs/cores, so it is indeed taking advantage of the hardware.
  • Hiro estimated that it will take about a week to migrate BNL's catalog. At that point, any new entries in LRC can be queried by timestamp and migrated quickly, so a flag-day switchover appears feasible with under 24 hours of downtime, probably less than 12.
  • Once all 20 million entries are migrated, we'll do some performance tests. EGEE tested with 40 million entries (with good results), but it isn't clear whether their catalog had an elaborate directory structure; ours does. (A sketch of a simple lookup test follows.)
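
As one example of the kind of performance test we might run, here is a minimal sketch that times LFC stat lookups over a list of paths, using the lfc Python bindings. The sample paths are illustrative assumptions; the session call simply avoids per-lookup connection overhead.

    import time
    import lfc

    def time_lookups(paths):
        # One session keeps a single connection open for all calls,
        # instead of reconnecting to the LFC server per lookup.
        lfc.lfc_startsess("", "lookup latency test")
        start = time.time()
        failures = 0
        for path in paths:
            stat = lfc.lfc_filestatg()
            if lfc.lfc_statg(path, "", stat) != 0:
                failures += 1
        elapsed = time.time() - start
        lfc.lfc_endsess()
        print "%d lookups in %.2fs (%.1f ms each), %d failed" % (
            len(paths), elapsed, 1000.0 * elapsed / len(paths), failures)

    # Hypothetical paths; a real test would sample actual catalog entries:
    #   time_lookups(["/grid/atlas/dq2/test/file%04d" % i for i in range(1000)])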


-- RobertGardner - 20 Aug 2008
