SupportingOSG (r2 - 25 Jun 2014 - RobertGardner)



  • Description: The OSG VO lets individual researchers and small groups get up to speed on using the OSG and understand its benefits.
  • Contact: OSG User Support <user-support@opensciencegrid.org>
  • Submission system: HTCondor
  • Workload system: GlideinWMS


  • Worker node OS should be RHEL 5 or RHEL 6
  • Worker nodes require outbound internet access (nodes can be behind NAT)
  • Worker node memory: 2 GB per job slot
  • Worker node scratch: assume 10 GB per job slot
  • OSG CA certs should be available on the worker nodes
  • Site squid (optional but strongly desired): used for access to glidein software, OASIS
  • Job preemption is okay, but we prefer 4 hours minimum wall time to prevent thrashing
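The per-slot minimums above (2 GB memory, 10 GB scratch) can be checked with a small script. This is a minimal sketch; the check_slot helper is hypothetical, not part of any OSG tooling, and in practice you would feed it values taken from your batch system's slot configuration.

```shell
# Hypothetical helper: compare a slot's memory and scratch allocation
# against the OSG VO minimums stated above (2 GB memory and 10 GB
# scratch per job slot).
check_slot() {
    mem_gb=$1
    scratch_gb=$2
    if [ "$mem_gb" -ge 2 ] && [ "$scratch_gb" -ge 10 ]; then
        echo "slot OK"
    else
        echo "slot below OSG VO minimums"
    fi
}

check_slot 2 10   # meets the stated minimums
check_slot 1 10   # too little memory
```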

Security profile

  • For every job slot, a pilot job process starts up.
  • The pilot job spawns a condor master, which spawns a condor startd, which spawns condor starters, which spawn jobs from end-users.
  • The pilot job makes TCP connections for HTTP access to either the glidein factory at UCSD or IU.
  • The pilot job may send outbound UDP and/or TCP traffic to a port on the glidein collector.
  • The startd and starter send outbound TCP traffic to two ports on one of the Condor submit machines, to communicate with the condor_schedd and condor_shadow.
  • Parrot usage:
    • Some OSG jobs run inside Parrot, using Parrot's CVMFS support to access OASIS software via HTTP. This is not needed if the site already has OASIS mounted via CVMFS.
    • Parrot makes TCP connections for HTTP access to the OASIS repository.
    • If the site provides an HTTP proxy via OSG_SQUID_LOCATION, this is used by Parrot. Otherwise, a central OASIS proxy is used.
  • Some jobs may make outbound TCP connections to access data via http, gridftp, srm, irods, ... from sites across the US.
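For sites that restrict egress, the profile above can be expressed as a firewall allow-list. The fragment below is illustrative only, not a tested ruleset: all port numbers are placeholders (substitute the actual collector and schedd/shadow ports used by your glidein factory and the submit hosts listed below), and you would normally also restrict by destination host.

```shell
# Sketch of egress rules matching the connection profile above.
# All ports here are placeholders -- confirm the real ones with the
# OSG VO contacts before deploying anything like this.

# HTTP to the glidein factories and to OASIS (or the site squid)
iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT

# UDP and/or TCP to the glidein collector (placeholder port)
iptables -A OUTPUT -p udp --dport 9618 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 9618 -j ACCEPT

# TCP to the condor_schedd / condor_shadow ports on the submit
# machines (two site-specific ports; this range is a placeholder)
iptables -A OUTPUT -p tcp --dport 9615:9616 -j ACCEPT
```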

Hosts and ports:

  • The OSG front-end is osg-flock.grid.iu.edu
  • The GOC factory is glidein.grid.iu.edu
  • The Condor submit machines
    • login host at IU: OSG-XSEDE
    • login.osgconnect.net
    • submit1.bioinformatics.vt.edu
    • iplant-condor.tacc.utexas.edu
    • glide.bakerlab.org
    • workflow.isi.edu

Setting up access

  • In GUMS, enable the OSG VO for your gatekeeper under https://your_gums_host:8443/gums/hostToGroupMappings.jsp
  • Make sure the username "osg" exists on the gatekeeper(s) and worker nodes, and that the homedir is mounted and ownership is correct.
  • On the gatekeeper(s), run gums-host-cron; there should be no output. Then check /var/lib/osg/supported-vo-list.txt and user-vo-map.txt to make sure they list the new VO. If they do not, check the log file at /var/log/gums/gums-host-cron.log
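The supported-vo-list.txt check above is easy to script. A minimal sketch: the vo_listed helper is hypothetical, and the temporary file here merely stands in for /var/lib/osg/supported-vo-list.txt on a real gatekeeper.

```shell
# Hypothetical helper: report whether a VO name appears as a whole
# word in a supported-VO list file.
vo_listed() {
    vo=$1
    list_file=$2
    if grep -qw "$vo" "$list_file"; then
        echo "yes"
    else
        echo "no"
    fi
}

# Stand-in for /var/lib/osg/supported-vo-list.txt:
tmpfile=$(mktemp)
printf 'osg\natlas\n' > "$tmpfile"
vo_listed osg "$tmpfile"
rm -f "$tmpfile"
```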

To test access

  • Ask the VO contacts above to test access
  • Or, test by mapping yourself to the OSG VO in GUMS and running a job:
    • On the Manual account mappings page (https://your_gums_host:8443/gums/manualAccounts.jsp), set up a local mapping of yourself to osg.
    • On the Manual user groups page (https://your_gums_host:8443/gums/userGroups.jsp), make sure you are a member of the group 'localusers'.
    • Then, as yourself in a shell with wn-client or wlcg-client loaded, run
      globus-job-run your_gatekeeper/jobmanager-condor /usr/bin/id
    • While the job is still running, check on your gatekeeper that it is queued with the correct username and any jobmanager settings specific to your site (e.g. priority, Condor accounting groups, PBS queue)
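Verifying the /usr/bin/id output from the test job can also be scripted. The ran_as helper below is hypothetical, and the canned id string is only an example; a real run would capture the stdout of globus-job-run.

```shell
# Hypothetical check: given the output of /usr/bin/id from the test
# job above, confirm the job ran as the expected mapped account
# (id prints the account name in parentheses after the uid).
ran_as() {
    id_output=$1
    expected=$2
    case "$id_output" in
        *"($expected)"*) echo "mapped correctly" ;;
        *)               echo "unexpected mapping" ;;
    esac
}

# Example with canned output standing in for a real job's stdout:
ran_as 'uid=1001(osg) gid=1001(osg) groups=1001(osg)' osg
```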

-- Mats Rynge (rynge@isi.edu) - 25 Jun 2014
-- RobertGardner - 25 Jun 2014

About This Site

Please note that this site is a content mirror of the BNL US ATLAS TWiki.
