1.4.4.2.4. Accessing the grid storage system from worker nodes
In the last section, we saw how to connect to the grid storage system from a computer, including the submit nodes that we maintain.
We will now see how it can be accessed from within worker nodes.
If you haven’t done so already, you will need to set up user-side access on our submit nodes by following the User credentials section. You will also need to follow Creating a proxy periodically.
As usual, you can create a proxy using
voms-proxy-init --voms souk.ac.uk --valid 168:0
This creates an Attribute Certificate (AC) at /tmp/x509up_u$UID.
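You can inspect the generated proxy, including its remaining validity, with voms-proxy-info:
voms-proxy-info --all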
1.4.4.2.4.1. Example job
From now on, we assume you have created a proxy recently and that it has not expired.
In gfal.ini, we set use_x509userproxy, and HTCondor will automatically pick up the generated AC from its standard location above and transfer it to the worker node for us.
executable = gfal.sh
log = gfal.log
output = gfal.out
error = gfal.err
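# transfer the proxy from its standard location to the job sandbox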
use_x509userproxy = True
should_transfer_files = Yes
when_to_transfer_output = ON_EXIT
request_cpus = 1
request_memory = 512M
request_disk = 1G
queue
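As an aside, if your proxy lives somewhere other than the standard location, HTCondor also accepts an explicit path via the x509userproxy submit command, for example
x509userproxy = /path/to/your/proxy
(the path here is a placeholder; with the setup above, use_x509userproxy = True is all you need).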
And in gfal.sh,
#!/bin/bash -l
# helpers ##############################################################
COLUMNS=72
print_double_line() {
    # repeat "=" COLUMNS times via brace expansion, then end the line
    eval printf %.0s= '{1..'"${COLUMNS}"\}
    echo
}
print_line() {
    # repeat "-" COLUMNS times via brace expansion, then end the line
    eval printf %.0s- '{1..'"${COLUMNS}"\}
    echo
}
########################################################################
PROJ_DIR='bohr3226.tier2.hep.manchester.ac.uk//dpm/tier2.hep.manchester.ac.uk/home/souk.ac.uk'
for PROTOCOL in davs root; do
    print_double_line
    echo "Testing gfal-ls with $PROTOCOL"
    print_line
    gfal-ls -alH --full-time "$PROTOCOL://$PROJ_DIR"

    print_double_line
    echo "Testing gfal-mkdir with $PROTOCOL"
    gfal-mkdir -p "$PROTOCOL://$PROJ_DIR/$USER/testing"
    print_line
    gfal-ls -alH --full-time "$PROTOCOL://$PROJ_DIR/$USER"

    print_double_line
    echo "Testing gfal-rm with $PROTOCOL"
    print_line
    gfal-rm -r "$PROTOCOL://$PROJ_DIR/$USER/testing"

    print_double_line
    echo "Testing gfal-copy with $PROTOCOL"
    echo "hello $PROTOCOL" > "hello-$PROTOCOL.txt"
    gfal-copy -f "hello-$PROTOCOL.txt" "$PROTOCOL://$PROJ_DIR/$USER"
done
Note that any of the gfal commands from this section can be used here: you can copy files from the grid storage system to the worker node at the beginning of your script, or copy files from the worker node back to the grid storage system at the end of your script.
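For instance, a job script could stage data in and out like this (a minimal sketch reusing the PROJ_DIR variable defined above; input.dat, output.dat, and the processing step are hypothetical):
# stage in: fetch input from the grid storage system to the worker node
gfal-copy "davs://$PROJ_DIR/$USER/input.dat" input.dat
# ... run your workload here, producing output.dat ...
# stage out: push results back to the grid storage system
gfal-copy -f output.dat "davs://$PROJ_DIR/$USER/output.dat"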
Lastly, submit and see what happens[1]
condor_submit gfal.ini; tail -F gfal.log gfal.out gfal.err
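While the job is queued or running, you can also check its status with
condor_q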
After the job finishes, you can check that your output files were copied to the grid storage system, like so
gfal-ls davs://bohr3226.tier2.hep.manchester.ac.uk//dpm/tier2.hep.manchester.ac.uk/home/souk.ac.uk/$USER/
gfal-cat davs://bohr3226.tier2.hep.manchester.ac.uk//dpm/tier2.hep.manchester.ac.uk/home/souk.ac.uk/$USER/hello-davs.txt
...
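Once you are done, you can clean up the test files written by the example job with gfal-rm, e.g.
gfal-rm davs://bohr3226.tier2.hep.manchester.ac.uk//dpm/tier2.hep.manchester.ac.uk/home/souk.ac.uk/$USER/hello-davs.txt
gfal-rm davs://bohr3226.tier2.hep.manchester.ac.uk//dpm/tier2.hep.manchester.ac.uk/home/souk.ac.uk/$USER/hello-root.txt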