Cutch

Reputation: 215

FTP File Transfers Using Piping Safely

I have a file forwarding system where a bunch of files are downloaded to a directory, de-multiplexed, and copied to individual machines. The files are forwarded as soon as the master server receives them, and they normally arrive in bursts. (Authentication is by ssh keys.)

This script creates the sftp session and feeds it by tailing a FIFO pipe.

HOST=$1
pipe=/tmp/pipes/${HOST%%.*}

# Is an sftp session to this host already running?
ps aux | grep -v grep | grep sftp | grep "user@$HOST" > /dev/null
if [[ $? == 0 ]]; then
    echo "FTP is Running on this Server"
    exit
else
    # No session: kill any orphaned tail still attached to the pipe
    pid=`ps aux | grep -v grep | grep tail | tr -s ' ' | grep $pipe`
    [[ $? == 0 ]] && kill -KILL `echo $pid | cut -f2 -d' '`
fi
if [[ ! -p $pipe ]]; then
    mkfifo $pipe
fi
# Keep the FIFO open and stream anything written to it into sftp
tail -n +1 -f $pipe | sftp -o 'ServerAliveInterval 60' user@$HOST > /dev/null &
echo cd /tmp/data >>$pipe #Sends Command to Host
echo "Started FTP to $HOST"

Update: I ended up changing the cleanup code to use "ps aux" to check whether an sftp session is running and, if not, whether the tail -f is still running, grepping by user@host and by the name of the pipe respectively. This check is done when the script is called, and the script is called whenever I try to upload a file.
i.e.:

FILENAME=`basename $1`
function transfer {
    echo cd /apps/data >> $2 # For Safety
    echo put $1 .$FILENAME >> $2
    echo rename .$FILENAME $FILENAME  >> $2
    echo chmod 0666 $FILENAME  >> $2
}
./ftp.sh host
[ -p  $pipedir/host ] && transfer $1 $pipedir/host

Files received on the master server are caught by Incron, which writes a put command and the location of the newly available file to the fifo pipe, to be sent by sftp (the rename is also performed).
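Roughly, the incrontab entry on the master is something like the line below; the watched directory and the wrapper-script path here are just placeholders, not my real paths. Incron expands $@ to the watched directory and $# to the file that triggered the event, and the wrapper then calls the transfer function above.

/data/incoming IN_CLOSE_WRITE /usr/local/bin/forward.sh $@/$#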

My question is: is this safe? Could it crash on sftp errors or events? I'm not really worried about login errors.
The goal is to reduce the number of ftp logins to a single session per minute (or longer), while still allowing files to be forwarded as they're received, with dynamic commands. I'd prefer to stick to standard Ubuntu packages, if possible.

EDIT: After testing and working through some issues, the server simply runs with:

[[ -p $pipe ]] && echo FTP is Running on this Server
ln -s $pipe $lock &> /dev/null || (echo FTP is Running on this Server && exit)
[[ ! -p $pipe ]] && mkfifo $pipe
( tail -n +1 -F $pipe & echo $! > $pipe.pid ) \
    | tee >( sed "/tail:/ q" >/dev/null && kill $(cat $pipe.pid) |& rm -f $pipe >/dev/null; ) \
    | sftp -i ~/.ssh/$HOST.rsa -oServerAliveInterval=60 user@$HOST &
rm -f $lock

It's rather simple, but it works nicely.
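For reference, the variables above come from the same setup as the first script; $lock isn't defined in the snippet, so the path below is just a placeholder:

HOST=$1
pipe=/tmp/pipes/${HOST%%.*}   # as in the original script
lock=$pipe.lock               # placeholder -- $lock is not shown above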

Upvotes: 3

Views: 2330

Answers (1)

Zoltán Haindrich

Reputation: 1808

You might be interested in setting up a simpler (and more robust) synchronization infrastructure: if a given host is not connected when a file arrives, it never receives it (if I understand your code correctly).

I would do something like

 rsync -a -e ssh user@host:/apps/data pathToLocalDataStore

on the client machines, either periodically or triggered by an event. rsync intelligently synchronizes the files by their timestamp and size (-a implies -t).

The event would be the termination of some process. The client does the following (configure private key usage in ~/.ssh/config for the host; an example entry is sketched after the loop):

#!/bin/bash
while :; do
  # Block until the server kills sleepListener (or the 600 s timeout expires)
  ssh user@host /srv/bin/sleepListener 600
  # Then pull whatever is new
  rsync -a -e ssh user@host:/apps/data pathToLocalDataStore
done
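A minimal ~/.ssh/config entry for that could look like this (host name and key path are placeholders):

Host host
    User user
    IdentityFile ~/.ssh/host.rsa
    ServerAliveInterval 60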

On the server, /srv/bin/sleepListener is a symbolic link to /bin/sleep. After receiving a new file, the server runs:

killall sleepListener
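Since you already use incron on the master, that hook can be just another incrontab entry (the watched directory below is a placeholder):

/apps/data IN_CLOSE_WRITE killall sleepListener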

Note: a full check is performed every 10 minutes anyway, so it doesn't matter if nodes go offline or come back online.

Upvotes: 1
