Reputation: 854
I have tried to set up a logical way of using Subversion (or any other VCS really, but I've played most with SVN).
Let me start by explaining my problem. We are a few coworkers who need to change our customers' websites on a daily basis; however, the customers can also change some files themselves via the "admin panel" on their website. On top of that, some files are generated on the servers: log files, export/import data, sitemaps, etc.
We currently work over FTP, and to maintain some kind of file history we make .bak.revision#.php files, but that is almost unmanageable. Moreover, I am using NetBeans as my IDE, which requires local copies of the files.
To start work on a project I need to sync the changed files from the customer's site (or, worst case, download the full project), which can take forever.
I really enjoy working with NetBeans, but these days I sometimes end up using Notepad++ instead.
I have set up a pretty nice way of working with SVN using a post-commit hook that exports the project into the HTTP folder; that way I can update my local copy, make changes and then commit. Done.
But I don't want to overwrite files that might have been changed on the server, for example the customer's CSS file, which they can edit themselves.
So I figure I can't be the only one with this problem: how do you work with SVN if not everybody in the team does (in my case, the customers)?
I may be able to use svn:ignore on those files, I guess?
(This is a test environment.) My post-commit hook is very simple: svn export file:///svn/repo/LIVE /var/www/html/ --force
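In other words, the whole hooks/post-commit file is little more than that one line; a sketch (the repository path and web root are just my test setup):
#!/bin/sh
# post-commit hook (sketch): export the LIVE folder of the repo over the web root
svn export file:///svn/repo/LIVE /var/www/html/ --force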
EDIT A plausible solution I will look into: if I create a pre-commit, or even better a ?pre-update?, hook that imports/overwrites the files already in the repo, that would solve my problem, would it not?
NVM The import itself runs a commit, so commit plus pre-commit would not work well: it would create an endless loop.
Kindest Regards Iesus
Upvotes: 1
Views: 229
Reputation: 854
:::My findings:::
(@LazyBadger: this answer was too long to put in a comment.)
Export does not work for this; I need a real working copy in the web root. But I can deny access to its .svn directories directly in httpd.conf, in other words no hassle with .htaccess files.
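For illustration, the httpd.conf part might look something like this (Apache 2.2 syntax, and only a sketch of the idea, not my exact config):
<DirectoryMatch "^/.*/\.svn/">
    Order deny,allow
    Deny from all
</DirectoryMatch>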
I can use svn update to keep the web directory up to date from a post-commit script (example below).
#!/bin/sh
REPOS="$1"
REV="$2"
MESSAGE=$(svnlook propget --revprop -r $REV $REPOS svn:log)
AUTHOR=$(svnlook propget --revprop -r $REV $REPOS svn:author)
CHANGES=$(svnlook changed -r $REV $REPOS)
# Log the commit details, then bring the live working copy up to date
echo "" >> /$REPOS/SVN.Log
echo "Commit | $REPOS | $REV | $MESSAGE | $CHANGES | $AUTHOR" >> /$REPOS/SVN.Log
echo update >> /$REPOS/SVN.Log
cd /var/www/html/
svn update >> /$REPOS/SVN.Log
echo done >> /$REPOS/SVN.Log
To keep the customer-edited files in sync I can either use svn:ignore (to simply not have those files in the repository at all), which will most probably be used for binary files and log files to keep the size of the repo under control (example below).
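For example, setting the property on the web root could look like this (the patterns are only illustrations of the kind of generated files involved):
cd /var/www/html/
svn propset svn:ignore "*.log
export
sitemap.xml" .
svn commit -m "Ignore generated files" -q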
Or we can use a commit daemon to keep the repository up to date with the server-side edits. Unfortunately the EasySVN solution (great software by the way, free as well, and it can be used with any server) is not available for a Linux environment (just yet).
By the way, EasySVN could easily be used for backing up files to any SVN repository, with the great advantage of file history. Heads up: in order to use svn+ssh you will need an SSH client that can handle the -q flag (for example from this package: http://sharpsvn.open.collab.net/servlets/ProjectProcess?pageID=3794), and use Pageant from PuTTY.
But I could create a commit script, or run the commits via cron once in a while (a sample crontab entry is shown after the commit script below). It really should be embedded into the PHP save function, but that's a major job; perhaps a future release of our software will do it :)
Example commit script:
#!/bin/sh
cd /var/www/html/ && svn commit -m webedit -q
That one can of course be improved a lot, for example by checking for changes first ^^.
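For the cron variant mentioned earlier, a crontab entry along these lines would run the commit script every five minutes (the path and the interval are placeholders; adjust to taste):
*/5 * * * * /bin/sh /PATH/TO/commitscript.sh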
Example daemon-like script to check for changes and commit if needed:
#!/bin/bash
# Working copies to monitor for changes
SVNREPOS=( "/var/www/html/" )
cd /PATH/TO/THIS/SCRIPT/
LCK_FILE=`basename $0`.lck
if [ -f "${LCK_FILE}" ]; then
    echo LOCK exists
    MYPID=`head -n 1 "$LCK_FILE"`
    if [ -n "`ps -p ${MYPID} | grep ${MYPID}`" ]; then
        # Another instance is still running; leave it alone
        echo `basename $0` is already running [$MYPID].
        exit
    else
        # Stale lock file: remove it and restart ourselves
        rm -f "${LCK_FILE}"
        sh $0
        exit
    fi
else
    # Record our PID in the lock file, then poll forever
    echo $$ > "$LCK_FILE"
    while :; do
        for i in "${SVNREPOS[@]}"; do
            cd "$i"
            # If there is anything to commit, commit it
            test ! $(svn st | wc -l) -eq 0 && svn ci -m SERVERCOMMIT -q
            sleep 5s
        done
    done
    # Not normally reached: clean up and restart if the loop ever exits
    echo ENDED LOOP IT SHOULD NOT HAVE DONE THAT
    rm -f "${LCK_FILE}"
    sh $0
fi
Init.d script to start/stop the daemon:
#!/bin/bash
# SVN CHANGED MONITOR DAEMON
. /etc/init.d/functions
RETVAL=0
SCRIPTDAEMON=/PATHTO/changemonitor.sh

# Find the PID of the running monitor script, if any
getpid() {
    pid=`ps -eo pid,command | grep "sh $SCRIPTDAEMON" | grep -v "grep" | awk '{ print $1 }'`
}

start() {
    gprintf "Starting SVNChangeMonitor: "
    echo_failure    # provisional status; overwritten by echo_success below on the console
    getpid
    if [ -z "$pid" ]; then
        sh $SCRIPTDAEMON &
    fi
    getpid
    if [ -n "$pid" ]; then
        echo_success
    else
        echo_failure
    fi
    echo
    return $RETVAL
}

stop() {
    gprintf "Stopping SVNChangeMonitor: "
    echo_failure    # provisional status; overwritten by echo_success below on the console
    getpid
    if [ -n "$pid" ]; then
        kill -s kill $pid
        rm -f $SCRIPTDAEMON.lck
        getpid
        if [ -z "$pid" ]; then
            echo_success
        else
            echo_failure
        fi
    else
        echo_failure
    fi
    echo
    return $RETVAL
}

# See how we were called.
case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        stop
        start
        gprintf "SVNChangeMonitor is restarted\n"
        ;;
    status)
        getpid
        if [ -n "$pid" ]; then
            gprintf "SVNChangeMonitor (pid %s) is running." "$pid"
            echo_success
            echo
        else
            RETVAL=1
            gprintf "SVNChangeMonitor is stopped\n"
            echo
        fi
        ;;
    *)
        gprintf "Usage: %s {start|stop|restart|status}\n" "$0"
        echo
        exit 1
        ;;
esac
exit $RETVAL
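Assuming the script is saved as, say, /etc/init.d/svnchangemonitor and made executable (the name and path are just an example), it can then be driven like any other init script:
chmod +x /etc/init.d/svnchangemonitor
service svnchangemonitor start
service svnchangemonitor status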
Thanks for your kind help and pointers!
Upvotes: 0
Reputation: 5129
You could use a DVCS locally on the webserver. Compared to a centralized VCS, a decentralized one decouples what happens on the customer side from what happens within your team. Also, this scenario is all about merging, and any DVCS worth its salt is all about merging and does it best.
I give here a concrete workflow with Git, but it is doable with any other DVCS. To keep the workflow simple I will assume there is a repository {TEAM} that is reachable both from the team and from the webserver, but there are several ways to do otherwise. You just can't push from your development team directly to the webserver's repository in a way that changes the files; see my explanation on why you can't push to a checked-out branch in Git.
In {TEAM}, you have one main branch, TEAM/dev, where your development occurs. You create the webserver's repository like this:
~$ git clone -b dev {TEAM} /var/www/mystuff
~$ cd /var/www/mystuff
mystuff$ git checkout -b local-changes
At some point, you commit whatever the webserver has changed. That could be done by a cron job, some PHP code, or a manual intervention:
mystuff$ git ci --all -m "Webserver modifications"
The --all option tells Git to commit every modification to files it already tracks. If the webserver may create new files that you want to track, you'll have to set up ignore patterns (for log files and such) and commit this way:
mystuff$ git add .
mystuff$ git ci -m "Webserver modifications"
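For example (the patterns here are only illustrations; use whatever your site actually generates), the ignore patterns could be set up once like this:
mystuff$ echo "*.log" >> .gitignore
mystuff$ echo "export/" >> .gitignore
mystuff$ echo "sitemap.xml" >> .gitignore
mystuff$ git add .gitignore
mystuff$ git ci -m "Ignore generated files"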
Note that at this point no communication has occurred between the webserver and your team since the initial clone. That's the strength of a DVCS: the webserver's operation and its history-keeping are totally autonomous. Your team could go bankrupt or be wiped out by a nuke; not only would the webserver keep working, but someone could step in and continue what you were doing seamlessly.
You can track the webserver by having its repository be a remote on your side (you do that once only):
myrepo$ git remote add webserver ssh://webserver/var/www/mystuff
myrepo$ git fetch webserver
myrepo$ git branch changes webserver/local-changes
When you want to publish your new code to the webserver, you first get its history and check how it merges:
myrepo$ git co changes
myrepo$ git pull
This will fetch new commits from the remote webserver and update the changes branch to what local-changes is now on the webserver.
myrepo$ git merge dev
Now you check how the merge went and when you have resolved conflicts and committed the merge, you make it available in {TEAM}:
myrepo$ git push {TEAM} changes
On the webserver, you now just have to pull that merge. The only problem is that new commits may have been made there in the meantime. To be safe, you can allow the branch to be updated only when your merge is a direct descendant of the current commit (Git calls this a fast-forward merge):
mystuff$ git pull --ff-only {TEAM} changes
If no fast-forward is possible, this command doesn't change the files and exits with a non-zero code, which makes it possible to have an automated script trigger it and let you know when it didn't actually happen.
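For instance, a small script on the webserver could attempt the fast-forward pull and report when a manual merge is needed (the paths, branch name and mail address are only placeholders following the example above):
#!/bin/sh
# Try a fast-forward-only update of the live working copy; if it is refused,
# the merge has to be redone against the newer webserver commits.
cd /var/www/mystuff || exit 1
if ! git pull --ff-only {TEAM} changes; then
    echo "Fast-forward not possible; redo the merge with the latest webserver commits" \
        | mail -s "mystuff deploy blocked" webmaster@example.com
fi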
Upvotes: 1
Reputation: 10582
Rather than trying to force the customer to use SVN (knowingly or not), just treat the customer site as a replica and use a tool like Unison to keep the unmanaged side (the customer's) in sync with the managed side (yours). Unison is somewhat like rsync, except that it handles changes in both directions, so files that the customer changes will get copied down to your system.
So your workflow is to run Unison periodically: the customer's changes get mirrored down into your managed copy, and your own changes are propagated back up the same way.
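For illustration, a two-way sync could be kicked off like this (the host and paths are placeholders, and the exact flags depend on your Unison setup):
# Propagate changes in both directions; -batch skips the interactive prompts
unison -batch /path/to/local/copy ssh://customer.example.com//var/www/html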
Upvotes: 0
Reputation: 97395
The most logical and technically sound way in your case (but not the easiest) would be:
A more exotic way: Subversion for everybody (for some users, behind the scenes).
For developers this way changes nothing. For the technically mediocre customers, you suggest, prepare and configure EasySVN (and, therefore, an Assembla repository): they just get a local folder (or tree), all changes in which are (automagically) synced with the related repository (the reverse direction also works: background updates from the repo are delivered to the working copy). The SSH or FTP tools of the Assembla space transfer the changes to the site (automatically on commit, or on demand by hand).
Upvotes: 1