Reputation: 143795
I am working on a small toy project that is getting more and more releases. Until now, the documentation has been just a set of pages on the WordPress blog I set up for the project. However, as new releases come out, I need to update the online documentation to match the most recent one.
Unfortunately, if I do so, the docs for previous releases will "disappear" as the pages are updated, so I decided to include the documentation in the release package and to keep the most recent documentation available online as a web page as well.
A trivial idea would be to wget the current docs from the WordPress pages, save them into SVN, and thus into the release package, repeating the procedure at every new release. Unfortunately, the HTML I get has to be fixed by hand to repair the links (or I would have to hack WordPress to emit a BASE tag so the HTML is easily relocatable, which I don't want to do).
How should I handle the requirement of having, at the same time:

- up-to-date documentation online for the most recent release, and
- a matching snapshot of the documentation inside each release package, so the docs for older releases never disappear?
Thanks
Edit: I started a bounty to see if I can attract more answers. I think this is quite an important issue, and it would be nice to have multiple hints and opinions for future readers.
Upvotes: 1
Views: 645
Reputation: 2846
I think there are two problems to be solved here:

1. keeping each release's documentation versioned together with the source it documents, and
2. generating and publishing that documentation as a static site.

For 1, I think it's best to keep the documentation sources in SVN next to the code, so that tagging a release tags its documentation as well.
For 2, there are several tools that can generate a static site. One of them is Jekyll; it's written in Ruby and looks quite complete and customizable.
Assuming you use a tool like Jekyll and keep the documentation files and the source in SVN, you might set up your repo this way:
repo/
  tags/
    rel1.0/
      source/
      documentation/
    rel2.0/
      source/
      documentation/
    rel3.0/
      source/
      documentation/
  trunk/
    source/
    documentation/
That is: each release tag carries both the source and the documentation exactly as they stood at release time, while trunk holds the current development version of both.
So, to publish the documentation (point 2 above): export the documentation from the release tag, run the static-site generator on it, and upload the output to a release-specific directory on the web server, as sketched below.
The trick here is to always publish the documentation under a proper release identifier (in the URL and on the filesystem) and to use a link (or a redirect) to make sure that the "current documentation" URL on the web server points to the latest release.
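As an illustration, here is a minimal sketch of that cycle, assuming a Jekyll build; the repository URL, release number, and server paths are hypothetical:

    # Tag the release: source/ and documentation/ travel together
    svn copy http://svn.example.org/repo/trunk \
             http://svn.example.org/repo/tags/rel2.0 -m "tag release 2.0"

    # Export the tagged documentation and build it as a static site
    svn export http://svn.example.org/repo/tags/rel2.0/documentation /tmp/doc-rel2.0
    jekyll build --source /tmp/doc-rel2.0 --destination /var/www/docs/rel2.0

    # Keep the "current documentation" URL pointing at the newest release
    ln -sfn /var/www/docs/rel2.0 /var/www/docs/current

Older releases remain reachable at their own /docs/relX.Y/ URLs.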
Upvotes: 1
Reputation: 8582
wget can convert all the links in the document for you. See the --convert-links option:
http://www.gnu.org/software/wget/manual/html_node/Advanced-Usage.html
Using this in conjunction with the other methods could yield a solution.
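For example, a mirror of the documentation pages with links rewritten for local browsing might look like this (the URL and target directory are hypothetical):

    wget --mirror --convert-links --page-requisites --no-parent \
         --directory-prefix=release-docs \
         http://myproject.example.com/docs/

--page-requisites also pulls in the images and CSS the pages need, so the snapshot dropped into the release package is self-contained.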
Upvotes: 2
Reputation: 35149
For my own projects, if that were a requirement, I would create a subdirectory for the documentation and have all the files use relative links from that known base. For example:
index.html           -- refers to images/example.jpg
README
-- subdirectories:
images/example.jpg
section/index.html   -- links back to '../index.html',
                     -- refers to ../images/example.jpg
If the docs are included in the SVN/tarball download, they are readable as-is. If they are generated from source files, they would be pre-generated for the downloadable version.
Archived versions of the documentation can be unpacked/generated and placed into named directories (e.g. docs/v1.05/).
A simple PHP script can then list the subdirectories of the /docs/ directory on the local disk and display them as an index, highlighting the most recent, for example.
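For illustration, a minimal sketch of such a script, assuming the versioned docs sit in subdirectories like docs/v1.05/ (the path and naming convention are assumptions):

    <?php
    // Hypothetical sketch: build an index of the versioned doc
    // directories under docs/ and highlight the newest one.
    $base = __DIR__ . '/docs';
    $versions = array();
    foreach (scandir($base) as $entry) {
        if ($entry[0] !== '.' && is_dir("$base/$entry")) {
            $versions[] = $entry;
        }
    }
    natsort($versions);              // natural order: v1.2 before v1.10
    $latest = end($versions);
    echo "<ul>\n";
    foreach ($versions as $v) {
        $label = htmlspecialchars($v);
        $mark  = ($v === $latest) ? ' <strong>(latest)</strong>' : '';
        echo "  <li><a href=\"docs/$label/\">$label</a>$mark</li>\n";
    }
    echo "</ul>\n";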
Upvotes: 1
Reputation: 52201
I have seen some programs use Help & Manual, but I am a Mac user and have no experience with it, so I can't say whether it's any good. I'm looking for a solution for the Mac myself.
Upvotes: 1
Reputation: 54600
I would check your pages into SVN, and then have your web server update from its local SVN working copy when you're ready to release. Put everything into SVN: WordPress files, CSS, HTML, etc.
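For example, if the document root on the server is itself a working copy (the path is hypothetical), releasing the docs is a one-line update:

    # On the web server: pull the latest checked-in pages live
    cd /var/www/project-site && svn update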
Upvotes: 2