Reputation: 67163
Sphinx, the Python documentation generator, outputs a large number of HTML files. Each one has a head section with many JavaScript and CSS includes:
<link rel="stylesheet" href="../_static/sphinxdoc.css" type="text/css" />
<link rel="stylesheet" href="../_static/pygments.css" type="text/css" />
<script type="text/javascript" src="../_static/jquery.js"></script>
<script type="text/javascript" src="../_static/underscore.js"></script>
<script type="text/javascript" src="../_static/doctools.js"></script>
<script type="text/javascript" src="../_static/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
<link rel="stylesheet" type="text/css" href="../_static/custom.css" />
<link rel="stylesheet" type="text/css" href="../_static/colorbox/colorbox.css" />
<script type="text/javascript" src="../_static/colorbox/jquery.colorbox-min.js"></script>
Most of these are minified individually, but that is still suboptimal because each file requires a separate request to the web server when the client's cache is empty. Is there a tool like YUI Compressor or the Closure Compiler that will take HTML files as input, compress all of the individual externally-linked scripts, and rewrite the HTML to reference the result? This would be similar to what django_compressor does.
Upvotes: 10
Views: 1461
Reputation: 11541
You can try Springboard; I think it will suit your needs well.
Upvotes: 1
Reputation: 82
I agree with the above answer.
One other thing you can do: put your scripts at the end of the body instead of in the head. That may improve your page's loading speed.
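As a sketch of the resulting layout (include names taken from the question, trimmed for brevity): stylesheets stay in the head so the page renders styled, while scripts move just before the closing body tag so they don't block rendering.

```html
<head>
  <link rel="stylesheet" href="../_static/sphinxdoc.css" type="text/css" />
  <link rel="stylesheet" href="../_static/pygments.css" type="text/css" />
</head>
<body>
  <!-- page content here -->
  <script type="text/javascript" src="../_static/jquery.js"></script>
  <script type="text/javascript" src="../_static/doctools.js"></script>
</body>
```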
Upvotes: 1
Reputation: 7345
You are asking for two components: one that combines and minifies your resources, and another that rewrites the static HTML files to use the minified versions.
For the first component, I believe you could use this minify engine; it is designed to serve pages dynamically but you could either figure out how to hook into the code directly or save the output to static files (the URL allows you to specify multiple files).
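If you go the static-file route, combining the resources can be as simple as concatenating them before handing the result to a minifier. A minimal Python sketch (the file names mirror the question's _static/ directory but are illustrative; the demo runs in a throwaway directory so it works anywhere):

```python
# Naive bundler: concatenate already-minified JS files into one file.
from pathlib import Path
import tempfile

def bundle(sources, dest):
    """Concatenate each source file into dest, newline-separated
    (the newline guards against a missing trailing semicolon)."""
    parts = [Path(src).read_text() for src in sources]
    Path(dest).write_text("\n".join(parts) + "\n")

# Demo with stand-in files instead of the real _static/ directory.
static = Path(tempfile.mkdtemp())
(static / "jquery.js").write_text("var jq=1;")
(static / "doctools.js").write_text("var dt=2;")
bundle([static / "jquery.js", static / "doctools.js"], static / "bundle.js")
print((static / "bundle.js").read_text())
```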
For the second element, it should not be too difficult to parse the page as XML (assuming it's valid XHTML): find any <link> or <script> tags, store a copy of the document without those elements, compile the minified resources and add references to them just before the <head> node closes, then write out the rebuilt XHTML document. If this is too much, you might also be able to use a regular expression to find and replace the <link> and <script> tags; normally regular expressions can't reliably parse XML, but these tags should be fine because they won't be nested.
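If you take the regex route, here is a minimal Python sketch. The combined file names are placeholders, and the patterns assume Sphinx's tag formatting exactly as shown in the question; a real script would also need to fix up relative paths per page.

```python
import re

def rewrite(html, css_href, js_src):
    """Drop every stylesheet <link> and external <script> tag, then insert
    single combined references just before </head>. A regex is safe here
    only because these particular tags are never nested."""
    html = re.sub(r'<link rel="stylesheet"[^>]*/>\s*', '', html)
    html = re.sub(r'<script [^>]*src="[^"]*"[^>]*></script>\s*', '', html)
    combined = ('<link rel="stylesheet" href="%s" type="text/css" />\n'
                '<script type="text/javascript" src="%s"></script>\n'
                % (css_href, js_src))
    return html.replace('</head>', combined + '</head>')

sample = ('<head>\n'
          '<link rel="stylesheet" href="../_static/sphinxdoc.css" type="text/css" />\n'
          '<script type="text/javascript" src="../_static/jquery.js"></script>\n'
          '</head><body></body>')
print(rewrite(sample, '../_static/all.css', '../_static/all.js'))
```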
If you want to put together what I've described but need more help getting started, just ask.
Upvotes: -1