Reputation: 856
I have a customized Linux filesystem with some binaries and new folders in the root file system. This Linux filesystem was created for a single-board computer.
Currently, I compress the root folder of the customized filesystem into a tar.gz file, which I can share with my friends. They then have to extract this file to their SD card. With this method they can also update a library or a binary for testing. However, this mechanism (creating and deploying) takes a lot of time.
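For reference, my creation and extraction steps look roughly like this (the paths are placeholders for my actual setup):

    # Pack the customized root folder into a tar.gz
    tar -czf rootfs.tar.gz -C /path/to/rootfs .

    # On my friends' machines, with the SD card mounted at /mnt/sdcard:
    sudo tar -xzf rootfs.tar.gz -C /mnt/sdcard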
My questions are:
1. How can I improve creating and deploying the customized Linux image?
2. Looking at the Linux distributions, they use the .iso or .img format. What is the reason for using .iso or .img instead of a tar.gz or zip file?
Thanks.
Upvotes: 0
Views: 1137
Reputation: 980
Automate as much as you can. You could even set up a Jenkins server, which should be able to automate anything that looks like a repetitive job (compiling, compressing, uploading, or sending the image to friends).
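As a minimal sketch of such a job (all paths, the project location, and the destination host are assumptions), everything below could be one script that Jenkins runs on every commit:

    #!/bin/sh
    set -e  # abort on the first error

    ROOTFS=/path/to/rootfs                       # hypothetical rootfs location
    OUT=rootfs-$(date +%Y%m%d-%H%M).tar.gz

    make -C /path/to/project all                 # recompile the binaries
    cp /path/to/project/mytool "$ROOTFS/usr/local/bin/"
    tar -czf "$OUT" -C "$ROOTFS" .               # pack the rootfs
    scp "$OUT" friend@example.com:/incoming/     # ship it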
Compress your image using all your cores. AFAIK gzip only uses one core; pigz should compress/decompress faster.
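For example (assuming pigz is installed):

    # Let tar produce an uncompressed stream and compress it on all cores
    tar -cf - -C /path/to/rootfs . | pigz > rootfs.tar.gz

    # pigz can decompress the archive as well
    pigz -dc rootfs.tar.gz | tar -xf - -C /mnt/sdcard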
This is the real time saver, because you don't need to copy your rootfs to an SD card anymore: if your bootloader supports it, use NFS/TFTP boot. Your board then boots directly from a network mount instead of an SD card. I don't know which bootloader you use, but U-Boot supports it, and others probably do as well.
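As a rough sketch of what that looks like with U-Boot (the server IP, export path, and file names are assumptions for your setup):

    # On the server, export the rootfs over NFS (/etc/exports):
    /srv/nfs/rootfs 192.168.1.0/24(rw,no_root_squash,no_subtree_check)

    # At the U-Boot prompt, fetch the kernel over TFTP and mount the rootfs over NFS:
    setenv serverip 192.168.1.10
    setenv bootargs console=ttyS0 root=/dev/nfs rw nfsroot=192.168.1.10:/srv/nfs/rootfs,v3 ip=dhcp
    tftpboot ${kernel_addr_r} zImage
    tftpboot ${fdt_addr_r} board.dtb          # if your board needs a device tree
    bootz ${kernel_addr_r} - ${fdt_addr_r}

Any change you make in /srv/nfs/rootfs on the server is then visible on the board immediately, with no copying at all.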
Upvotes: 1
Reputation: 106
It depends on what exactly needs to be updated. If the entire filesystem is always changing, then just distribute the compressed image (compressed with xz, for example). If only a small part of the system changes, use the package manager instead: split that part into a package, build the package with the standard distro tools, and distribute only that.
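For example, on a Debian-based distro, packaging a single test binary could look like this (the package name and paths are made up):

    mkdir -p mytool_1.0-1/DEBIAN mytool_1.0-1/usr/local/bin
    cp mytool mytool_1.0-1/usr/local/bin/
    cat > mytool_1.0-1/DEBIAN/control <<EOF
    Package: mytool
    Version: 1.0-1
    Architecture: armhf
    Maintainer: you <you@example.com>
    Description: test binary for the board
    EOF
    dpkg-deb --build mytool_1.0-1      # produces mytool_1.0-1.deb
    # on the board: sudo dpkg -i mytool_1.0-1.deb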
So if your SD card image has a fixed size, you can also use rsync/xdelta to distribute only the changes, but their efficiency depends on the exact percentage of the image that changed.
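A sketch of both tools (file names and the remote host are placeholders):

    # rsync: transfer only the changed blocks of an existing image
    rsync -av --inplace rootfs.img friend@example.com:rootfs.img

    # xdelta3: create a small binary patch between two image versions...
    xdelta3 -e -s old.img new.img update.vcdiff
    # ...and apply it on the receiving side
    xdelta3 -d -s old.img update.vcdiff new.img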
Upvotes: 0