Chortle

Reputation: 171

C - Storing a large group of files as a single resource

Please forgive me if there is a glaringly obvious answer to this question; I haven't found it because I'm not entirely sure what I'm looking for. It may well be that this duplicates a question I haven't found; sorry.

I have a C executable that uses text, audio, video, icons and a variety of different file types. These files are stored locally; the folder structure is large and deep and would need to be installed alongside the application for it to operate correctly (not that I anticipate it being distributed; I'm just looking to package my own work for convenience).

In my opinion it would be more convenient if the file library were stored in a single file that remained accessible to the application, for example alongside /usr/bin/APPLICATION or in whatever location is most appropriate, and accessed by the executable when required.

I searched for similar questions and found suggestions pointing to two possible options: resource files, which appear to be native to Windows, and including files at compile time. The first question leads to an answer similar to the second, and doesn't address whether resource files exist for Linux executables; like the second, it looks at including the data file in the compilation process. This is not so useful, because if I only want to update my resources I'm forced to recompile the entire application (the media is added dynamically).

QUESTION: Is there a way to store a variety of file types in one single file accessible to an executable on Linux, and if so, how would you implement this?

My initial thought was to create a .zip or .gz file, which might also offer compression as an added bonus, but I have no idea how (or whether it is even possible) to access data within such a file on the fly. I'm equally uncertain whether there is a specific file type or library that offers a more suitable solution. Also, I know virtually nothing about .dat files; could these be used in this context on a Linux system?

Upvotes: 5

Views: 159

Answers (4)

Vineet1982

Reputation: 7918

I have a C executable that uses text, audio, video, icons and a variety of different file types. These files are stored locally; the folder structure is large and deep and would need to be installed alongside the application for it to operate correctly.

Considering the added complexity of the different file types and the large, deep folder structure that must be installed with the application, tracing changes in a single resource file would be difficult, or I would say nearly impossible, if you want to change resources dynamically. Adding resources to the executable file is certainly not an option, as it would increase the size of the executable and require frequent recompilation whenever a resource is updated.

After considering all aspects of your project, it seems to me the solution would be to use an INI file. The INI file would be stored at a fixed location, and the locations of the other resources would be given in it. With an INI file you can easily store the locations, hash keys, and sizes of your resources, and easily check for changes or update the resources.

Since you are using file types that are already compressed, general zip algorithms would not help much, as the compression ratio would be very low. I therefore recommend a 7z-style algorithm; of the various options I would suggest xz, as it is currently used by many open-source projects to compress binaries and reduce their size.

For each compressed file, its CRC-32 or hash value should also be included in the INI file to check the validity of the transferred data.

Upvotes: 1

3442

Reputation: 8576

You have several alternatives (TODO: add more ;)):

You can read some archiver file format specifications, write code to read from and write to those archives, and waste your time doing so.

You can invent a dirty, simple file format, for example ("dsa" stands for "Dirty and Simple Archiver"):

#include <stdint.h>

// Located at the beginning of the file    
struct DSAHeader {
    char            magic[3];            // Shall be (char[]) { 'D', 'S', 'A' }
    unsigned char   endianness;          // The rest of the file is translated according to this field. 0 means little-endian, 1 means big-endian.
    unsigned char   checksum[16];        // MD5 sum of the whole file (when calculating the checksum, this field is pseudo-filled with zeros).
    uint32_t        fileCount;
    uint32_t        stringTableOffset;   // A table containing the files' names.
};

// A dsaHeader.fileCount-sized array of DSANodeHeader follows the DSAHeader.
struct DSANodeHeader {
    unsigned char   type;              // 0 means directory, 1 means regular file.
    uint32_t        parentOffset;      // Pointer to the parent directory, or zero if the node is in the root.
    uint32_t        offset;            // The node's type-dependent header starts here.
    uint32_t        nodeSize;          // In bytes for files, and in number of entries for directories.
    uint32_t        dataOffset;        // The file's data starts at this offset for files, and a pointer to the first DSADirectoryEntryHeader for directories.
    uint32_t        filenameOffset;    // Relative to the string table.
};

typedef uint32_t    DSADirectoryEntryHeader;    // Offset to the entry's DSANodeHeader

The "string table" is a contiguous sequence of null-terminated character strings.

This format is very simple (and portable ;)). And, as a bonus, if you want (de)compression, you can use something like Zip, BZ2, or XZ to (de)compress your file (those programs/formats are archiver-agnostic, i.e., not dependent on tar, as commonly believed).

As a last (or first?) resort, you may use an existing library/API for manipulating archives and compressed file formats.

Edit: Added support for directories :).

Upvotes: 1

FractalSpace

Reputation: 5685

Let's say you have:

top-level-folder/
  |
   - your-linux-executable
   - icon-files-folder/
   - image-files-folder/
   - other-folders/
   - other-files

Do this (from the directory containing top-level-folder):

tar zcvf my-package.tgz top-level-folder

To expand, do this:

tar zxvf my-package.tgz

Upvotes: 0

Nominal Animal

Reputation: 39326

I do not understand why you would use a single file at all. Considering the added complexity (and increased chance of bugs creeping in) of file extraction and the associated overheads, I do not see how it would be "more convenient".

I have a C executable that uses text, audio, video, icons and a variety of different file types.

So do many other Linux applications. The normal approach, when using package management, is to put the architecture independent data (icons, audio, video, and so on) for application /usr/bin/YOURAPP in /usr/share/YOURAPP/, and architecture dependent data (like helper binaries) in /usr/lib/YOURAPP. It is extremely common for the latter two to be full directory trees, sometimes quite deep and wide.

For locally compiled stuff, it is common to put these in /usr/local/bin/YOURAPP, /usr/local/share/YOURAPP/, and /usr/local/lib/YOURAPP/ instead, just to avoid confusing the package manager. (If you check ./configure scripts or read Makefiles, this is the chief purpose of the PREFIX variable they support.)

It is also common for the /usr/bin/YOURAPP to be a simple shell script, setting environment variables, or checking for user-specific overrides (from $HOME/.YOURAPP/), ending up with exec /usr/lib/YOURAPP/YOURAPP.bin [parameters...], which replaces the shell with the actual binary executable without leaving the shell in memory.

As an example, /usr/share/octave/ on my machine contains a total of 138 directories (in a hierarchy of up to 7 directories deep) and 1463 files; about ten megabytes of "stuff" all told. LibreOffice, Eagle, Fritzing, and KiCAD take hundreds of megabytes there each, so Octave is not an extreme example in any way either.

Upvotes: 3
