SaAtomic

Reputation: 749

Create a file of a specific size with random printable strings in bash

I want to create a file of a specific size containing only printable strings in bash.

My first thought was to use /dev/urandom:

dd if=/dev/urandom of=/tmp/file bs=1M count=100
  100+0 records in
  100+0 records out
  104857600 bytes (105 MB, 100 MiB) copied, 10,3641 s, 10,1 MB/s

file /tmp/file && du -h /tmp/file
  /tmp/file: data
  101M  /tmp/file

This leaves me with a file of the desired size, but it does not contain only printable strings.

Now, I can use strings to create a file only containing printable strings.

cat /tmp/file | strings > /tmp/file.txt
file /tmp/file.txt && du -h /tmp/file.txt 
  /tmp/file.txt: ASCII text
  7,0M  /tmp/file.txt

This leaves me with a file containing only printable strings, but with the wrong file size.

TL;DR

How can I create a file of a specific size, containing only printable strings, in bash?

Upvotes: 10

Views: 15292

Answers (8)

wryfi

Reputation: 564

Combining a couple of ideas from here, this one-liner works for me:

dd if=<(base64 < /dev/urandom) of=tmpfile bs=1K count=102400

For more control of the output (but slower performance) use tr:

dd if=<(tr -dc '[a-zA-Z0-9]' < /dev/urandom) of=tmpfile bs=2K count=51200

Note that these only appear to work with block sizes up to around 2K on macOS Monterey and ~5K on Debian Buster (I'm not sure why); larger block sizes result in smaller-than-expected files.
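The smaller-than-expected files come from dd counting short pipe reads as full blocks. On GNU dd this can usually be avoided with iflag=fullblock (a GNU extension, so it won't help on macOS); here is a sketch, written with a plain pipe instead of process substitution:

```shell
# GNU dd only: iflag=fullblock re-reads until each 1 MiB block is complete,
# so short reads from the pipe no longer shrink the output file.
base64 < /dev/urandom | dd of=tmpfile bs=1M count=8 iflag=fullblock 2>/dev/null

# The size now comes out exact: 8 * 1 MiB.
stat -c %s tmpfile   # 8388608
```

Without iflag=fullblock, each partial read from the pipe is still counted against count, which is why larger block sizes made the problem worse.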

Upvotes: 1

F. Hauri - Give Up GitHub

Reputation: 70722

Quick solution

Based on your request, using strings on urandom:

dd if=<(strings </dev/urandom) bs=4K count=25600 of=/tmp/file

... or maybe better (this avoids newlines and, by using tr, requires fewer randomly generated bytes):

dd bs=4K count=25600 if=/dev/urandom |
    tr \\000-\\037\\177-\\377 \\040-\\077\\040-\\077\\040-\\150 >/tmp/file

Demo:

On a standard 80 chars width terminal:

dd bs=$COLUMNS count=5 if=<(
  tr \\000-\\037\\177-\\377 \\040-\\077\\040-\\077\\040-\\150 </dev/urandom )
<h<&9.[,&> p)hMp8)s 8|S5&Q 1hD9:b7o"B$%hDc99@h8C!9uflMwu)hFZ($h:& Tl a,X1s?&29n(
h.7\)h`- X24Tq-9g6hvaVqh]E"/vRK30=.L-J&9*/ZFMz<@%h$;cN[&Xu4hJ ?1:-"II+SQD$\h;h$M
f0}7'7i"*m*d$CFAn/X%<c'] h;}?Oe4d?pFP<f+i0:ohh3dUC5m4_*F!d`#I,4)99*42hVh8A#a8 .6
/~.g3!Vd>h8h>6=h_`A:ha/8ZHVY{QIh4?/Mc]#&b&*h*t6V#=:j9$\-6#ERr8]-Y]U?*\h4+m37c841
8rh#?58;)4'X4Ghh4Z :h7h#!6hhh?"8\$$U/"@ek,N)?;MJ(>(uh\_^I41+080;h2!S#)04(Dhnh"%h
5+0 records in
5+0 records out
400 bytes copied, 0.0005598 s, 715 kB/s

Here are exactly 400 printable characters (without any newline).

Upvotes: 0

Hank Phung

Reputation: 2149

You can try:

 cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1 > filename.txt

where tr -dc 'a-zA-Z0-9' sets the character set, fold -w 32 sets the line length, and head -n 1 sets the number of lines.
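If you need an exact byte size rather than a fixed number of lines, a small variation (my adaptation, not part of the original answer) cuts the stream with head -c instead of fold and head -n:

```shell
# Same alphabet filter, but stop after exactly 1 MiB of characters.
tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 1048576 > filename.txt

wc -c < filename.txt   # 1048576
```

Note the result is one long line with no newline characters; pipe through fold before head if you want line breaks, at the cost of exact sizing.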

Upvotes: 0

user8017719

Reputation:

A conversion of @MarekNowaczyk's answer to plain bash:

#!/bin/bash
(( $# )) || { echo "Pass file size as initial parameter" >&2; exit 1; }
size=$1
mk_range(){ name=$1; shift; printf -v "$name" '%b' "$(printf '\\U%08x' "$@")"; }
add_chars(){ local var; mk_range var "$@"; chars+=$var; }
## comment out any of the following lines to drop that range.
add_chars {48..57}    # 0-9 numbers
add_chars {65..90}    # A-Z LETTERS
add_chars {97..122}   # a-z letters
add_chars {32,{33..47},{58..64},{91..96},{123..127}}     # other chars.
# convert the list of characters to an array of characters.
[[ $chars =~ ${chars//?/(.)} ]] && arr=("${BASH_REMATCH[@]:1}")
alphabet_len=${#arr[@]}
# loop to print random characters
for ((i=0; i<size; i++)); do
    idx=$((RANDOM % alphabet_len))
    printf '%s' "${arr[idx]}"
done
# Add a trailing newline.
echo

This code does not ensure that the resulting random distribution is uniform; it was written as an example. To guarantee a uniform distribution in the output, we would have to use careful arbitrary-precision arithmetic to change the base (to the count of output characters).
Also, RANDOM is not a CSPRNG.
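The non-uniformity of RANDOM % alphabet_len (RANDOM spans 0..32767, which is not a multiple of most alphabet sizes) has a standard mitigation, rejection sampling. A hedged sketch follows; the names limit and rand_idx are mine, not from the answer, and a 62-character alphabet is assumed:

```shell
# Rejection sampling: draw again whenever $RANDOM falls in the incomplete
# tail range above the largest multiple of alphabet_len, so every index
# 0..alphabet_len-1 is equally likely.
alphabet_len=62
limit=$(( 32768 / alphabet_len * alphabet_len ))   # 32736

rand_idx() {
    local r=$RANDOM
    while (( r >= limit )); do r=$RANDOM; done
    echo $(( r % alphabet_len ))
}

rand_idx   # prints a number in 0..61
```

This fixes uniformity only; as the answer notes, RANDOM is still not a CSPRNG.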

Upvotes: 2

Marek Nowaczyk

Reputation: 257

You can do it the awk way and customize the character set.

This solution is dedicated to Windows bash users (MINGW), because the default MINGW environment has no dd or random-generation tools.

random_readable.sh, a bash script that prints N random characters from a defined alphabet:

#!/bin/sh

if [ -z "$1" ]; then
    echo "Pass file size as initial parameter"
    exit
fi

SIZE=$1
seed=$( date +%s )

awk -v size="$SIZE" -v seed="$seed" '
# add characters from range (a .. b) to alphabet
function add_range(a,b){
    idx=a;
    while (idx <= b) {
        alphabet[idx] = sprintf("%c",idx)
        idx+=1
    }
}
BEGIN{
    srand(seed);  
    NUM=size;  
    idx=0;  

    # creating alphabet dictionary
    add_range(32,126)   # all printable
    ## uncomment following lines to random [a-zA-Z0-9<operators>]
    # add_range(48,57)    # numbers
    # add_range(65,90)    # LETTERS
    # add_range(97,122)   # letters
    # add_range(33,47)    # operators: !"# .. etc

    # alphabet to alphanums array
    idx=0
    for (k in alphabet){
        alphanums[idx]=alphabet[k]
        idx+=1
    }
    alphabet_len = idx
    i=0

    # and iterate to random some characters
    idx =0
    while (idx < NUM){                         
        dec =0
        char_idx=int(rand() * alphabet_len)
        char = alphanums[char_idx]
        printf("%s",alphanums[char_idx])
        idx+=1
    }  
}  
' 

Creating file:

sh random_readable.sh 100 > output.txt
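For a quick check of the approach, here is a condensed one-liner with the same logic (my sketch, assuming the full printable range 32..126); the awk BEGIN block prints exactly N characters and no trailing newline:

```shell
# Condensed equivalent: N random characters from the 95 printable ASCII chars.
n=100
awk -v size="$n" -v seed="$(date +%s)" 'BEGIN {
    srand(seed)
    for (i = 32; i <= 126; i++) alpha[i - 32] = sprintf("%c", i)
    for (i = 0; i < size; i++) printf "%s", alpha[int(rand() * 95)]
}' > output.txt

wc -c < output.txt   # 100
```

Since rand() returns a value in [0, 1), int(rand() * 95) always lands on a valid index 0..94.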

Upvotes: 1

user8017719

Reputation:

The correct way is to use a transformation like base64 to convert the random bytes to characters. That will not erase any of the randomness from the source, it will only convert it to some other form.
For a file of 1 MegaByte (actually a little bit bigger):

dd if=/dev/urandom bs=786438 count=1 | base64 > /tmp/file

The resulting file will contain characters in the range A–Za–z0–9 and +/=.

Below is the reason for the file to be a little bigger, and a solution.

You could add a filter to translate from that list to some other list (of the same size or less) with tr.

cat /tmp/file | tr 'A-Za-z0-9+/=' 'a-z0-9A-Z$%'

I have left the = out of the translation because, for a uniform random distribution, it is better to leave out the last characters, which will (almost) always be =.

Size

The size of the file will get expanded from the original size used from /dev/random in a factor of 4/3. That is because we are transforming 256 byte values into 64 different characters. That is done by taking 6 bits from the stream of bytes to encode each character. When 4 characters have been encoded (6*4=24 bits) only three bytes have been consumed (8*3=24).
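The 3-bytes-to-4-characters arithmetic can be sanity-checked directly. This sketch uses base64 -w0 (GNU coreutils) to disable line wrapping, since the wrap newlines would otherwise inflate the count:

```shell
read_size=786438
# 3 input bytes encode to 4 output characters: expected size = n * 4 / 3.
echo $(( read_size / 3 * 4 ))   # 1048584, i.e. 8 bytes over 1 MiB

# Confirm with a real encode (-w0: no line wrapping, GNU coreutils).
head -c "$read_size" /dev/urandom | base64 -w0 | wc -c   # 1048584
```
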

So, to get an exact result we need a count of bytes that is a multiple of 3, and a size that is a multiple of 4, because we will have to divide by that.
We cannot get a random file of exactly 1024 bytes (1K) or 1024*1024 = 1,048,576 bytes (1M), because neither is an exact multiple of 3. But we can produce a file a little bigger and truncate it (if such precision is needed):

wanted_size=$((1024*1024))
file_size=$(( ((wanted_size/12)+1)*12 ))
read_size=$((file_size*3/4))

echo "wanted=$wanted_size file=$file_size read=$read_size"

dd if=/dev/urandom bs=$read_size count=1 | base64 > /tmp/file

truncate -s "$wanted_size" /tmp/file 

The last step to truncate to the exact value is optional.

Randomness generation.

As you are going to extract so many random values, please do not use /dev/random (use /dev/urandom), or your app will block for a long time and leave the rest of the computer without randomness.

I'd recommend that you install the package haveged:

haveged uses HAVEGE (HArdware Volatile Entropy Gathering and Expansion) to maintain a 1M pool of random bytes used to fill /dev/random whenever the supply of random bits in /dev/random falls below the low water mark of the device.

If that is possible.

Upvotes: 20

hek2mgl

Reputation: 157927

What about this?

size=1048576 # 1MB
fname="strings.txt"

while read -r line ; do
    # Append strings to the file ...
    strings <<< "${line}" >> "${fname}"
    fsize="$(du -b "${fname}" | awk '{print $1}')"
    # ... until it is bigger than the desired size
    if [ ${fsize} -gt ${size} ] ; then
        # Now truncate the file to the desired size and exit the loop
        truncate -s "${size}" strings.txt
        break
    fi 
done < /dev/urandom

I admit that it is not very efficient. A faster attempt would be to use dd:

size=1048576
fname="strings.txt"

truncate -s0 "${fname}"

while true ; do
    dd if=/dev/urandom bs="${size}" count=1 | strings >> "${fname}"
    fsize="$(du -b "${fname}" | awk '{print $1}')"
    if [ ${fsize} -gt ${size} ] ; then
        truncate -s "${size}" strings.txt
        break
    fi
done
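A one-pass variation on the same idea (my sketch, not part of the answer) lets head cut the stream at the exact size, avoiding the grow-and-truncate loop entirely:

```shell
size=1048576
# strings keeps only printable runs; head -c stops at exactly $size bytes.
strings < /dev/urandom | head -c "${size}" > strings.txt

wc -c < strings.txt   # 1048576
```

The trade-off: the final string may be cut mid-run, and the last byte is not guaranteed to be a newline.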

Upvotes: 1

Farhad Farahi

Reputation: 39237

You can use one of the following:

  1. truncate: You should have a baseline text file with a size larger than what you need, then use the following:

    truncate -s 5M filename
    DESCRIPTION
       Shrink or extend the size of each FILE to the specified size
    
    [...]
    
     -s, --size=SIZE
          set or adjust the file size by SIZE
    

2. Use tail: this option requires a reference text file too.

tail -c 1MB reference_big.txt > 1mb.txt

Upvotes: 0
