Described below are two methods for quickly generating a given amount of fluff, or filler, text to insert into a file for testing purposes. The first is a one-line command that fills a file with random text very quickly (it requires base64 to be installed). The second is a script that fills a file with text the user specifies (with some limitations).
For the first method, let’s use this scenario: we need a 2MB file to test a text-scanning function in our program, and it does not matter what the text is. Running the following command creates this file for us and names it random.txt.
base64 /dev/random | head -c 2000000 > random.txt
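On some systems, reading from /dev/random can block while the kernel gathers entropy. A variant of the same idea (my own sketch, assuming GNU coreutils) reads from /dev/urandom, which never blocks, and uses head's size suffixes instead of spelling the byte count out:

```shell
# Variant sketch (assumes GNU head): /dev/urandom never blocks, and the
# "2MB" suffix means exactly 2,000,000 bytes, same as -c 2000000.
base64 /dev/urandom | head -c 2MB > random.txt
```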
For the second method, let’s use this scenario: we need to quickly fill a file with the string “abcdefghijklmnopqrstuvwxyz 123456890” (plus a trailing newline) repeated until the result is 2MB in size, in order to test a parsing algorithm in our program. The following script generates this file for us and names it result.txt. I have tested the script up to 20MB and it took only a few seconds; however, the larger the target size, the longer it takes to run (it is inefficient for really large files). At first glance it looks like a complicated script, but on a second look you’ll notice it is mostly user-defined variables and safeguard checks on the input.
#!/bin/sh

# Variables; 2000000 bytes = 2MB
max_size=2000000
filename=result.txt
tmpfilename=tmp.txt
string="abcdefghijklmnopqrstuvwxyz 123456890"

# File checks so we don't overwrite somebody's important files
if [ -e "$tmpfilename" ]; then
    echo "File: $tmpfilename exists!"
    echo "This script is going to use this as a temporary file, so either"
    echo "modify the tmpfilename variable in the script or remove the file."
    exit 1
fi

if [ -e "$filename" ]; then
    echo "File: $filename exists!"
    echo "Remove it and restart this script."
    exit 1
else
    # Seed the file with one newline-terminated copy of the string
    printf '%s\n' "$string" > "$filename"
fi

# Initialize the variable (stat -c %s prints the file size in bytes)
size=$(stat -c %s "$filename")

# Start the loop, doubling the size of the file until reaching max_size
while [ "$size" -lt "$max_size" ]; do
    cat "$filename" > "$tmpfilename"
    cat "$tmpfilename" >> "$filename"
    size=$(stat -c %s "$filename")
done

# Chop off any excess
head -c "$max_size" "$filename" > "$tmpfilename"
mv "$tmpfilename" "$filename"
I believe the first method is fantastic: it creates a file very quickly and it’s easy to understand. I can see room for improvement in the second method. If you have any suggestions, please feel free to share them with me! I would really love to hear them.
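One possible improvement, sketched here as a suggestion rather than a drop-in replacement: yes repeats its argument (newline-terminated) forever, and head -c cuts the stream at exactly the requested size, so the doubling loop, the temporary file, and the final truncation step all collapse into a single pipeline:

```shell
#!/bin/sh
# Sketch of a simpler generator: `yes` emits the string followed by a
# newline endlessly, and `head -c` stops the pipeline once max_size
# bytes have been written.
max_size=2000000
string="abcdefghijklmnopqrstuvwxyz 123456890"
yes "$string" | head -c "$max_size" > result.txt
```

Because this never rereads the growing file, the run time grows only with the output size, so it should stay fast even for much larger files.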