Archive for the ‘Scripting’ Category


Use CDPATH to Quickly Change Directories

You can create a shortcut to frequently accessed directories by adding them to the CDPATH environment variable.  So, say I frequently access /var/www/html/.  Instead of typing cd /var/www/html, I can add /var/www/ to CDPATH and then I only have to type cd html.

Open ~/.bashrc (or ~/.bash_profile) and add the following line with your frequently used directories separated with a colon (similar to PATH variable).

export CDPATH=$CDPATH:/var/www/

Here’s an example usage:

user@host:~> export CDPATH=$CDPATH:/var/www/
user@host:~> cd html
user@host:html>

There’s one caveat to using this that I’ve run into in the past: if you are working with Makefiles and building C/C++ apps, this can potentially confuse the Makefile script. So, if you suddenly can’t build your project after adding this variable, try removing it.
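If you want to try it out without touching your own setup, here’s a throwaway demo (the paths below are made up purely for illustration):

```shell
# Create a fake tree and add it to CDPATH. Listing "." first keeps
# directories under the current one winning over CDPATH entries.
mkdir -p /tmp/cdpath_demo/site_html
export CDPATH=.:/tmp/cdpath_demo

# From anywhere, this now jumps into the demo tree; when a CDPATH entry
# (other than ".") matches, cd prints the full resolved path.
cd site_html
pwd
```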


Self Documenting Scripts

As a “good” programmer I like to put comments at the top of my scripts to say what the script does and how it is used. I also like the script to output a useful help message when a user gets options/arguments wrong, or when they use the ‘-h’ option. I found it was a pain keeping the two in step, and developed this simple scheme so that this info lives in only one place.

I’ll use bash here but I’m sure people can adapt to other scripting languages.

## Usage: helpdemo 
## This demos a self documenting scheme for scripts.

prog="$0"
me=`basename "$prog"`

dohelp () {
  grep '^##' "$prog" | sed -e 's/^##//' -e "s/_PROG_/$me/" 1>&2
}

echo "Program name is: $me"
echo "Program file is: $prog"


Prefix any lines you want to be output as “help” by ‘##’ at the beginning of the line. All such lines are printed out to stderr by the dohelp function. ‘sed’ in this function also strips off the leading ‘##’ from the lines and substitutes the filename of the invoked script for ‘_PROG_’, so that if you change the name of the script, it still magically refers to the new name.

It’s a simple scheme, and can obviously be extended, e.g. one could change the dohelp function thus…

dohelp () {
  pfx="$1"
  if [ "$pfx" = "" ]; then pfx='##' ; fi
  grep "^$pfx" "$prog" | sed -e "s/^$pfx//" -e "s/_PROG_/$me/" 1>&2
}

dohelp can now be called with an argument to select out lines with a different prefix, but its default behaviour, when given no prefix, is as before.
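For completeness, here’s a sketch of how this might be wired into option handling — the ‘-h’ test below is my own example, not part of the original template:

```shell
#!/bin/sh
## Usage: _PROG_ [-h]
## Prints its own help text when asked.

prog="$0"
me=`basename "$prog"`

dohelp () {
  grep '^##' "$prog" | sed -e 's/^##//' -e "s/_PROG_/$me/" 1>&2
}

case "${1:-}" in
  -h|--help) dohelp ; exit 0 ;;
esac
```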

This has formed part of my standard shell script template for many years. Hope others find it useful.

Jim Jackson


Roll your own ‘lsof’ – sort of

Ever tried reading the ‘lsof’ utility man page? It’s a shame it’s such hard going, because the idea of the utility is wonderful – list open files and who is using them.

I had need to see what files were in use by imapd on my server. “lsof | grep imapd” gave me far more than I needed: long lines of info I didn’t really need all of, all the libraries in use, etc, etc. That’s when I thought, RTFM[1] … so I did, and eventually gave up. Maybe I was having a bad day, but the manual seemed impenetrable, so I turned to rolling my own.

I’d explored around the /proc file system a bit before, and was sure everything I needed was there. It turned out a lot easier than I’d thought. So here goes…

Under /proc, every process has a directory, with an entry called “cmdline” which gives access to that process’s command line arguments. There is a sub-directory called “fd” that contains links to each file the process has open. With this info I knocked together this little script, which I called ‘pof’:

#!/bin/bash
cd /proc
while [ "$1" != "" ]; do
  for n in [1-9]* ; do
    if [ "$n" != "self" -a -f $n/cmdline ] ; then
      if grep "$1" $n/cmdline > /dev/null ; then
        cmd=`tr '\0' ' ' < $n/cmdline`
        printf "%-8d %s\n" $n "$cmd"
        for m in $n/fd/* ; do
          printf " %-14s %s %s\n" `stat --printf="%N" $m`
        done
      fi
    fi
  done
  shift
done

It is called with a string or regular expression that you want to search each command line for. If it is found, the open file links are displayed for that process. So now ‘pof imapd’ gives me what I want.

The ‘cmdline’ file contains each argument of the command line as a null (‘\0’) terminated string, so I need to use ‘tr’ to convert the nulls to spaces before printing out.

On my system the imapd daemon is launched from inetd, so its first three files, stdin/stdout/stderr, are network sockets – so I need to dig a bit deeper and find out how to get the remote IP address of the socket.
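As a pointer for that digging: each socket fd under /proc/&lt;pid&gt;/fd is a link like “socket:[inode]”, and that inode can be looked up in /proc/net/tcp, where local and remote addresses appear as little-endian hex “IP:port” pairs. Here’s a small decoder sketch (the sample address below is made up):

```shell
# Decode a /proc/net/tcp style address such as 0100007F:0016
# (little-endian IPv4 in hex, then a hex port) into dotted-quad:port form.
decode_addr () {
  local hex=${1%%:*} port=${1##*:}
  printf '%d.%d.%d.%d:%d\n' \
    $((16#${hex:6:2})) $((16#${hex:4:2})) \
    $((16#${hex:2:2})) $((16#${hex:0:2})) \
    $((16#$port))
}

decode_addr 0100007F:0016   # -> 127.0.0.1:22
```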

Anyway I’m sure someone will tell me how easy ‘lsof’ is to use to get any info you want, but delving into /proc is fun!

Jim Jackson

[1] //


Script to Watch for Changes to Disk and Run Script Immediately

By utilizing inotifywait from the inotify-tools package, we can monitor changes to disk and immediately run a script or command on that file. No need for a cron job here…

To install inotify-tools, we simply use apt-get (or yum, depending on your distro):

sudo apt-get install inotify-tools

Here is one example script which utilizes inotifywait:


#!/bin/bash
# Directory to monitor -- set this to whatever you need
watch_dir="/home/user/watched_dir"

inotifywait -mr -e create -e delete -e modify --timefmt '%Y-%m-%dT%H%M%S' --format '%w;%f;%e;%T' "$watch_dir" | while IFS=';' read -r DIR FILE EVENT TIME; do

  echo "${EVENT} ${TIME}: ${DIR}${FILE}" >> /home/user/watched

  case "$EVENT" in
    CREATE*|MODIFY*)
      echo "Run create/modify script on file: ${DIR}${FILE}" >> /home/user/watched
      ;;
    DELETE*)
      echo "Run delete script on file: ${DIR}${FILE}" >> /home/user/watched
      ;;
  esac
done

Give your script a name (say, watcher.sh), executable rights, and run it in the background. For example:
chmod +x watcher.sh && ./watcher.sh &

Obviously, you’ll need to modify the script to fit your needs, but this should give you a jump start on your project.

Note: There’s an infinite loop bug in the script above, most likely because writes to the log file generate fresh ‘modify’ events if the log lives inside the watched directory. Remove the ‘modify’ option and this will clear things up. However, if you do that, then you’re really only looking for file creation or deletion. You could add more conditionals to the script above (or perhaps an inotifywait --exclude pattern for the log file) to prevent the loop. If anybody has a suggestion, I’d be happy to update the script.

Take this a step further by following my guide on Automatically Start a Script at Linux Bootup.


What Connections Have I Got

It is easy to use the netstat utility to list the active connections to your machine:

netstat -t

provides the information. But often it is good to know which process/program a connection belongs to. Again netstat obliges:

netstat -tp

However, if like me you use 80-column xterms (or even one of the Linux VTs in 80-column mode), then the long lines make the output hard to read. So, using the “cut” utility, we can trim the verbiage down to what is important:

netstat -tp | cut -c21-63,80-

I often like to use the “--numeric-hosts” option to netstat too.

Another use of netstat is to show those programs that are listening for connections. This can help check if services have crashed, or whether you have unwelcome “services” running on your machine.

netstat -tpl

Again “cut” can be used to trim the fat and keep the output manageable

netstat -tpl | cut -c21-36,56-63,80-
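Incidentally, the cut column positions above are tuned to a particular netstat version and terminal width. An awk-based pick of whitespace-separated fields is less fragile — a sketch, assuming the usual -tpl column order where local address, state, and PID/program are fields 4, 6 and 7:

```shell
# Skip the two header lines, then print local address, state, PID/program
netstat -tpl | awk 'NR > 2 { print $4, $6, $7 }'
```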

It is worth giving the netstat man page a look; it is a very useful utility.


Python Script to Recursively Search For and Generate Thumbnails for Video Files

This script will recursively search for files based on listed extension names and then use the ‘ffmpegthumbnailer’ program to generate a thumbnail for each file it finds.

If you’re on an Ubuntu system, you should be able to install it with apt-get install ffmpegthumbnailer.


import os

for root, dirnames, filenames in os.walk('/var/www/media/'):
  for filename in filenames:
    if filename.lower().endswith(('.m4v', '.mov', '.mpeg', '.mp4')):
      ifile = os.path.join(root, filename)
      ofile = os.path.splitext(ifile)[0] + ".jpg"
      try:
        # If this succeeds, a thumbnail already exists -- skip the file
        with open(ofile) as f:
          pass
      except IOError as e:
        print "Generating thumbnail for: " + ifile

        fftoptions = "-s0 -f"
        command = "ffmpegthumbnailer -i %s -o %s %s" % (ifile, ofile, fftoptions)

        p = os.popen(command, "r")
        while 1:
          line = p.readline()
          if not line: break
          print line
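One caveat with os.popen here: the command is built as a single string, so filenames containing spaces or quotes will break it. A variant using subprocess with an argument list avoids shell quoting entirely — a sketch, with the option spelling matching the ffmpegthumbnailer flags used above:

```python
import subprocess

def thumb_cmd(ifile, ofile):
    # Argument-list form: no shell is involved, so odd filenames are safe
    return ["ffmpegthumbnailer", "-i", ifile, "-o", ofile, "-s", "0", "-f"]

def make_thumb(ifile, ofile):
    # Returns ffmpegthumbnailer's exit status
    return subprocess.call(thumb_cmd(ifile, ofile))
```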

I had a specific purpose for this script, but you could easily modify it to fit your needs. When you save the file, don’t forget to mark it as executable with chmod +x filename.


Script to Calculate Number of Hours from Minutes

The following is a quick script that will convert minutes to hours and minutes as well as total hours in decimal form (1 hour, 30 minutes is 1.5 hours). Simply save the contents of the script below to a file, execute chmod +x filename on that file, and then run it with ./filename. Feel free to modify it to fit your needs.



#!/bin/bash

# Number of minutes to convert
echo -n "Enter number of minutes: "
read minutes

hrs=`echo "$minutes / 60" | bc`
min=`echo "$minutes % 60" | bc`
hours=$hrs
if [ $min -gt 0 ]; then
   if [ $min -le 2 ]; then hours=`echo "$hrs + .0" | bc`; fi
   if [ 3 -le $min -a $min -le 8 ]; then hours=`echo "$hrs + .1" | bc`; fi
   if [ 9 -le $min -a $min -le 14 ]; then hours=`echo "$hrs + .2" | bc`; fi
   if [ 15 -le $min -a $min -le 20 ]; then hours=`echo "$hrs + .3" | bc`; fi
   if [ 21 -le $min -a $min -le 26 ]; then hours=`echo "$hrs + .4" | bc`; fi
   if [ 27 -le $min -a $min -le 32 ]; then hours=`echo "$hrs + .5" | bc`; fi
   if [ 33 -le $min -a $min -le 38 ]; then hours=`echo "$hrs + .6" | bc`; fi
   if [ 39 -le $min -a $min -le 44 ]; then hours=`echo "$hrs + .7" | bc`; fi
   if [ 45 -le $min -a $min -le 50 ]; then hours=`echo "$hrs + .8" | bc`; fi
   if [ 51 -le $min -a $min -le 56 ]; then hours=`echo "$hrs + .9" | bc`; fi
   if [ 57 -le $min -a $min -le 60 ]; then hours=`echo "$hrs + 1.0" | bc`; fi
fi

echo "Minutes Entered: $minutes"
echo "$hrs Hours, $min Minutes ($hours Hours)"

Please let us know if you think you have a better solution or have a suggestion by using the commenting system below.


Simple Stopwatch Script

The following is a short and plain shell script that starts a count-up timer when you run it. I think you could argue that it’s not a stopwatch because it doesn’t support laps, but it’s close enough for me. You can easily get started by copying the following code block into a text editor, saving it (as, say,, running chmod +x on it to make it executable, and finally starting it with ./ To stop it, hit Ctrl+c.


#!/bin/bash

BEGIN=$(date +%s)

echo Starting Stopwatch...

while true; do
   NOW=$(date +%s)
   let DIFF=$(($NOW - $BEGIN))
   let SECS=$(($DIFF % 60))
   let MINS=$((($DIFF / 60) % 60))
   let HOURS=$((($DIFF / 3600) % 24))
   let DAYS=$(($DIFF / 86400))

   # \r  is a "carriage return" - returns cursor to start of line
   printf "\r%3d Days, %02d:%02d:%02d" $DAYS $HOURS $MINS $SECS
   sleep 0.25
done

The previous script will allow you to track from the time you say “go” until you stop it. It’s also easy on the real estate of your terminal: the carriage return rewrites the same line on each update instead of scrolling new ones.

If you’re not worried about starting it “now” or real estate in the terminal, you could always use uptime and throw it into a while loop like this:


while true; do uptime | cut -d' ' -f2; done

Both are simple, both have their own advantages and disadvantages. Choose wisely. 😉

Source: forums

I used the scripts to help me figure out how long it was taking my desktop to lock up so I could troubleshoot it better. Another use might be to keep track of how much time you’re spending on the computer vs the amount of time spent skiing! Have fun either way.

A subscriber, Jim, came up with a much better stopwatch script than my thrown together example. Jim sent it via email, but I’ll post it here for all to see.

It doesn’t start counting till you press the spacebar; pressing the spacebar again pauses the counting, until the spacebar is pressed once more to continue.

Press ‘q’ to quit
Press ‘r’ to reset to zero


#!/bin/bash

# set stdin to no echo and deliver a char every tenth of a sec.
stty -echo -icanon time 1 <&0

chkspace () {
  if ! read -t 0 ; then return 1 ; fi   # no char pressed
  read -n 1 ans
  if [ "$ans" = " " ]; then return 0 ; fi
  case "$ans" in
    r|R) COUNT=0 ; BEGIN=$(date +%s)
         printf "\r%3d Days, %02d:%02d:%02d" 0 0 0 0
         ;;
    q|Q) stty echo icanon <&0
         echo ""
         exit 0
         ;;
    [1-9]) echo " - $ans" ;;
  esac
  return 1
}

echo "Stopwatch: to start and stop press the SPACEBAR..."
printf "\r%3d Days, %02d:%02d:%02d" 0 0 0 0
COUNT=0
IFS=
while true ; do
  while true; do
    if chkspace ; then break; fi
    sleep 0.1
  done
  BEGIN=$(date +%s)
  while true; do
    NOW=$(date +%s)
    let DIFF=$(($NOW - $BEGIN + $COUNT))
    let SECS=$(($DIFF % 60))
    let MINS=$((($DIFF / 60) % 60))
    let HOURS=$((($DIFF / 3600) % 24))
    let DAYS=$(($DIFF / 86400))

    # \r is a "carriage return" - returns cursor to start of line
    printf "\r%3d Days, %02d:%02d:%02d" $DAYS $HOURS $MINS $SECS
    if chkspace ; then break; fi
    sleep 0.1
  done
  COUNT=$DIFF
done


Extract Extension Name from Filename in Bash Shell

The following commands will extract the file extension string from a given filename. The only trick to these commands is that they give you the final extension after the last ‘.’. In other words, they won’t do anything sensible for extensionless files, and for two-dot extension names (like file.tar.gz or similar) they return only the last part.

for i in *; do echo $i | sed -e 's/.*[.]\(.*\)/\1/'; done
for i in *; do echo $i | awk -F. '{ print $NF }'; done

This command will grab the first dot extension even if there are two (it will return tar for file.tar.gz).

for i in *; do echo $i | cut -d'.' -f2; done
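For what it’s worth, bash can do all of these with parameter expansion alone, spawning no external processes:

```shell
f="file.tar.gz"
echo "${f##*.}"   # last extension:              gz
echo "${f#*.}"    # everything after first dot:  tar.gz
echo "${f%%.*}"   # name with all extensions cut: file
```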

I’d love to hear your suggestions in the comments if you have a better, more optimized way of doing this.


Python Script to Grab All CSS for Given URL(s)

While at work, I needed a script that, given a URL (or several), would grab all the CSS elements a webpage was using, both internal and external, and concatenate them into a single file. I came up with the following Python script. There’s a very minimal amount of pre-setup work in order to run it, though. First, you must have Python installed. I’m going to assume that 1) you do, and 2) you know how to use it. The next step is to download and install the BeautifulSoup package. This is how I accomplished this on my Ubuntu box (it could vary depending on your distribution):

sudo apt-get install curl
curl -O
sudo python
sudo easy_install beautifulsoup

After that, you should be good to go. Copy / paste the following into a Python script and then run it with python.

# -*- coding: utf-8 -*-
import urllib2
from urlparse import urlparse
from BeautifulSoup import BeautifulSoup

def fetch_css( url ):
    try:
        response = urllib2.urlopen(url)
        html_data = response.read()

        soup = BeautifulSoup(''.join(html_data))

        # Find all external style sheet references
        ext_styles = soup.findAll('link', rel="stylesheet")

        # Find all internal styles
        int_styles = soup.findAll('style', type="text/css")

        # TODO: Find styles defined inline?
        # Might not be useful... which <p style> is which?

        # Loop through all the found int styles, extract style text, store in text
        # first, check to see if there are any results within int_styles.
        int_css_data = ''
        int_found = 1
        if len(int_styles) != 0:
            for i in int_styles:
                print "Found an internal stylesheet"
                int_css_data += i.find(text=True)
        else:
            int_found = 0
            print "No internal stylesheets found"

        # Loop through all the found ext stylesheets, extract the relative URL,
        # append the base URL, and fetch all content in that URL
        # first, check to see if there are any results within ext_styles.
        ext_css_data = ''
        ext_found = 1
        if len(ext_styles) != 0:
            for i in ext_styles:
                # Check to see if the href to the css style is absolute or relative
                o = urlparse(i['href'])
                if o.scheme == "":
                    css_url = url + '/' + i['href']  # added "/" just in case
                    print "Found external stylesheet: " + css_url
                else:
                    css_url = i['href']
                    print "Found external stylesheet: " + css_url

                response = urllib2.urlopen(css_url)
                ext_css_data += response.read()
        else:
            ext_found = 0
            print "No external stylesheets found"

        # Combine all internal and external styles into one stylesheet (must convert
        # string to unicode and ignore errors!)
        # FIXME: Having problems picking up JP characters:
        #    html[lang="ja-JP"] select{font-family:"Hiragino Kaku Gothic Pro", "ããè´ Pro W3"
        # I already tried ext_css_data.encode('utf-8'), but this didn't work
        all_css_data = int_css_data + unicode(ext_css_data, errors='ignore')

        return all_css_data, int_found, ext_found
    except Exception:
        # Anything went wrong fetching or parsing
        return "", 0, 0

# Specify URL(s) here
urls = {
    'jaresfencing': "",
    'derekhildreth': "",
    'thelinuxdaily': "",
    'myurl1': ""
}

for k, v in urls.items():
    print "\nFetching: " + v
    print "--------------------------------------------------------------------------------"
    out, int_found, ext_found = fetch_css(v)
    if ext_found == 1 or int_found == 1:
        filename = k + '_css.out'
        f = open( filename, 'w')
        f.write(out.encode('utf-8'))
        f.close()
        print "Styles successfully written to: " + filename + "\n"
    elif out == "":
        print "Error: URL not found!"
    else:
        print "No styles found for " + v + "\n"

# MUST INSTALL CSSUTILS with 'sudo easy_install cssutils'
#import cssutils
#sheet = cssutils.parseString(all_css_data)
#f2 = open('temp2', 'w')

Seems to work for me! I could see potential for many improvements, but it’s pretty robust as it is. Enjoy.
