Articles

Adventures in Red Hat Enterprise Linux, CentOS, Fedora, OpenBSD and other open source solutions.

Traffic Jam Graphs

Check out this graph: it represents the total length of all traffic jams in the Netherlands, in kilometers. I use it to determine the best time to leave home for the office and vice versa.

graph of current traffic jams


There are a few steps in creating these graphs, let's go over them one by one.

Step 1: Get the data

The first thing, of course, is to get (and store) the data. Think of a simple script that outputs some information, like this:

#!/bin/sh
date=$(date +'%s')
value=$(uptime | awk '{print $NF}')
echo "$date $value" >> alldata.txt

Step 2: Store the data frequently

Now, that was easy. The next thing is to add your script to the crontab, so it runs every 5 minutes or so. You can edit the crontab like this:

crontab -e

You will now get an editor on your screen, add this line to it:
*/5 * * * * /home/you/script.sh

Change the path and script name to reflect your situation.
You will notice that the file alldata.txt fills up with lines. Good, you'll need them.

Step 3: Create an RRD database

Here is where the magic begins. Please read the RRD manuals to get detailed information.

rrdtool create load.rrd DS:load:GAUGE:300:0:U RRA:AVERAGE:0.5:1:10800

What this means:

  1. rrdtool - The tool that is used to do things with rrd. Download it from the official RRD website.
  2. create - Make a new file used as a round robin database.
  3. load.rrd - Call that file load.rrd. Choose something normal.
  4. DS: - This is a Data Source; you can create more than one in a file.
  5. load: - Call this Data Source "load".
  6. GAUGE: - The values act like a gauge; they represent a current value, comparable to a speedometer.
  7. 300 - The interval for storing a value in this Data Source is 300 seconds (5 minutes). This can be modified and depends heavily on what information you want to capture.
  8. 0: - The minimum value is 0.
  9. U - The maximum value is Unknown. (Making the graph scalable.)
  10. RRA - This is a Round Robin Archive.
  11. 0.5: - The xfiles factor: the fraction of Primary Data Points that may be unknown before a consolidated value becomes unknown. 0.5 is a sane default.
  12. 1: - The number of Primary Data Points averaged into one stored value. (1: no averaging, just the one value.)
  13. 10800 - Store this number of samples (rows) in the Round Robin Archive. More samples result in a larger file. (But the file is fixed in size, whether it's full or empty.)

You are now prepared to start entering data into this rrd file. Please be aware that you can't enter "old" information into this rrd; old means data from before the rrd was created. If you have "old" data, you need to create your database with the --start option.
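As a sketch of that --start technique: the timestamps below are made-up examples, and normally alldata.txt already exists from step 2.

```shell
#!/bin/sh
# Create the RRD with --start so historic samples are accepted.
command -v rrdtool >/dev/null 2>&1 || { echo "rrdtool not installed"; exit 0; }
cd "$(mktemp -d)" || exit 1
printf '%s\n' '1170000000 0.42' '1170000300 0.40' > alldata.txt

# Start the database one second before the oldest sample:
start=$(head -n 1 alldata.txt | awk '{print $1 - 1}')
rrdtool create load.rrd --start "$start" \
  DS:load:GAUGE:300:0:U \
  RRA:AVERAGE:0.5:1:10800

# The "old" samples can now be entered:
while read -r date value ; do
  rrdtool update load.rrd "$date:$value"
done < alldata.txt
echo "database seeded up to $(rrdtool last load.rrd)"
```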

Step 4: Entering data into your rrd

Now, this is actually quite an easy step. Remember the script you created to store all the data? Let's dump that data into an rrd. (You can also do this without storing it in a file first.)

while read -r date value ; do
rrdtool update load.rrd "$date:$value"
done < alldata.txt

Step 5: Creating a graph

Finally some magic: creating a graph with rrdtool.

rrdtool graph load.png --vertical-label load DEF:load=load.rrd:load:AVERAGE LINE2:load#FF0000:'15 minutes load'

What do all these things mean?

  1. rrdtool - Again the program rrdtool.
  2. graph - Now make a graph.
  3. load.png - Call the file load.png
  4. --vertical-label load - Put "load" on the left side, vertically.
  5. DEF: - Define a variable
  6. load=load.rrd: - The variable is called "load", it comes from the load.rrd file.
  7. AVERAGE - The line represents an average of the measured values.
  8. LINE2: - Draw a 2-pixel line. (LINE1, AREA are other cool things to play with.)
  9. load#FF0000: - Draw the load variable in color #FF0000.
  10. '15 minutes load' - In the legend, call this color: "15 minutes load".

This step determines the looks of your graph, including the time scale (hour/day/week/etc.). To set the scale, you will need the --start and --end options. Without these options, the last 24 hours will be plotted.

To get the cool looks (a red line filled with an orange color), combine the LINE2 and AREA items, like this:

rrdtool graph load.png --vertical-label load DEF:load=load.rrd:load:AVERAGE AREA:load#FF6600: LINE2:load#FF0000:'15 minutes load'

You are done now. To tie this all together, you'll want to script these steps; use the crontab facility to initiate those scripts.
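A single cron-driven script covering all the steps might look like the sketch below; the file names, and using the load average as the measured value, are example choices, not fixed names:

```shell
#!/bin/sh
# Sketch: sample a value, store it in the RRD, and redraw the graph.
command -v rrdtool >/dev/null 2>&1 || { echo "rrdtool not installed"; exit 0; }
RRD="${RRD:-load.rrd}"
PNG="${PNG:-load.png}"

# Make sure a database exists (normally done once, in step 3):
[ -f "$RRD" ] || rrdtool create "$RRD" DS:load:GAUGE:300:0:U RRA:AVERAGE:0.5:1:10800

date=$(date +'%s')
value=$(uptime | awk '{print $NF}')
rrdtool update "$RRD" "$date:$value"

rrdtool graph "$PNG" --vertical-label load \
  DEF:load="$RRD":load:AVERAGE \
  AREA:load#FF6600: \
  LINE2:load#FF0000:'15 minutes load' > /dev/null
```

Run it from cron every 5 minutes, matching the step size of the Data Source.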

Will Linux make it?

I have been into Linux since Red Hat 5.2; it must have been 1999 or so. Back then, nothing really worked and everything needed to be configured. A few years later, say 2001 or 2002, many things had improved. From that year until now I keep hearing: "Linux will break into the desktop market this year." It does not seem to happen. Let's see if it could ever happen, and why it has not happened until now.

Could Linux make it to the desktop market?

Linux has lots of strong arguments to be a success on the desktop market; let's try to summarize them:
  • It's free! - This is a strong argument, or actually: there is no licensing hassle. My guess is that companies are willing to pay something, but maintaining the licenses is quite a lot of work. Besides that, there is quite some risk that unlicensed copies exist.
  • It's solid! - True, but is Microsoft Windows XP really that unstable? I have only seen a few blue screens, about as many as, or fewer than, crashes on Linux. But there is one super strong argument: using Linux, you'll be able to find out what happened and how to solve it.
  • It's complete - Not true. There is quite some software available for Linux, but not everything; especially the "Enterprise" or "Office" software is not as complete as what is available for Microsoft operating systems. Do Outlook, Access, Excel, Visio, etc. work under Linux? These are key elements in an office environment. I am not a Microsoft fan, but I would rate these pieces of software "good".

Why did Linux not make it to the desktop market?

There are so many distributions available! There is even a website dedicated to keeping track of all these distributions. There is no single Linux; there are a few hundred distributions available. This variety might be good for development purposes, but I figure enterprises are not waiting for this redundancy. Besides that, there are alternatives, well integrated into most companies, and replacing those alternatives is not an easy task. Don't get me wrong, I do like Linux, but not all the way. Please note that I dislike Microsoft more than Linux!

The undervalued program "Screen"

We all know the power of the terminal and also the power of the desktop manager. A terminal is great, but a single terminal is not much. In your graphical environment, you have the ability to see more screens: compare files, watch one thing, work on another.

You can do this in a single terminal as well.

Using GNU Screen

Let me get you up to speed with this excellent program, that you will want to use for the rest of your life!

Starting and Stopping

  • Starting screen - can be done by typing "screen". You will not notice anything different.
  • Stopping screen - can be done by typing "exit". (Advanced: when multiple screens are open, screen exits when the last screen is "exit"ed.)

Now, type screen, and see if you can still type. Not a lot of difference? Let's go further into screen.

Please be aware that the use of the word "screen" is a bit difficult to follow: it can mean GNU Screen (the command) or a screen session.

Basic commands

You trigger actions in screen with a key combination of Control and A; press these keys together to get the desired result. This will be written as [CTRL]+[a].
After the [CTRL]+[a], another key follows to complete the action. Here are those actions.

  • [CTRL]+[a] [c] - Create another shell in this screen. This can be used later on to switch from and to, or to split screens with.
  • [CTRL]+[a] [d] - Detach this screen. This combination detaches the current screen, and all its shells. You can pickup this screen later on with the command "screen -r". Commands run in the background, even if you disconnect from the machine! Output will be stored.
  • [CTRL]+[a] [x] - Lock your screen. Requires the password of the initiator of screen.
  • [CTRL]+[a] ["] - Select a shell from a list within your screen session.
  • [CTRL]+[a] [A] - Rename your shell to a custom name, like "my script" or "machine1".
  • [CTRL]+[a] [?] - Help, lots of information, hard to read.
  • [CTRL]+[a] [[] - Scroll your shell. (This is the [ key.) From now on you are able to scroll back into the history. Handy when you have collected some output overnight and want to read back.
  • [CTRL]+[a] [S] - Split your screen into two shells, one on top, one on the bottom. When you first split, an empty region appears, without a shell; use [CTRL]+[a] [c] to start a shell there.
  • [CTRL]+[a] [X] - Unsplit. Remove one split screen.

Well that should give you enough information to get going. I know it's hard to get started with screen, but it is an absolute power-tool! Learn to use it!
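The key bindings above are interactive, but screen can also be driven from the command line, which is handy in scripts; the session name "demo" below is just an example:

```shell
#!/bin/sh
# Sketch: start, inspect, and stop a detached screen session.
command -v screen >/dev/null 2>&1 || { echo "screen not installed"; exit 0; }
screen -dmS demo sleep 30    # start a detached session running a command
screen -ls | grep demo       # list sessions; "demo" should be in there
# screen -r demo             # reattach interactively (needs a terminal)
screen -S demo -X quit       # terminate the session from outside
```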

Comments in scripts

Have you ever been in the situation where you need to support/maintain/alter scripts that somebody else wrote? (Or it was you after all, but in a different state of mind. ;-) ) Sometimes it can be quite difficult to read what input comes from where, where output goes, what a specific routine does, etc.

It seems logical to document this behaviour.

But it's quite ugly to enter all this redundant information into your script.

Dilemma. Let me be the one to conclude: please comment throughout your script. It makes the script easier for others to read and maintain. Also, do not forget that your script could be an educational asset for a beginning scripter.
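As an illustration, a short header plus inline comments already answers most of those questions; the script below is a made-up example:

```shell
#!/bin/sh
# archive-dir.sh - example script: tar a directory into /tmp.
# Input:  $1 - the directory to archive (defaults to a demo directory
#              so this sketch runs stand-alone)
# Output: a dated tar file in /tmp; the path is echoed on stdout.

dir="${1:-$(mktemp -d)}"                   # what we were asked to archive
archive="/tmp/archive-$(date +%Y%m%d).tar" # where the result goes

# Fail early with a clear message instead of a cryptic tar error.
if [ ! -d "$dir" ] ; then
  echo "error: $dir is not a directory" >&2
  exit 1
fi

tar -cf "$archive" -C "$dir" .             # archive the directory contents
echo "$archive"
```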

Using #!/bin/sh or #!/bin/bash?

Did you ever wonder when or why you would use the Bourne shell (/bin/sh) or the Bourne Again shell (/bin/bash)? I do, and there are just a few differences. First of all, let's compare reasons to use the Bourne or the Bourne Again shell:

Bourne Shell (/bin/sh)                            Bourne Again Shell (/bin/bash)
Installed on all machines I have logged in to.    Installed on most machines I have logged in to.
Contains a lot of functionality.                  Contains more functionality.
Smaller (approximately 100K in total).            About 5 times larger (approximately 500K in total).
Always installed in /bin.                         Can also be installed in /usr/local/bin or /opt/bin.
Compiled statically (all libraries included).     Can be compiled with dynamic libraries.

Well, let's go into functional differences now, comparing the two shells. After reading them, I hope you are able to select your preferred shell.

Solution in /bin/sh                Solution in /bin/bash
variable=`echo "1+1" | bc`         variable=$((1+1))
not available                      arrow-up/arrow-down for historic commands
not available                      [CTRL]+[r] to search while typing historic commands

As you see, the differences are small. I always try to use /bin/sh, because it is more generic and installed on just about any machine. Sometimes, though, /bin/bash is the better choice.
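One concrete difference, sketched below: bash has arrays, while the Bourne shell only has whitespace-separated word lists. The bash part is run through bash explicitly, so the sketch also works when /bin/sh is not bash:

```shell
#!/bin/sh
# A bash-only construct (an array), run via bash:
second=$(bash -c 'colors=(red green blue); echo "${colors[1]}"')
echo "bash array, second element: $second"

# The portable Bourne-shell way to handle the same list:
set -- red green blue
echo "sh word list, second element: $2"
```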

Efficient variable positions, without sed, awk, cut, etc

Have you ever wondered if a nasty script like this one could be improved?

#!/bin/sh
variable="Hello World"
echo "$variable" | awk '{print $2}'

You can use this syntax to improve your code:

#!/bin/sh
variable="Hello World"
echo "${variable:6}"

What this does: print the variable from position 6 onward. If you want to limit the length, use this syntax:

#!/bin/sh
variable="Hello World"
echo "${variable:0:5}"

The output will be "Hello".

Some other great tricks:

  • ${#variable} - Print the length of $variable
  • ${variable/substring/replacement} - Replace the first match of substring with replacement
  • ${variable//substring/replacement} - Replace all matches of substring with replacement

(The first script is nasty because it uses external commands, which is slow, and those commands could be installed in a different location on another box...)
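A quick demonstration of those tricks; note that ${#variable} is POSIX, but the replacement forms are bash extensions, so they are run through bash here:

```shell
#!/bin/sh
variable="Hello World"
echo "length: ${#variable}"                          # 11
first=$(bash -c 'v="Hello World"; echo "${v/o/0}"')  # first match only
all=$(bash -c 'v="Hello World"; echo "${v//o/0}"')   # all matches
echo "first o replaced: $first"                      # Hell0 World
echo "all o's replaced: $all"                        # Hell0 W0rld
```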

When to use quotes around variables

When I write a shell script, I mostly put quotes (this character: ") around variables. But why? Here are two pieces of code, with and without quotes:

With quotes:

#!/bin/sh

variable="Hello world"

echo "$variable"

Without quotes:

#!/bin/sh

variable=Hello world

echo $variable

(Note that the unquoted version already fails at the assignment: the shell tries to run "world" as a command, with variable=Hello in its environment.)

Here are some reasons in favor of, and NOT in favor of, using quotes around variables:

In favor of using quotes around variables.

  • Comparing in an "if" statement can fail when not using quotes.

This code will fail. If quotes had been used around "$variable", it would have continued.

#!/bin/sh

variable="1 2"

if [ $variable = 1 ] ; then
echo "Variable is one."
fi
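For comparison, here is a sketch of the corrected version: with quotes, test receives "1 2" as a single argument, so the comparison is simply false instead of an error.

```shell
#!/bin/sh

variable="1 2"

if [ "$variable" = 1 ] ; then
echo "Variable is one."
else
echo "Variable is not one."
fi
```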

NOT in favor of using quotes around variables.

  • It's a lot of work to put quotes around all variables.

There is one exception where quotes cannot be used: in a for loop, the variable must be "naked", without quotes. This code works as expected because $numbers is used without quotes:

#!/bin/sh

numbers="1 2 3"

for number in $numbers ; do
echo "$number"
done

The unexpected effect of using quotes is that "1 2 3" would be treated as one word, so instead of the expected output:

1
2
3

The output would be:
1 2 3

My personal conclusion:
Use quotes around variables whenever possible; it is a good habit.

Functions in a shell script

Have you ever wondered what these constructions are in shell scripts?

doThis() {
commands
}

These are functions. Let's explain what functions are, why to use them, why not, etc.

I first started using functions in a shell script when there were multiple locations in a script, where the same set of commands was typed over and over again. To illustrate this a bit, it must have been a script like this:

#!/bin/bash

verbose=yes

if [ $verbose ] ; then
echo "Starting this script"
fi

commands

if [ $verbose ] ; then
echo "Finishing this script"
fi

Now, that is quite a clean piece of shell script, but it has a routine in it that is repeated almost verbatim (the "echo" part). There must be a way to do this more efficiently. There is: it's called "using functions". A function is a combination of commands under a single name. It is defined in the local shell (a shell script, for example) and can only be used there. So, to illustrate this a bit, here is the same script, but now with a function in it.

#!/bin/bash

verbose=yes

showoutput() {
if [ $verbose ] ; then
  echo "$@"
fi
}

showoutput "Starting this script"

commands

showoutput "Finishing this script"

The function is called "showoutput" here. It needs to be defined before it is used; normally you put the functions at the top of a script, and the calls to execute them later in the script.

Now, why is the use of functions an improvement? Because you only need to write the code once! Also, when there is an error in a function, you only need to fix that function, not every place in the script that uses it.

To trigger yourself to use functions, keep this rule in mind:
If you are writing a shell script and discover a piece of code that is used more than once, place it in a function and call that function.
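A minimal sketch of that rule in practice; the validation task and the function name are made up for illustration:

```shell
#!/bin/sh
# One check, written once, used three times.
is_number() {
  case "$1" in
    ''|*[!0-9]*) return 1 ;;   # empty, or contains a non-digit
    *)           return 0 ;;
  esac
}

for input in 42 hello 7x ; do
  if is_number "$input" ; then
    echo "$input is a number"
  else
    echo "$input is not a number"
  fi
done
```

A function can also take arguments and return a status, so it behaves like a small command of its own.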

There are some comparable alternatives to a function:

  • shell-script - You can also just write another, external shell script and call that. A good habit, but harder to maintain, as it lives in a different file. There are some functions, though, that you could use in many, many scripts. The solution could be to declare those functions in your shell initialization file (.bashrc, .profile, etc.) so that every child shell can make use of them.
    To print that pretty "OK" in green or "FAILED" in red, you could source the /etc/rc.d/init.d/functions file on Red Hat. An example piece of code:
    #!/bin/bash
    source /etc/rc.d/init.d/functions

    settitle() {
    echo -n $"Setting XTERM title"
    PROMPT_COMMAND='echo -ne "\033]0;${USER}@${HOSTNAME}: ${PWD}\007"' && success || failure
    echo
    }

    settitle
  • aliases - Aliases are also an option; you can define an alias like this:
    alias dfk="df -k"

    From now on, when you type dfk, "df -k" is executed. Cool, right? But aliases were not created to capture whole scripts/functions; although you could, most other UNIX/Linux users would not expect a function to be hidden in an alias. By the way, to check your defined aliases, type "alias" without any arguments.

I hope this article helped explain what a function is, and when and how to use one.

Reasons for using shell scripts

Shell script (bash, ksh, sh, tcsh, etc.) can be used to solve just about every situation, from simple tasks to complete solutions.
Using shell scripts has some benefits over Perl, Java, Python, and so on. Here are the reasons that seem valid.

  1. Everyone can read shell-scripts - If you are used to UNIX/Linux, you can already "program" using shell script. Imagine a simple script such as:
    rm `ls | grep myfile`

    It's not a full script, more a set of commands, a one-liner. Still, you were able to read it, that's what I mean!
  2. Compatible on all UNIX/Linux platforms - This is a bit more tricky. Most administrators will not agree on this one, but if your script was written to run on multiple platforms, it is easily transportable. For the OS-specific things, you could always implement something like this:
    case $(uname -s) in
    SunOS)
      echo "Run SunOS specific things."
    ;;
    Linux)
      echo "Run Linux specific things."
    ;;
    *)
      echo "Run other specific things."
    ;;
    esac
  3. Shell-scripts don't need compiling - Unlike C code, Perl, and Python (the last two do it better: they compile automatically when required), a shell script uses no compilation step. Compilation makes programs run faster, that is very true, but for an end user/administrator it is another step, and we are quite busy doing useful things. By the way, I cannot recall a situation where a shell script was just too slow.
    About this slowness: some performance can be gained by using correct, minimal syntax.
    less efficient:  ls | grep file
    more efficient:  ls file

    less efficient:  variable=`ls file` ; if [ "$variable" = value ] ; then echo yes ; fi
    more efficient:  if [ "`ls file`" = value ] ; then echo yes ; fi

    less efficient:  value=`echo "1+1" | bc`
    more efficient:  value=$((1+1))
  4. No binary & source versions, just source - In your SVN, CVS, or whatever repository, just keep the source, nothing else. Also, when making packages from your script, the script itself is enough.

US Daylight Savings Time (DST) 2007 Energy Act on Solaris

A change in America's Daylight Saving Time will soon (11 March 2007) be applied. From then on, the summer-time period will be longer, which is positive.

To check whether your Solaris 8 system is able to roll over to summer time, use this mini piece of code.

# Unset interactive if you are testing from a console;
# otherwise, use the non-interactive mode. (Interactive
# mode shows you the rollover by setting the timezone and
# changing the date to the critical moment: 11 March 2007, 02:00.)
#interactive="yes"

status=0
storeddate=$(date '+%m%d%H%M%Y.%S')
storedtz="$TZ"
export TZ="America/Adak"
if [ $interactive ] ; then
  echo "Setting time to 11 March 2007 (01:59)"
  echo "You should now see the clock roll over."
  date 031101592007.58 > /dev/null
  counter=3
  while [ $counter -gt 0 ] ; do
    sleep 1
    date
    counter=$(($counter - 1))
  done
  export TZ="$storedtz"
  date $storeddate > /dev/null
else
  date | grep 'HA.T' > /dev/null
  if [ $? -gt 0 ] ; then
    echo "Unable to switch timezone."
    status=$(($status + 8))
  fi
  export TZ="$storedtz"
fi

Good luck with the code!
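On a Linux machine, a comparable spot check is possible without touching the system clock; GNU date's -d option is assumed here, and the timezone and rollover moment are taken from the script above:

```shell
#!/bin/sh
# Print the UTC offset just before and just after the 2007-03-11
# 02:00 rollover in America/Adak (GNU date required for -d).
before=$(TZ="America/Adak" date -d '2007-03-11 01:59:58' '+%z')
after=$(TZ="America/Adak" date -d '2007-03-11 03:00:00' '+%z')
echo "before rollover: $before"   # standard time (UTC-10)
echo "after rollover:  $after"    # daylight time (UTC-9)
```

If the two offsets differ by one hour, the system's timezone data contains the new 2007 DST rules.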
