Tuesday, September 27, 2011

System Activity Report (sar) and You

sar is an acronym for System Activity Report. It takes a snapshot of the system periodically. On most distributions it comes with the sysstat package. On Red Hat and derived distributions the package installs a set of cron jobs in /etc/cron.d/sysstat. There are two cron jobs to take note of. The first one runs every ten minutes as root and executes /usr/lib/sa/sa1 -S DISK 1 1. This script saves its output in data files written to /var/log/sa/sa[dd], where [dd] is the two-digit day of the month. (e.g. Today is 8/26/2011, so the file is /var/log/sa/sa26.)

The second cron job runs at 23:53 and summarizes the day's activity. The collected data is stored in binary format, so normal text tools are useless here; you read it back with sar itself.
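
For example, to look at what sa1 collected earlier today (the file name matches the date example above), run something like:
sar -f /var/log/sa/sa26
With no other flags this prints the CPU utilization samples from that day's file; the report flags covered below can be combined with -f in the same way.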

There are many flags to use with sar when running it interactively. Some of the flags require an additional argument. One example is -n, which requires an argument such as DEV, NFS, or IP. This specific flag has eighteen (18) potential arguments, not counting ALL.
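
For example, the network report needs one of those keywords spelled out (the interval and count here are just for illustration):
sar -n DEV 1 3
That samples the per-interface counters once a second, three times, using the DEV keyword covered further down.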

With sar, liberal use of the man page is highly suggested. Not only are the flags and their arguments documented, but the headers for each report and what they represent are explained as well. This comes in handy if you get overzealous with flags and aren't quite sure what you're looking at.

I'm only going to cover some of the most notable flags, what they show, and their headers. Be careful, however, as some flags exist in both upper and lower case and report vastly different metrics. One example of this is -b, which reports on I/O transfers, while -B reports on paging stats.

First off the plate is -b, which, as I've already stated, reports on I/O transfer stats and has the following headers:
    tps: Transfers per second to a physical device.
    rtps: Read transfers per second to a physical device.
    wtps: Write transfers per second to a physical device.
    bread/s: Blocks (since kernel 2.4 = sectors = 512 bytes) read from devices per second.
    bwrtn/s: Blocks written to devices per second.
Example output from a production server running: sar -b 1 1
  Linux 2.6.18-274.el5PAE (server.domain.com)    8/26/2011

  09:23:57 PM        tps      rtps      wtps   bread/s   bwrtn/s
  09:23:58 PM    7829.00    133.00   7696.00   1528.00  76960.00
  Average:       7829.00    133.00   7696.00   1528.00  76960.00

-B will report paging stats. Some metrics / headers are only available in kernels 2.5 and newer.
    pgpgin/s: Kilobytes paged in from disk per second.
    pgpgout/s: Kilobytes paged out from disk per second.
    majflt/s: Major faults per second (these require hitting the disk, which is a bad thing...)

-c Process creation stats.
    proc/s: Processes created per second.

-d Activity for each block device.
    tps: Transfers per second
    rd_sec/s: Sectors (512 bytes) read from block device per second.
    wr_sec/s: Sectors (512 bytes) written to block device per second.
    avgrq-sz: Average size (in sectors) of the requests issued to the device.
    avgqu-sz: Average queue length
    await: Average time in milliseconds for queue + servicing request.
    svctm: Average servicing time.
    %util: Percentage of CPU time during which I/O requests were issued to the device. Close to 100% = device saturation.
 
-n DEV Network interface stats.
    IFACE: Interface Name
    rxpck/s: Packets received per second.
    txpck/s: Packets transmitted per second.
    rxbyt/s: Bytes received per second.
    txbyt/s: Bytes sent per second.
    rxcmp/s: Compressed packets received per second.
    txcmp/s: Compressed packets sent per second.
    rxmcst/s: Multicast packets received per second.

-P ALL Per processor (or core) stats

-p Print pretty device names
    Shows block devices as sda instead of dev8-0. It has no effect on network device names.
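For instance, pairing -p with -d gives per-device I/O stats with readable names (the interval and count are just for illustration):
sar -d -p 1 3
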
-A same as: -bBcdqrRuvwWy -I SUM -I XALL -n ALL -P ALL

After you install the sysstat package you really need to let it run for a while and gather stats to see the real beauty of it all. However, you can run it interactively if required. When running sar interactively the syntax is sar -FLAGS interval count, where count is the number of samples to take (e.g. sar -b 2 60 reports I/O stats every two seconds, sixty times). This is very handy to run if you're troubleshooting a slow system or watching it under load.
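
The report flags also work against the saved data files mentioned earlier; for instance, to pull just the I/O numbers for one hour out of a day's file (the file name and times below are only examples):
sar -b -f /var/log/sa/sa26 -s 09:00:00 -e 10:00:00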

Monday, September 26, 2011

Changing the from e-mail address in Nagios

This is going to be another short one. I recently had a need to change the FROM address for e-mail from our Nagios installation. E-mail was coming from nagios@host.domain.com, which is non-routable from outside our network.

The change is very simple. Change the two command lines in your Nagios commands.cfg dealing with notify by e-mail. The command names are "notify-host-by-email" and "notify-service-by-email".

By default these lines read:
/usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n" | /bin/mail -s "** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **" $CONTACTEMAIL$

and

/usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$" | /bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$

To change the from address you append " -- -f nagios@domain.com", without the quotes of course. So the new lines look like:

/usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n" | /bin/mail -s "** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **" $CONTACTEMAIL$ -- -f nagios@domain.com

and

/usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$" | /bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$ -- -f nagios@domain.com

To explain it a little: what we appended was a space followed by two dashes, which tells mail to pass everything after them along to sendmail. Sendmail then sees -f followed by the e-mail address you want the message to come from.
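
If you want to sanity-check that behavior before touching Nagios, you can test it straight from the shell (the addresses here are just placeholders):
echo "test body" | /bin/mail -s "from address test" you@domain.com -- -f nagios@domain.com
The message that arrives should now show nagios@domain.com as the sender.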

On my system this is an alias for myself so that all replies come to my inbox.

All of this works on CentOS which is what I'm currently running.  It should work on other distributions as well, but I haven't the time to verify that.

Tuesday, September 20, 2011

Powershell Get-ChildItem Count 1 result

This is going to be a quick one. I'm working on a new Powershell script (my second) and I'm doing some sanity checks before I actually try to continue the workflow.

If you run the command below and it returns only one item, and you then try to print out $fileCount or test it with something like if ($fileCount -gt 0){do some stuff}, the "do some stuff" part won't happen.
$fileCount = $(get-childitem C:\ -filter *.zip).count

This is because when Get-ChildItem finds only one item it returns that single object instead of an array, and a single file object has no Count property, so $fileCount ends up empty rather than 1. To get around this you must force the result into an array. You can do this in the same line of code as shown below.

$fileCount = @(get-childitem C:\ -filter *.zip).count

The @() there is the array subexpression operator; it forces the result into an array, so even a single match gives you an array with one item in it and .count behaves as expected.


Monday, September 19, 2011

Yum 5.x Local repository and the new Continuous Release Repo

A few months back, possibly a year ago, I started running my own local mirror of the CentOS yum repositories. The main benefit is speed when updating all of my servers. I was also able to set up a local repo of tools that we use but that aren't available in the base CentOS repos. (I've since started adding the EPEL repository to servers that need it.) For those curious, I mirror the os, updates, and now the cr repos.

A few weeks ago the CentOS announce list and Twitter feed introduced yet another repo, the Continuous Release repo, or cr for short. This new repo contains security and bug fixes from the upstream 5.7 release, and I imagine it will continue with further releases (5.8 & 5.9). CentOS has also promised a 6.0 Continuous Release repo, but has yet to deliver on that.


First off create the directories that will become the repositories.
mkdir -pv /Storage/yumRepo/centos/{5,5.5,5.6,5.7}/{os,updates,cr}/i386
mkdir -pv /Storage/yumRepo/local/el5/i386


So without further ado, let's get into the guts of actually getting everything pulled down, synced, and a repository created.

Below is the script I use to keep my mirror in sync with the closest mirror that offers rsync:

      1 #!/bin/sh
      2
      3 rsync="rsync -avrt --bwlimit=796"
      4
      5 mirror=rsync://centos.mirrors.tds.net/CentOS
      6
      7 verlist="5 5.5 5.6 5.7"
      8 archlist="i386"
      9 baselist="os updates cr"
     10 local=/Storage/yumRepo/centos
     11
     12 for ver in $verlist
     13 do
     14  for arch in $archlist
     15  do
     16   for base in $baselist
     17   do
     18     remote=$mirror/$ver/$base/$arch/
     19     $rsync $remote $local/$ver/$base/$arch/
     20     createrepo -v --update $local/$ver/$base/$arch
     21   done
     22  done
     23 done
     24 hardlink /Storage/yumRepo/centos

Now, let's break this script down line by line. Line one is obvious; it tells the program loader what interpreter to use when running this file. Line three sets up the rsync command I'll be using. In my case I need to limit the bandwidth usage to something sane, even though I run it off peak hours. Line five sets the mirror I'm running against. Line seven sets the versions of CentOS I'm interested in. I really could trim this back to just 5 and 5.7 now, but I'm going to leave it in there for now. Line eight tells me I'm only going to sync the i386 architecture and not x86_64. I'm doing this because as of right now we only run 32-bit systems. Line nine gives me which repositories in the i386 tree I'm going to mirror. Line ten sets up where on the local server I'm going to store everything.

Lines twelve through twenty-three run rsync against every combination of version, repository, and architecture. In my case that works out to twelve different syncs. If I were also mirroring x86_64 it would double to twenty-four. Line nineteen is the actual rsync command, while line twenty creates the necessary repository metadata for yum to use.

Line twenty-four is rather new to this script and requires that you have the hardlink command installed (yum install hardlink). The hardlink command will run through every file in the directory specified and compare it to every other file in the directory. It is looking for identical contents with the same permissions, even if the filename differs. If it finds a match it will hardlink one file to the other, thus reducing the space required to store two copies. My only wish is that I had some hard stats for how much disk space this is saving me on the repository alone.
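
If you are curious about the savings yourself, a rough estimate is possible; here is a small sketch, assuming GNU du and find (the path matches the mirror above):
du -shl /Storage/yumRepo/centos    # size counting every hardlinked name separately
du -sh /Storage/yumRepo/centos     # size counting each file only once
find /Storage/yumRepo/centos -type f -links +1 | wc -l    # files linked to at least one other name
The difference between the two du totals is roughly what hardlink is saving.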


Now create the repository file for Yum.
#vi /etc/yum.repos.d/MyRepo.repo
This file contains:
[base]
name=CentOS-$releasever - Base
baseurl=http://repo.mydomain.com/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5
protect=1
enabled=1

[update]
name=CentOS-$releasever - Updates
baseurl=http://repo.mydomain.com/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5
protect=1
enabled=1

[CompanyName]
name=CentOS-$releasever - Local
baseurl=http://repo.mydomain.com/local/el$releasever/$basearch
enabled=1
gpgcheck=0
protect=0
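
Since the sync script above also mirrors the cr repo, I add a matching section. This is just a sketch assuming the same layout and GPG key as the other entries:

[cr]
name=CentOS-$releasever - CR
baseurl=http://repo.mydomain.com/centos/$releasever/cr/$basearch/
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5
protect=1
enabled=1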


We also need to tell Apache about these directories and how to serve them out over HTTP.
I changed the DocumentRoot to
DocumentRoot "/Storage/yumRepo"
then I changed the default <Directory> block to

<Directory "/Storage/yumRepo">
    Options +Indexes +FollowSymLinks
    AllowOverride All
    Allow from all
</Directory>


Restart apache.
#service httpd restart
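
A quick way to confirm Apache is serving the tree is to request one of the repodata files; the hostname and path here just follow the examples above:
#curl -I http://repo.mydomain.com/centos/5/os/i386/repodata/repomd.xml
A 200 OK response means clients should be able to reach the repository.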


Populate Company Local repo with custom RPMs.
On every server we have installed Webmin. In the past this was installed either by hand or via our install script, which had to be updated by hand with each new release. So I will move the latest RPM into our Company Local repo.
#mv /Storage/rpmDownloads/webmin-1.510.rpm /Storage/yumRepo/local/el5/i386/

You can move as many files in here as you want, including RPMs you've built from source. After you move in any custom RPMs you have to update the repository metadata:
#createrepo -v --update /Storage/yumRepo/local/el5/i386/
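
On a client with MyRepo.repo in place, a quick sanity check looks something like this (webmin is just the package we dropped in above):
#yum clean all
#yum repolist
#yum install webmin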

Tuesday, September 13, 2011

File Checksum Integrity Verifier utility or SHA1 / MD5 checksum utility for Windows

On Linux, if you need to verify that the file you just downloaded is untouched, you run either sha1sum or md5sum on the file to get its checksum and compare it against the published checksum. There are some utilities available for Windows, but today I'm going to point the spotlight at a Microsoft offering, File Checksum Integrity Verifier, or fciv for short. You can grab the download from Microsoft's site.
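
For comparison, the Linux side of that workflow is just a couple of commands (the file name here is only a placeholder):
sha1sum download.iso > download.iso.sha1
sha1sum -c download.iso.sha1
The second command re-hashes the file and compares it against the stored value.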

There are a few options available when you run the command, such as -md5 to get the MD5 checksum of a file or -sha1 to get the SHA-1 checksum. You can also use the -both flag to get both checksums of a given file. You can even create your own database of checksums to verify the files against later.

To generate the checksums and store them in the database (xml file) run:
>fciv -both -xml fcdatabase.xml filetocheck.txt

The command above will obviously store both checksums, and on a large file it could take a while to run, as it needs to process the file twice, once for the SHA-1 sum and once for the MD5 sum. You can of course use one or the other checksum algorithm if you prefer (I prefer SHA-1 for the time being).

To later verify a file against the database you run the command shown below:
>fciv -v -sha1 -xml fcdatabase.xml filetocheck.txt

Now if you make changes to filetocheck.txt and run the command below, verifying the file against the database will show that it has changed. If this were an executable it may be either corrupt or compromised. If it was the project you've been working on for weeks, now is a good time to restore from backup, if the change wasn't expected of course.

>fciv -v -xml fcdatabase.xml filetocheck.txt

You'll see something like:

c:\users\mymcp\fcdatabase.xml
        Hash is         : 48a5967227b85c6805f3210832a155da
        It should be    : 04518689efadfdf2393b533dc9c7c8b5

My only real complaint is that fciv doesn't show any sort of progress while processing a large directory. It would be nice to see file names flying by on the screen or something. All in all though, this is a tool worth checking out if you run Windows and don't have another tool already in your arsenal.

I would like to know if any of you have a tool in your arsenal already for this purpose and what it is / where I can find it. Sadly I don't recall any of the tools I've used in the past, but this one may become my tool of choice for this job.