Thursday, December 24, 2009

Locking Down Mozilla Firefox

One of the main drawbacks to Mozilla Firefox, in the eyes of most corporate IT people, is that it cannot be locked down or managed through Group Policy. While you cannot lock down Firefox with Group Policy alone, you can script these fixes into place to lock settings down.

To start you will need a program to byte-shift a config file into a format Firefox can read. I used Byte Shifter.exe. There are also websites that do it all in a browser, but I haven't used them.

You will also need to edit the all.js file in "C:\Program Files\Mozilla Firefox\greprefs\" to include:
pref("general.config.filename", "mozilla.cfg");
I put that line at the very bottom, but it may not matter where it goes.

To start, create an empty file. I called mine mozilla.txt, since the resulting file will be mozilla.cfg.
The file must start with // on its own line.
Add in any settings you want to lock down. You can peruse about:config for settings, and browse it again after locking them down to confirm the "locked" status.

Below is my mozilla.txt file, with host names changed to protect the innocent. I have included comments (they start with //) to explain things a bit further.

//Lock the option for startup page. 0 = "Show a blank page", 1 = "Show my home page", 3 = "Show my windows and tabs from last time"
lockPref("", 1);
//Set the home page. Use a pipe to include many home pages as tabs.
lockPref("browser.startup.homepage", "|");
//Set the browser history to something a bit longer than the default 7 days.
lockPref("browser.history_expire_days", 90);
lockPref("browser.history_expire_days.mirror", 90);
// Clean up certain things every time Firefox shuts down. This keeps things clean and running smoothly for us; your results may vary.
lockPref("privacy.sanitize.sanitizeOnShutdown", true);
//We do not want to clear the history on shutdown.
lockPref("privacy.clearOnShutdown.history", false);
lockPref("privacy.item.history", false);
//We can clean up the downloads history. I have seen things get really slow if this doesn't happen.
lockPref("privacy.item.downloads", true);
//Clear the cache.
lockPref("privacy.item.cache", true);
//Clean up cookies.
lockPref("privacy.item.cookies", true);
//Remove any session info.
lockPref("privacy.item.sessions", true);
//We do not want to keep passwords saved.
lockPref("privacy.item.passwords", true);
//Do not prompt to do this, just do it.
lockPref("privacy.sanitize.promptOnSanitize", false);

lockPref("signon.rememberSignons", true);
//Do not allow the "Show passwords" button.
lockPref("pref.privacy.disable_button.view_passwords", true);
//Don't use a proxy.
lockPref("network.proxy.type", 0);
//We keep one version of Firefox for a while. The newest version breaks things in our application, so we currently need to just run what we have.
lockPref("app.update.enabled", false);
//Disable extensions.
lockPref("config.lockdown.disable_extensions", true);
//Disable themes.
lockPref("config.lockdown.disable_themes", true);
//Show the downloads window when downloading a file.
lockPref("", false);
//Close the downloads window when all downloads are done.
lockPref("", true);
//Save files to:
lockPref("", true);
lockPref("", "c:\\%homepath%\\Desktop");
lockPref("", "c:\\%homepath%\\Desktop");
lockPref("", 2);
//Always ask me where to save files.
lockPref("", false);
//Always check to see if Firefox is the default browser.
lockPref("", false);
//New pages should open in a new window.
lockPref("", 2);
lockPref("", 2);
//New pages should open in a new tab.
lockPref("", 1);
lockPref("", 1);
//Warn me when closing multiple tabs.
lockPref("browser.tabs.warnOnClose", false);
//Warn me when opening multiple tabs might slow down Firefox.
lockPref("browser.tabs.warnOnOpen", false);
//Always show the tab bar.
lockPref("browser.tabs.autoHide", false);
//When I open a link in a new tab, switch to it immediately.
lockPref("browser.tabs.loadInBackground", false);
//Block pop-up windows.
lockPref("dom.disable_open_during_load", false);
//Load images automatically. 1 = checked, 2 = unchecked.
lockPref("permissions.default.image", 2);
//Enable JavaScript.
lockPref("javascript.enabled", true);
//Some of the advanced JavaScript options.
//Disable the Advanced Button.
lockPref("pref.advanced.javascript.disable_button.advanced", true);
//Move or resize existing windows.
lockPref("dom.disable_window_move_resize", true);
//Raise or lower windows.
lockPref("dom.disable_windows_flip", false);
//Disable or replace context menus.
lockPref("dom.event.contextmenu.enabled", false);
//Hide the status bar.
lockPref("dom.disable_window_open_feature.status", false);
//Enable Java.
lockPref("security.enable_java", false);

Of course there are many more settings, but that covers a good number of them. Also, I have only tested this against the version of Firefox that we run.

Monday, October 19, 2009

Nagios: check_http, using the --invert-regex option

Sometimes you want to check that something is running or working correctly, and you work out tests for that. Other times you want to know when something is broken and throwing error messages. This post is about the latter: a proper HTTP 200 code is great and all, but what if the page is showing "Too Many Connections" instead of your home page? My old check_http command for this server used to look like, well, check_http. I didn't check anything specific, just that it was returning a 200 code.

Today, however, I knew I needed something more in depth. Our database server lost its local network connection, but was still available over the public IP, which is what I test against. Once we redirected the SQL requests to the public IP address of the server everything started working again, until we ran into "Too Many Connections". The database server had kept all of the "local" connections open, and thus we ate up the rest.

So, how to test for this scenario? After reading through the man page for check_http I saw this little gem: "--invert-regex Return CRITICAL if found, OK if not". This, I knew, was exactly what I was looking for! If it sees our error strings it will go Critical! Now to put this gem into practice. Here is where the man page falls short: there is no explanation of HOW to use it, just that it exists. I tried what seemed obvious to me, "check_http -H -w 3 -c 5 --invert-regex 'Some string'", but that didn't work. OK, let's try "check_http -H -w 3 -c 5 --invert-regex='Some string'". Nope, that errored out with "option `--invert-regex' doesn't allow an argument".

Third time's the charm, right?
"check_http -H -w 3 -c 5 -r 'Some string' --invert-regex"
# HTTP OK HTTP/1.1 200 OK - 0.355 second response time |time=0.354966s;3.000000;5.000000;0.000000 size=12975B;;;0

Yes, as it turns out, the third time is the charm. That got me thinking some more: how can I ensure that the page is rendering correctly, and if it isn't, fail in a specific way?

"check_http -H -w 3 -c 5 -r 'Some string I want in my page' -r 'Some string I don't want to see' --invert-regex '"

You can add more than one -r to the check_http command; it requires all of them to match for the test to pass, and if one of them fails it will go critical. Perfect!

If you have any more insight into using the check_http command in Nagios, I want to hear about it. We are always running into new failure scenarios that we didn't anticipate, and I want to know about them before one of my users tells me.

Friday, September 11, 2009

Bash: Finding files between two dates in the current directory

Today my boss asked me for a bash command (or script) to find some files between two dates.
Thanks to Jadu Saikia over at Unstableme and his post UNIX BASH scripting: Find Files between two dates, I had a starting point.

This will find all files between the two dates (20071019 and 20071121, in this case).
find . -type f -exec ls -l --time-style=full-iso {} \; | awk '{print $6,$NF}' | awk '{gsub(/-/,"",$1);print}' | awk '$1>= 20071019 && $1<= 20071121 {print $2}'

Now, if you want just PGP files you would do:
find *.pgp -type f -exec ls -l --time-style=full-iso {} \; | awk '{print $6,$NF}' | awk '{gsub(/-/,"",$1);print}' | awk '$1>= 20071019 && $1<= 20071121 {print $2}'

The second thing my boss was looking for was the file size, something that was being left out by awk. We can fix that by updating the command to:
find *.pgp -type f -exec ls -lh --time-style=full-iso {} \; | awk '{print $6,$NF,$5}' | awk '{gsub(/-/,"",$1);print}' | awk '$1>= 20090624 && $1<= 20090901 {print $2,$3}'

We added $5 to the first awk command, and $3 to the final one. Also, I like human-readable file sizes, so I added -h to the ls command.
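On newer systems, GNU find (4.3.3 and later) can do the date comparison itself with -newermt, which avoids the awk pipeline entirely. A quick sketch with hypothetical files and paths; note that the CentOS 5-era find may be too old for this option:

```shell
# Set up two files with known modification times (illustrative only)
mkdir -p /tmp/datefind
touch -d 2009-07-01 /tmp/datefind/in_range.pgp
touch -d 2009-05-01 /tmp/datefind/too_old.pgp

# Select files modified between 2009-06-24 and 2009-09-01
find /tmp/datefind -type f -name '*.pgp' \
    -newermt 2009-06-24 ! -newermt 2009-09-01
```

This matches on modification time just like the ls-based pipeline, and adding -printf '%f %s\n' gets you the name and size in one step.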

Howto Make a T1 Crossover Cable

I learned this one about a month ago while turning up the T1 connection for my fax server. The tech that installed the Cisco equipment left us with a crossover cable, but a data crossover, so I cut the ends off and re-crimped the cable like so:

You only have to worry about 4 of the 8 wires if you are using typical cat-5 cable like I did.

1 <—> 4
2 <—> 5
4 <—> 1
5 <—> 2
Or, if you go by colors like I do:
Side 1 (left is cable end, clip underneath):
Pin 1: Orange White
Pin 2: Orange Solid
Pin 3: none
Pin 4: Blue Solid
Pin 5: Blue White
Pin 6: none
Pin 7: none
Pin 8: none

Side 2 (left is cable end, clip underneath):
Pin 1: Blue Solid
Pin 2: Blue White
Pin 3: none
Pin 4: Orange White
Pin 5: Orange Solid
Pin 6: none
Pin 7: none
Pin 8: none

Then you are done!

Monday, August 03, 2009

How do I disable PS2 Parental Controls?

Again this is one of those things that I had to figure out and I wanted some place to keep it just in case I needed it again. I realize that this won't come in handy for anyone actually reading this blog, but still.

While watching a DVD, hit "Select" on the controller. In the menu, find the icon called "Setup". Click it and then find "Custom Setup". Now go down to "Parental Control" and change the settings to your liking. I chose "Off" because I never want to be bothered by the parental control features.

To save your changes eject and take out the DVD, which will kick the PS2 back to the setup screen. Then turn off the power or reset it.

Friday, July 24, 2009

Reset the maintenance counter on an HP 4000 Laserjet

There are two methods for resetting the maintenance counter on an HP 4000 LaserJet printer.

The first method is the fastest, but may not work due to many different board revisions.
1. Turn the printer off.
2. Hold down the "Item" key (the minus side of the button) and the "Value" key (also the minus side).
3. Turn the printer on.
4. Wait for "RESET MAINTENANCE COUNT" to be displayed and then release both keys.

If this method fails, like it did for me, you will have to enter service mode. This mode is generally reserved for service technicians and really the only reason to go into it is to reset the maintenance counter.

To get into service mode:
1 Hold down "Select" and "Cancel Job" while turning on the printer until all of the lights on the Control Panel are lit. Note that if the Control Panel reads INITIALIZING, the keys were released too soon.
2 Press the right side of the "Menu" key, then press "Select". The message SERVICE MODE is displayed.
3. Press "Menus" once to display SERVICE MENU.
4. Once it says SERVICE MENU press ITEM to scroll through service mode items.
5. Once on the Maintenance Counter screen, press "Select" on each number in the Maintenance counter. You can change them by hitting the "Value" key to the left or right and move between the digits by using the "Item" key.
6. To exit the Service Mode press [Go].

Tuesday, July 21, 2009

Mediawiki System Requirements

I am going to be installing MediaWiki for our internal use, and I had a little trouble finding the system requirements for the software. I'm just posting them here so I can find them easily in the future.

Tuesday, July 14, 2009

Ghostscript Convert PDF to TIFF

Again this is mainly for my reference, but someone may have trouble finding the solution like I did earlier today.

On Windows be sure to add "C:\Program Files\gs\gs8.64\bin" to your PATH, then run the following command.


Similarly on Linux you can run the command below.


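For reference, a typical Ghostscript PDF-to-TIFF invocation (a sketch assuming the tiffg4 fax device at 300 dpi; the filenames are placeholders, and the exact device the author used isn't shown) looks like:

```shell
# Windows (console binary), with gs\bin on the PATH as described above
gswin32c -dNOPAUSE -dBATCH -sDEVICE=tiffg4 -r300 -sOutputFile=output.tif input.pdf

# Linux
gs -dNOPAUSE -dBATCH -sDEVICE=tiffg4 -r300 -sOutputFile=output.tif input.pdf
```

Run gs -h to list the TIFF devices your build supports; tiffg4 is CCITT Group 4 (black and white fax), while tiff24nc produces full color.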
There are also ways to script this, and when you can do that, you should. I haven't found the need yet, as I only had four files to do today, but should this come up again I will script it in some fashion and post the results here.

Thank you to StackOverflow for the solution to my problem. Also one of the other solutions to this is to use a recursive search of a directory with PowerShell to convert files.

Friday, July 10, 2009

Creating a RAID 5 Array in software on CentOS 5.3

In order to create a RAID 5 array entirely in software on Linux you need to do a few things.

First, I used three identical drives: same speed, size, make, and model. This may not be a requirement, but
it will definitely help the process. For RAID 5 you will need at least three partitions of the same size.

I picked up a four-disk internal hot-swap enclosure from Addonics and
a hot-swap capable RAID card. Once I had everything physically installed in the server case I booted into CentOS 5.3
and got on the command line. The first thing you need to do is create partitions on the blank drives and set them to be
Linux raid autodetect (hex value fd). To do this run fdisk /dev/sdX, where X is the drive you want to partition for the RAID array.
For me this was sdc, sdd, and sde, but your results will vary.

fdisk /dev/sdc
fd [ENTER]

You will want to create an extended partition, so choose n for a new partition. If you get stuck you can hit m for the help menu.
After you hit n, type e for extended. Then you will have to enter a partition number; I chose 4 for all of my drives.
After that you hit n again and choose l for a logical partition. Again it will ask for a partition number; 1-4 are reserved for primary partitions and thus not an option,
so I chose 5 for all of my drives.

Once the logical partitions are created, hit t to change the partition type. If you are unsure what to use, hit l to list the types, but in this case we already know that we want
type fd, Linux raid autodetect.

Once the type has been changed you can type w to write this info to the drive and start on the next one.
fdisk /dev/sdd
fd [ENTER]
fdisk /dev/sde
fd [ENTER]

Now that we have three (the minimum) partitions for our RAID device we have to create it in CentOS. Here we will use a tool called mdadm to create the
actual RAID device in Linux.
/sbin/mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdc5 /dev/sdd5 /dev/sde5 [ENTER]
Once this returns back, and it should be pretty quick, you will have a RAID 5 device. To check the status of it run:
/sbin/mdadm --detail /dev/md0
Version : 00.90.03
Creation Time : Wed Jul 8 09:14:19 2009
Raid Level : raid5
Array Size : 2930271744 (2794.52 GiB 3000.60 GB)
Used Dev Size : 1465135872 (1397.26 GiB 1500.30 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Fri Jul 10 11:20:02 2009
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

UUID : 53f6f95a:9e33f5ba:7ac8ef3e:0a40921a
Events : 0.2

Number Major Minor RaidDevice State
0 8 21 0 active sync /dev/sdc5
1 8 37 1 active sync /dev/sdd5
2 8 53 2 active sync /dev/sde5
Now, for me, right after I created the RAID device the state was listed as clean, degraded, rebuilding. There was a field for the percentage rebuilt, and
/dev/sde5 was listed as a spare. It took the better part of a day for the rebuild to finish; given that it is a three-terabyte device I am not surprised by that.

Once the rebuild was done I had to create a Physical volume for LVM to be able to manage the RAID device.
pvcreate /dev/md0

After that I was able to use system-config-lvm to create the storage volume and format the drive.
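If you prefer to stay on the command line rather than use system-config-lvm, the remaining steps look roughly like this (the volume group and logical volume names are hypothetical, and ext3 is assumed as the CentOS 5-era default filesystem):

```shell
vgcreate storage_vg /dev/md0                  # volume group on top of the RAID device
lvcreate -l 100%FREE -n storage_lv storage_vg # one logical volume using all the space
mkfs.ext3 /dev/storage_vg/storage_lv          # format the new logical volume
mkdir -p /Storage
mount /dev/storage_vg/storage_lv /Storage
```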

Tuesday, July 07, 2009

Copy Directory Structure Only

This is a simple one liner for copying a Directory structure, and not the contents.

find * -type d -exec mkdir /new_directory/\{\} \;

Now there are a few caveats to this, of course, but they are simple.
First, /new_directory/ has to exist.
Second, you have to run the command from within the directory whose structure you want to copy.

For example, if I need to copy the structure of /Storage to /newStorage I would:
mkdir /newStorage
cd /Storage
find * -type d -exec mkdir /newStorage/\{\} \;
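An alternative that doesn't require changing directories first is rsync's include/exclude filtering: include every directory, exclude everything else, and only the skeleton gets copied. A sketch with throwaway paths:

```shell
# Build a small demo tree (hypothetical paths)
mkdir -p /tmp/structdemo/src/a/b /tmp/structdemo/src/c
touch /tmp/structdemo/src/a/file.txt

# Copy only the directory structure, no files
rsync -a --include='*/' --exclude='*' /tmp/structdemo/src/ /tmp/structdemo/dst/

# dst now contains a/, a/b/ and c/, but not file.txt
find /tmp/structdemo/dst -type d
```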

Friday, June 26, 2009

Installing VirtualBox Guest Additions on CentOS 5.1 - 5.3

For a few weeks now I have been creating a VirtualBox VM for a demo server. The purpose is to give our sales staff a way to bring our technology to the client and show them, in a live environment, how our systems work, even if they don't have an Internet connection available.

I have been working on this system in a ridiculously small screen for a little while now and finally got sick of it. I had tried installing the Guest Additions before, but it failed for one reason or another. This time around I was determined to get everything up and running properly, so that A) I wouldn't have to work on such a small screen and B) I could move my mouse between the VM and my desktop without hitting a button.

Anyway, on to the meat of this: how to install the prerequisite libraries and the Guest Additions on CentOS. I just did this on a CentOS 5.3 install, but it has reportedly worked as far back as 5.1.

First you need to install the kernel headers and gcc, if you don't already have them.

yum install -y gcc
yum install -y kernel-devel

Then you need to create a symbolic link to the kernel source:
ln -s /usr/src/kernels/2.6.18-92.1.18.el5-i686 /usr/src/linux

After this it is best to reboot the machine:
shutdown -r now

Once the machine has come back online you can mount the Guest additions ISO and install them via one of two commands depending on your architecture:



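The exact installer names changed between VirtualBox releases, so treat the following as a sketch; in the 2.x era the Guest Additions ISO shipped separate .run installers per architecture:

```shell
# Mount the Guest Additions ISO (Devices -> Install Guest Additions in the VM window)
mkdir -p /media/cdrom
mount /dev/cdrom /media/cdrom

# 32-bit guest:
sh /media/cdrom/VBoxLinuxAdditions-x86.run
# 64-bit guest:
sh /media/cdrom/VBoxLinuxAdditions-amd64.run
```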
After either of these commands are run you will have to once again reboot the Virtual Machine:
shutdown -r now

After the reboot you should be able to move the mouse between the VM and your host OS without unlocking it, along with the other nifty features the Guest Additions add.

Wednesday, May 27, 2009

Failed to modify password entry for user while adding user with smbpasswd

I was trying to create a user today in samba and was getting the error "Failed to modify password entry for user [USER]".

I read over the man pages for smbpasswd and saw that I needed to add the -n switch because this user will not have a password. So again I tried to add the user with "smbpasswd -a -n [USER]" and got the same error.

A quick Google search led me to a newbie mistake on my part... I didn't have the user I was trying to add in my UNIX password file. A quick "useradd [USER]" followed by the same "smbpasswd -a -n [USER]" and I was all set.

Granted there are other reasons why this process might fail, but for me this was the reason.

Friday, May 22, 2009

A simple file shredder for Windows

Working with sensitive data all day long, you come to realize that what you download needs to be deleted securely, just as paper copies need to be destroyed securely.

I happened upon this script on Lifehacker a while ago, but I have been using it more and more lately.

First you will need to download sdelete from Microsoft. I copy this exe to the Windows directory on each machine as part of my install process.

The script is very simple:
@echo off
FOR %%F IN (%1 %2 %3 %4 %5 %6 %7 %8 %9) DO sdelete -p 7 -s %%F

I save this as shred.cmd and place it in my C:\Scripts folder. It will take up to nine files at a time (batch parameters only go up to %9) and run sdelete with 7 passes on each file. Sdelete will also rename each file 26 times to obfuscate the file name.

About once a month I will run sdelete -p 3 -z to clean the free space on my PC and to make sure that any temp files I didn't shred are cleaned up. Now this won't obfuscate the file names at all, but the contents of the files are gone for good.

You can also place a shortcut to the shred.cmd file in your Send To menu for an easy way to clean files from any folder.

Adding programs to the "Send To" menu

From time to time I write small scripts that accept Command line arguments. In the past I kept shortcuts to them on my desktop, but today I wanted to remove all icons from my desktop.

Open up %APPDATA%\Microsoft\Windows\SendTo in Windows Explorer.

%APPDATA% is an environment variable that usually maps to something like C:\Documents and Settings\[YOUR USER PROFILE]\Application Data\ in Windows 2000/XP and C:\Users\[YOUR USER PROFILE]\AppData\Roaming\ in Windows Vista.

Let's say you wanted to add an item to the Send To menu to shred files with sdelete. You could just drag a shortcut to the shred script into this folder, or create a new shortcut there.

This method should work for any application that allows you to open a file by using a command line argument.

Wednesday, May 06, 2009

mail command returns fseek Invalid argument, panic temporary file seek

Today I logged into one of our older servers (I hadn't logged into it for a while) and saw that there was new mail, as always given the number of cron jobs running.
Below is the output of my command.

[root@server /]# mail 
"/var/spool/mail/root": 1832 messages 1777 new 1832 unread 
fseek: Invalid argument 
panic: temporary file seek

After a little bit of searching online I found two possible and simple solutions.
If you want to read the mail, try using mutt instead of mail. It doesn't have a problem with the 2GB mailbox file that mail chokes on.

If you don't care about the old stuff you can run:
rm -f /var/spool/mail/root
to remove the file and then
cat /dev/null > /var/spool/mail/root
to recreate a blank file.  

I ended up reading the mail I wanted and then blowing the file away.  But before I could recreate it there was already a new file with 62MB of mail in it.  

Thursday, April 23, 2009

Excluding Directories from updatedb on CentOS 5

Running the updatedb command will update the slocate database.  However, if you want to exclude certain directories for any reason, such as not wanting to index a huge NFS file store or something to that effect, you have two options.

1) Use the -e switch with a comma-separated list of directories not to index. (updatedb -e /Storage,/home)

2) Edit the /etc/updatedb.conf file.
  •  vi /etc/updatedb.conf
  • Find the PRUNEPATHS section and add the directories to the list, separated by spaces. (PRUNEPATHS = "/afs /media /net /sfs /tmp /udev /var/spool/cups /var/spool/squid /var/tmp /Storage /home")
Also you can read up on updatedb by reading the man pages.
man updatedb

Friday, March 20, 2009

error: can't create transaction lock on /var/lib/rpm/__db.000

I got this error message today while upgrading Webmin.  The fix is pretty simple:

rm -f /var/lib/rpm/__db.0*
rpm --rebuilddb

The first command clears out any of the files that will lock an RPM from running.  The second command rebuilds the RPM database.

Then I was able to install Webmin 1.470 from the RPM that I had downloaded with:
rpm -Uhv /Storage/rpmDownloads/webmin-1.470-1.noarch.rpm

Tuesday, February 17, 2009

memcached init.d startup scripts for CentOS 5.2.

While working to set up memcached on my CentOS servers I came across these scripts.  They are the typical start, restart, shutdown, and status scripts I am sure you are used to using.

Both sites have the same script contents but I feel that has the more complete instructions.

Please check your autoconf installation and the $PHP_AUTOCONF environment variable is set correctly and then rerun this script.

This is a very simple fix if you are running Redhat Enterprise / CentOS 5.2.  

yum install autoconf

I came across this while setting up memcache / memcached.  I had tried running phpize from the memcache-2.2.4 directory but was getting the error "Please check your autoconf installation and the $PHP_AUTOCONF environment variable is set correctly and then rerun this script."

Once I had autoconf installed I was able to finish the install process:
make install
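For completeness, the usual build sequence for a PECL-style extension such as memcache (source directory name assumed) is:

```shell
cd memcache-2.2.4
phpize          # the step that needs autoconf installed
./configure
make
make install    # copies into PHP's extension directory
```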

I know there are easier ways to do all of this on CentOS, but we are using a newer version of PHP (5.2.6) than the one that came with CentOS (5.1.6), so using yum to install this would not have worked.

Tuesday, February 10, 2009

How to: mod_expires & mod_deflate in Apache running on CentOS 5

While trying to improve the performance of our internal workflow I was tasked with setting up mod_deflate and mod_expires, based off of Yahoo!'s Best Practices for Speeding Up Your Website.

The nitty gritty of these two is to make sure they are both listed in your httpd.conf file:
LoadModule expires_module modules/
LoadModule deflate_module modules/

Since this server is running the default CentOS 5 apache I already had the support I needed.  I then continued on to find the documentation online. Both modules support vhost context so that is the route I went with. I added the lines below to each vhost that I wanted the settings to take place for.

ExpiresActive On
ExpiresByType image/gif A2592000
ExpiresByType image/jpeg A2592000
ExpiresByType image/png A2592000
ExpiresByType text/css A2592000
ExpiresByType application/x-javascript A2592000
ExpiresByType application/pdf A2592000

The syntax for the ExpiresByType directive is:
ExpiresByType mime/type TimeInSeconds. 

A2592000 means access time plus one month.  If I wanted the file to be cached for one month from its modification date I would have used M2592000 instead.

Other handy numbers in seconds:
86400 = One Day
604800 = One Week
2592000 = One Month
31536000 = One Year (365 Days)
157680000 = Five Years
3153600000 = Ten Years

Why might these large numbers come in handy?  You want to set far-future expiration dates for things that don't change often.  The catch comes when you need to update one of those items: you have to change the file name.  Modifying the case of the filename will work in the short term, but really you want to start adding version strings to all of your static files like CSS and JavaScript.

mod_deflate is much easier to set up.  Simply add the line:
AddOutputFilterByType DEFLATE text/html text/plain text/xml application/x-javascript text/css

Again, based on the mod_deflate documentation, you can add this directive to the vhost directives.  Please note that this directive can be placed in just about any context and as such is very powerful.  Also note that you should not try to compress image files, because they are already compressed and doing so is just a waste of CPU and time.
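Putting the two modules together, a complete vhost fragment might look like this (hostname and paths are hypothetical; the types mirror the lists above):

```apache
<VirtualHost *:80>
    DocumentRoot /var/www/example

    # mod_expires: cache static assets for one month from access time
    ExpiresActive On
    ExpiresByType image/gif A2592000
    ExpiresByType text/css A2592000
    ExpiresByType application/x-javascript A2592000

    # mod_deflate: compress text responses; leave images alone
    AddOutputFilterByType DEFLATE text/html text/plain text/xml application/x-javascript text/css
</VirtualHost>
```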

Using these two methods we were able to reduce our website on a first visit from 190.7K total to 157.3K total.
9.4K HTML/Text
32.1K Javascript
12.9K Stylesheets
0.4K CSS Images
135.9K Images
190.7K Total

3.1K HTML/Text
15.0K Javascript
2.9K Stylesheets
0.4K CSS Images
135.9K Images
157.3K Total

I also used the yuicompressor to minify our main Javascript and our CSS file.
3.1K HTML/Text
12.3K Javascript
2.2K Stylesheets
0.4K CSS Images
135.9K Images
154.0K Total

As you can see, using mod_deflate to compress the Javascript and CSS gives a more substantial bandwidth savings than minifying alone.

Granted, based on my observations, if we really wanted to save bandwidth and make our pages load faster we would reduce the substantial image footprint.

In case anyone is wondering where I culled all of these numbers from, I used YSlow for Firebug in Firefox.

Tuesday, January 27, 2009

Webmin 1.450 is out!

Just a quick update that Webmin 1.450 is out.  If you are running Red Hat or a derivative, grab the RPM.  If you already have Webmin installed, run the command rpm -Uhv webmin-1.450-1.noarch.rpm.

Friday, January 23, 2009

Things learned over time, Part 1

This is going to be an ongoing series of quick posts: those little tidbits of information that, once you know them, you never forget, but that until then make things harder than they need to be.

I typically change directories to where I want files to end up if say I am using wget or xcopy.  I used to do xcopy z:\some\directory\* c:\Users\Steve\Desktop\Directory 

Now I have either realized, or learned from someone, that you can do

xcopy z:\some\directory\* .

Saves a lot of typing and reduces mistakes.  Again, you have to be in the target directory for it to work.
Alternatively, a command prompt opens in your home directory by default, so if I wanted the same outcome without changing directories I would do
xcopy z:\some\directory Desktop\Directory

Of course this works on Linux as well as Windows but the paths and slashes will be different.

Thursday, January 22, 2009

301 Permanent Redirects in Apache on CentOS

Hello, and welcome back to 301 redirects and you.  You might remember that I did a post on 301 redirects in IIS a few years ago, way back in 2007 actually.

Today we are concerned with the Apache web server and doing SEO / SSL friendly 301 redirects from the bare domain to the www host.

If you are just running and not using any named virtual hosts you can do:
RewriteEngine on
RewriteRule ^/(.*)$1 [L,R=301]

If you are running named virtual hosts you can then add the directives below to your virtual hosts.

<VirtualHost *:80>
       Redirect 301 /
</VirtualHost>

Thursday, January 15, 2009

How to: Use chkconfig, or keeping your services running on a new run level

OK, so I didn't take my own advice and make sure that all of my services were set to run on the correct run levels before I switched run levels and rebooted the servers.

chkconfig is a command-line tool for updating the /etc/rc[0-6].d directories.

So I ran chkconfig --list | more to see what was running and in which run levels. Next I ran chkconfig --levels 235 [service] on to tell a service to start when the system enters those run levels. For example:

chkconfig --levels 235 named on

Now, since I had already rebooted the server and I knew this service didn't start on its own, I had to:

service named start

If I had been smarter and done all of this beforehand, but wanted to see if named was running or not, I would have run:

service named status

I understand that all of this is basic, but some people learn from others' mistakes, so hopefully I can save one person from doing the same thing.

Other useful chkconfig switches:
chkconfig --help (Used to display the help dialog)
chkconfig --add [Service Name] (chkconfig --add mysqld) (Adds a service to the chkconfig list)
chkconfig --del [Service Name] (chkconfig --del mysqld) (Deletes a service from the chkconfig list)
chkconfig --level [levels] [Service Name] [on|off|reset] (chkconfig --level 235 httpd on) (Sets the run levels a service should start in.)

chkconfig can also manage xinetd scripts via /etc/xinetd.d.

Oh, and while you are at it, run chkconfig --list | more to review what services are running on your server; you might be surprised. For instance, I had Bluetooth support running, but not one computer in my company has Bluetooth, so I disabled it (chkconfig --levels 2345 bluetooth off) and stopped it (service bluetooth stop).

Switching Run Levels in CentOS

I am assuming that anyone reading this is at least familiar with run levels and Linux, but please make sure you understand what they are and how they will affect your system(s).


0 - halt (Do NOT set initdefault to this)

1 - Single user mode

2 - Multiuser, without NFS (The same as 3, if you do not have networking)

3 - Full multiuser mode

4 - unused or Admin

5 - X11

6 - reboot (Do NOT set initdefault to this)

The most commonly used runlevels in CentOS are 0, 1, 3, 5 and 6. Most systems will boot into runlevel 5, with a GUI login and either Gnome or KDE as a window manager running on top of X. This is exactly what someone using the computer as a desktop will want, but for a server you will want to boot into runlevel 3 (full multiuser mode). From there you may choose to run "startx" manually once logged in. (A common setup for me: I like the ability to have a GUI, but I don't want it running all of the time.) Runlevel 1 (single user mode) has also been very handy, for instance if you have forgotten your root password or are having trouble booting for any number of reasons.
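To see where you are before making changes, the runlevel command (on sysvinit systems like CentOS 5) prints the previous and current runlevel; switching immediately is done with init:

```shell
runlevel || true      # prints "previous current", e.g. "N 5"; N means no previous level
# To switch to full multiuser mode right now, as root (the boot default stays in /etc/inittab):
#   init 3
```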

To change the runlevel of the server upon boot up edit the /etc/inittab

sudo vi /etc/inittab

Around line 18 you will see a line like the one below:

id:5:initdefault:

Simply change the "5" to the runlevel you desire (in my case, 3), then save the file and exit.
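The same edit can also be scripted with sed. This is a minimal sketch run against a throwaway file in /tmp so nothing touches the real /etc/inittab; the target runlevel 3 and the scratch path are just examples.

```shell
#!/bin/sh
# Create a scratch stand-in for /etc/inittab with a sample
# initdefault entry, so the sketch is safe to run anywhere.
echo 'id:5:initdefault:' > /tmp/inittab.test

# Rewrite the initdefault entry to runlevel 3, whatever it was before.
sed -i 's/^id:[0-9]:initdefault:/id:3:initdefault:/' /tmp/inittab.test

grep ':initdefault:' /tmp/inittab.test
```

Pointing the same sed expression at /etc/inittab (as root, with a backup) makes the change on a real system.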

Wednesday, January 14, 2009

Spiceworks releases 3.5!

Just found out that Spiceworks has been updated to 3.5. I never got the chance to beta test 3.5 like I wanted, but now I don't have to worry about it. Looking forward to the Network Bandwidth Analyzer and the Nagios integration.

I plan on doing a backup of my database tonight so I can do the update after hours. Wish me luck.

Friday, January 09, 2009

Configuring PuTTY to use PKI (passwordless) authentication

Since this is about using PuTTY, I would recommend you download it if you haven't already. You will also need PuTTYgen, and it might be worth grabbing pscp as well. I like to keep putty.exe and puttygen.exe in my Windows directory so I don't have to update my path, thus allowing me to run putty right from the command prompt.

Now run PuTTYGen and create a new pair of keys by clicking the “Generate” button. You will have to move your mouse around in the box to generate randomness, so keep doing that until the progress bar fills up. You can keep all the options at their default settings. Then, save both public and private key to a safe location. Name your public key [your_key_name].pub and the private key [your_key_name].ppk.

Now, upload your public key to a directory on your remote system. I used pscp to do this quickly to all of my servers (pscp [] user@remotesystem:)
Now you have to import your public key into the authorized_keys file (and authorized_keys2)

ssh-keygen -i -f [] >> .ssh/authorized_keys && ssh-keygen -i -f [] >> .ssh/authorized_keys2
Replace [] with the path to your key. Now log out and start PuTTY.
In Putty, you have to configure the following items:
In Connection/Data, add your remote user name
In Connection/SSH/Auth, browse to your private key file (.ppk)
In Sessions, fill in the FQDN or IP address of your remote machine, give your session a name [session_name] and click on Save.

Now you can use putty to SSH into your remote boxes without a password. If you are a fan of having one or two click shortcuts, create a shortcut to %windir%\putty.exe -load [session_name]. If you gave your [session_name] a name with spaces, use double quotes to enclose it, like putty.exe -load "session name".

Configuring CentOS 5.2 to accept passwordless authentication via PKI

I am sure most people reading this blog know how to set up passwordless authentication using Public Key Infrastructure, but there were a few minutiae that I was missing.

The .ssh directory needs to be readable/writeable/executable by the owner only (chmod 700 .ssh).
authorized_keys and authorized_keys2 need to be readable/writeable by the owner only (chmod 600 authorized_keys*).
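The steps above can be sketched as a small script. Here it runs against a scratch directory in /tmp standing in for a real home directory, so it is safe to try; the path is made up for the demo.

```shell
#!/bin/sh
# Demonstrate the SSH permissions in a scratch directory that
# stands in for a real home directory.
demo_home=/tmp/ssh-demo-home

mkdir -p "$demo_home/.ssh"
touch "$demo_home/.ssh/authorized_keys" "$demo_home/.ssh/authorized_keys2"

# sshd ignores the key files if they are readable by group or other.
chmod 700 "$demo_home/.ssh"
chmod 600 "$demo_home/.ssh"/authorized_keys*

ls -ld "$demo_home/.ssh"
```

Run the same mkdir/chmod sequence against $HOME on the remote box and PuTTY's key authentication should stop being silently rejected.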

On CentOS 5.2, once I also dropped a config file into the .ssh directory, I was able to connect with PuTTY and not use a password.

nagstamon: A Nagios system tray monitor

First, a little bit about my setup. While it's true that I love Nagios, I don't always want to wait for e-mails to arrive before I know about a problem. I run a dual PC setup with Synergy2 driving both machines from one keyboard and mouse. I keep the second monitor on my helpdesk tickets and the Nagios service detail page. I also use the Nagios Checker extension for Firefox on my second screen, but since I run Chrome as my main browser I cannot use that option on my main screen.

Today I found nagstamon and instantly fell in love with it! Configuration is easy. Below you can see what settings I am using for optimal performance in my opinion. Of course nagios-server is my actual FQDN for my nagios server and I don't use nagiosadmin to login to my nagios server.

If you notice on the last tab "Executables" I keep putty in the Windows directory so I don't have to update my path. This allows me to run putty right from the command line anywhere I am once I am at a command prompt.

Also, if you choose not to put NagStaMon in the system tray, it will float in its own tiny window like you see below. Look near the upper left corner by the Firefox icon.

Monday, January 05, 2009

How-To: Install, Configure, and Import your first SVN (Subversion) repo on CentOS 5

This might only be useful to me, I don't know, but at least I will know it's out there, and when I have to do this again I will know what to do.

Use wget to pull down the latest source.
tar xzvf subversion.
cd subversion.
./configure
make && make install
svnadmin create --fs-type fsfs /path/to/repo
touch /path/to/repo/.htpasswd
htpasswd /path/to/repo/.htpasswd username
svn import -m "First Import" --username=username /source/path/ file:///path/to/repo
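Pulled together, the repository steps look like the sketch below. The paths are placeholders of my choosing, and the Subversion commands are guarded so the script skips quietly on a box where the tools are not installed.

```shell
#!/bin/sh
set -e
# Placeholder locations, not paths from any real server.
REPO=/tmp/demo-repo
SRC=/tmp/demo-src

# Only run the Subversion steps when the tools exist on this box.
if command -v svnadmin >/dev/null 2>&1; then
    rm -rf "$REPO"
    svnadmin create --fs-type fsfs "$REPO"
    mkdir -p "$SRC"
    svn import -m "First Import" "$SRC" "file://$REPO" >/dev/null
    # htpasswd would then add users to this file for Apache auth.
    touch "$REPO/.htpasswd"
fi
```

The build steps (configure/make/make install) are left out here since they depend on which source tarball you pulled down.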
Set up the Apache config file. Since I use named virtual hosts and typically have multiple repos, here is a sample.

<VirtualHost *:80>
        DocumentRoot "/var/www/html"
        ServerPath /html/
        DirectoryIndex index.php index.htm
        <Location /svn/repo1>
                DAV svn
                SVNPath /path/to/repo/repo1
                AuthType Basic
                AuthName "repo1 Repository"
                AuthzSVNAccessFile /path/to/svn-acl-conf
                AuthUserFile /path/to/repo/repo1/.htpasswd
                Require valid-user
        </Location>
        <Location /svn/repo2>
                DAV svn
                SVNPath /path/to/repo/repo2
                AuthType Basic
                AuthName "repo2 Repository"
                AuthzSVNAccessFile /path/to/svn-acl-conf
                AuthUserFile /path/to/repo/repo2/.htpasswd
                Require valid-user
        </Location>
        <Location /svn/repo3>
                DAV svn
                SVNPath /path/to/repo/repo3
                AuthType Basic
                AuthName "repo3 Repository"
                AuthzSVNAccessFile /path/to/svn-acl-conf
                AuthUserFile /path/to/repo/repo3/.htpasswd
                Require valid-user
        </Location>
</VirtualHost>

The /path/to/svn-acl-conf file contains one INI-style section per repository, each listing users and their permissions:

[repo1:/]
username = rw

[repo2:/]
username = rw

[repo3:/]
username = rw

This file needs to be updated every time a new repository is added, with the appropriate username and permissions.
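Since Subversion's AuthzSVNAccessFile is INI-style with one section per repository path, granting a user access to a new repository just means appending a section. A sketch against a throwaway copy of the file ("repo4" and "newuser" are made-up names):

```shell
#!/bin/sh
# Append an access section for a hypothetical new repository to a
# scratch copy of the ACL file.
acl=/tmp/svn-acl-conf.test

printf '\n[repo4:/]\nnewuser = rw\n' >> "$acl"

tail -n 2 "$acl"
```

Apache picks up changes to this file on the next request, so no restart is needed after the edit.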

I am sure I have missed some steps with this, but it's a decent starting point.