Thursday, December 30, 2010

ImageMagick convert -geometry weirdness

I don't know if this is really documented anywhere, but when using the ImageMagick convert program with the -geometry or -resize flags it also converts the image to 16-bit color, at least when used against a 1-bit (black and white) image.

The reason this is important to note is that -compress Group4 will not work on these images, because Group4 requires 1-bit images and they no longer are. To work around this limitation you need to force the color depth back down with -monochrome. The downside to all of this is time.

Ripping an image file from a PDF takes roughly .206 seconds for an 8.5x11 PDF with pdfimages (useful if a PDF has OCR embedded in it).
Converting the resultant PBM file without -geometry 1700x2200! -monochrome takes roughly .194 seconds.
Converting the resultant PBM file with -geometry 1700x2200! -monochrome takes roughly 4.594 seconds. That is more than 20 times slower just from adding the geometry and monochrome flags.

*All of the above numbers are from one file, but testing with different files showed similar results. All tests were done at 200 DPI.
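If you want to reproduce the timings, wrapping each command in the shell's time builtin is enough. A rough sketch (scan.pdf is just a placeholder for the PDF being ripped):

time pdfimages scan.pdf ${outputTiffRoot}
time convert ${outputTiffRoot}-000.p*m -density 200 -compress Group4 -geometry 1700x2200! -monochrome ${outputTiffRoot}_200.tiff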

Here are some examples of the commands we are running for anyone curious.
convert ${outputTiffRoot}-000.p*m -density 200 -compress Group4 ${outputTiffRoot}_200.tiff

convert ${outputTiffRoot}-000.p*m -density 300 -compress Group4 ${outputTiffRoot}_300.tiff

convert ${outputTiffRoot}-000.p*m -density 200 -compress Group4 -geometry 1700x2200! -monochrome ${outputTiffRoot}_200.tiff

convert ${outputTiffRoot}-000.p*m -density 300 -compress Group4 -geometry 2550x3300! -monochrome ${outputTiffRoot}_300.tiff

Thursday, December 16, 2010

Gawker Media Account Database hack

I'm sure you have heard about the release of about 1.5 million username / password combinations (encrypted with DES) from the Gawker Media sites. I have gotten e-mails from a few web companies saying that I should reset my password. These companies include LinkedIn and Blizzard (for my World of Warcraft account).

Since my roommate didn't get an e-mail from Blizzard (as he doesn't have an account on any Gawker Media website) and I did, I can only assume that Blizzard downloaded the hacked account database and compared it to their own account database. Any matches were to get this e-mail.

I honestly hope this is what happened and that Blizzard and LinkedIn didn't just randomly send out password reset e-mails. In this case the most responsible thing to do is download the file and cross reference it with your own data.

If you are wondering whether you were affected by this breach, visit http://www.didigetgawkered.com/.

Yesterday I read both an Analysis of the hack and an Analysis of the Analysis. I am fairly certain that my password was unique in the database, but I do know that I used to use that password all over the place. A few months ago I started changing passwords and using KeePass to store them securely. I disagree with Wikidsystem's Analysis of the Analysis.

Yes, I was just as owned as the person using something insecure such as letmein or password. But the "owning" wasn't because of a weak password. Yes, I do have to copy and paste my passwords, but that doesn't make me a loser. I treat all of my online identities the same, as they are a representation of ME. The real losers are the ones that reuse the same username / password combination on multiple sites. If you don't care that some sites might get hacked and expose that same username / password, then fine, but I do. I want any potential fallout to be minimal.

Also, for things I truly care about, if a two-factor authentication mechanism is available I use it. The other thing more developers need to account for is LONGER passwords. My pseudo-random password generator generates long passwords, sometimes too long for an account. Please make the password field huge and don't store it in plain text. I hate having to cut down a password from 30+ characters to 8 because that is the longest your application will allow.

You see, I don't care if my password is 30+ characters, because I don't need to remember it! That's what I have KeePass for!

Thursday, October 14, 2010

Enabling Promiscuous Mode on vSphere 4

We are in desperate need of a testing server. The testing server needs to be able to listen to the network, through port mirroring, to record VoIP calls. In the physical world this is no problem, but we don't have the budget for another physical box, especially for something as non-critical as testing a program upgrade (even though the program itself is very critical to our business).

A year ago the software vendor said that they don't support virtual environments and that their product doesn't work in a virtual environment. During discussions this year, however, they admitted the truth: they simply don't know if it will work, and thus "don't support" it. I am calling it lazy. It doesn't take much these days to get a VM host going and configured for testing things out.

Well we have done the leg work for them. I have confirmed that vSphere does allow for promiscuous mode, even from the physical network. To be honest I was a little shocked to see this worked.

Here are the steps to enable promiscuous mode on your vSphere host and the guest VM.
  1. Enable promiscuous mode on the virtual adapter.
  2. Enable promiscuous mode on the vSwitch.
  3. Enable promiscuous mode for the guest.
  4. Enable port mirroring on the physical switch (not covered here).
  5. Test capturing network data.
Now that we have the general synopsis of the procedure we can begin.

We enable promiscuous mode on the virtual adapter by logging into the vSphere Client, going to the VM Host, clicking on the "Configuration" tab, then on "Networking" in the "Hardware" section on the left. Now click on "Properties..." for the Virtual Switch.


Now enabling promiscuous mode on the vSwitch is pretty simple. Click on the "vSwitch" on the "Ports" tab, as shown below and then click on the "Edit..." button.



Now that we are editing the vSwitch properties click on the "Security" tab. Change the option for Promiscuous mode to "Accept" if it isn't already and hit "OK".


To enable promiscuous mode for the guest we need to drop down to the command line. I used PuTTY to SSH into my vSphere host, which I had previously set up. Now you need to edit the .vmx file of the guest that will be listening to the network.

# vi /vmfs/volumes/datastore1/testServer/testServer.vmx

I did a search for ethernet, so the promiscuous mode configuration would be with the rest of the ethernet config. Add the following line to the configuration file:

ethernet0.noPromisc = "FALSE"



And save when you are done. I rebooted my testServer just as a precaution, but I'm not certain it is required.

At this point everything is configured on the VM side of things. Make sure you have port mirroring enabled on your physical switch and give it a test. In my environment I commonly use Wireshark. I did my testing by pinging a server on a mirrored port.
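If the listening guest happens to be Linux, a quick sanity check before firing up a full capture is tcpdump on the guest (assuming its interface is eth0):

tcpdump -n -i eth0 icmp

You should see the echo requests and replies from the mirrored port scroll by if promiscuous mode is working.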


Using PSExec to Defragment your PCs

Way back in 2000 / 2001 I was an IT intern. My job was to do the really manual processes for a small department within a much larger company. One of those was monthly defrags of all PCs and laptops (if they were available). I had to go to each and every PC, log on, and start the defrag process manually. I believe this to be my first moment of "I should automate this". The problem was I didn't know how, or really where to start. I knew scripting was the answer, but just couldn't get things to work. I believe I eventually created a scheduled task on each machine (by hand) to do this for us.

Fast forward 10 years and I am still in need of the same thing, as we don't have Vista or Windows 7 deployed. But now I have a much better understanding of what needs to happen and even better I know where to start!

PSExec, which is part of the Sysinternals PsTools suite, is my answer these days. A simple script, run from my PC (still manually for the time being), is able to handle defragmenting all of our PCs.

psexec @C:\Updates\Comps\AppPCs.txt -n 10 -c -f -d JkDefragCMD.exe

I use the files with PC names in them to speed up re-deployment of everything. The -n 10 flag tells psexec to wait 10 seconds before it times out on a PC, instead of 60 (I believe that is the default). -c copies the file (JkDefragCMD.exe) to the remote system. The -f flag forces the copy, even if the file already exists. The -d flag doesn't wait for the process to terminate, which is as asynchronous as I can get.

My next step is to hook this up to Task Scheduler and have it run the first Sunday of the month or something.
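When I do, the scheduled task should look something like this (assuming the psexec line above is saved into a batch file such as C:\Updates\MonthlyDefrag.cmd; the task name and time are just placeholders):

schtasks /create /tn "Monthly Defrag" /tr "C:\Updates\MonthlyDefrag.cmd" /sc monthly /mo first /d SUN /st 02:00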

Monday, October 04, 2010

Documentation, not always the How To Dos

Documentation for me is often just How to do something, but I have been forgetting the WHY part of the equation. For instance, my documentation says to do our weekly maintenance window after 21:30 and to only reboot one particular server after 21:45, but I didn't say why and had forgotten myself. So I started doing the maintenance earlier and at one point rebooted the one particular server at 21:30. This caused the last of our production cron jobs to not run, and thus a customer didn't get their batch for the day.

Now this could have been avoided a few different ways. Since that first time was a mistake, anything that happens after it is a failure on my part, and thus it can't happen again.
  1. I could have followed our procedure to the T.
  2. Read my e-mail to see that the cron job hadn't run yet.
  3. Run the cron job by hand after the server restart.
  4. Done all of the precursor work and waited for the cron job to run.
  5. Done all of the precursor work and run the cron job by hand.
I have chosen to go with option 1, follow our procedure to the T. We started doing the weekly maintenance this way for a few reasons: it was to be done after 21:30, with that one particular server rebooted last, to ensure this didn't happen. Now once again I have my WHY, and I have it written down, so if I question it again I know why.


This is just one instance where the WHY is critical, but there are others. So please, when you are writing documentation (and you should be), include WHY you do something the way you do it. It also helps train the new guy, or your replacement.

Enabling the Administrator user in Windows 7

This is a very simple fix. The reasons to actually log on as the administrator are shrinking, but for my environment I need to log on as the Administrator user occasionally (for software patching).

  1. Open up a command prompt as Administrator (not the same thing we are trying to do)
  2. Run "net user /active:yes administrator"
  3. Now it is a great idea to set a password for the Administrator user so we can do that right now
  4. Run "net user administrator password" but replace password with something more complex similar to "Som3C0mpl3xP@$$w0rd"
  5. Exit out of the command prompt.
For my situation I need to run commands as the Administrator user on other machines and need the Administrator shell. I did a lot of testing to make sure that "Run as Administrator" doesn't work for what I need.
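As a sketch of what that looks like with PSExec (remotepc is a placeholder, and the credentials are the ones set above), this hands you a remote shell as the Administrator user:

psexec \\remotepc -u Administrator -p Som3C0mpl3xP@$$w0rd cmd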

Enabling Ping responses in Windows 7

Out of the box, if you ping a Windows 7 host you will receive "Request Timed Out". This is because the Windows 7 firewall is blocking ICMP echo requests. If this is causing you problems and you need to allow ping requests, it is fairly simple to fix.

  1. Go to the "Windows Firewall" in the Control Panel.
  2. On the left hand side click "Advanced Settings"
  3. Click on "Inbound Rules"
  4. Right click on "Inbound Rules" and choose "New Rule"
  5. Select "Custom (Custom Rule)" and press "Next"
  6. Select the (default) "All Programs" and press "Next"
  7. Change the Protocol type from "Any" to "ICMPv4"
  8. Unless you want to restrict the ping response choose the option "Any IP address". (This is for your adapter.)
  9. Unless you want to restrict which hosts can ping you choose "Any IP address".
  10. On the Action screen ensure it is an "Allowed" connection.
  11. Leave all three check boxes check on the Profile screen.
  12. On the final screen, Name, give it a meaningful name such as "Echo Ping Request"
  13. Finally click finish.
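If you would rather script this than click through the wizard, the same rule can be created from an elevated command prompt with something along these lines:

netsh advfirewall firewall add rule name="Echo Ping Request" protocol=icmpv4:8,any dir=in action=allow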

Monday, September 13, 2010

How to Reset the Password(s) on a Linksys SRW248G4 Switch

  1. Connect to the serial port on the back of the switch with a serial cable. (PuTTY on Windows works well.)
  2. The defaults for connecting to the serial port are 38400 baud, 8 data bits, no parity, 1 stop bit, no flow control.
  3. Confirm that you have an active serial connection by pressing Enter a couple of times. You should receive a login screen.
  4. Once you have successfully connected to the serial port restart the switch by unplugging the power. (Either from back or from the power strip, my preferred method is the back)
  5. The switch will start its POST process.
  6. Look for the line “Autoboot in 2 seconds - press RETURN or Esc. to abort and enter prom.” (Please note: Do not hold down Esc. or Enter. Only press it once.)
  7. You will know that you interrupted the boot sequence when you are prompted with a startup menu. Select option 3 “Password Recovery Procedure”.
  8. The screen will display “The current password will be ignored!”. Press Enter to reboot the switch.
  9. Once the switch has restarted login with the default admin user and no password. This will work on either the terminal or the web interface. Proceed to create / modify users and passwords. This does not reset the rest of the configuration, just user accounts.
  10. Once you have your new credentials reboot the switch again. Document the credentials.
  11. Verify that your new credentials work by logging into either the web interface or the command line interface.
  12. Disconnect the cable and store it safely.
This is the second time I have required these instructions, so I am posting them here for safe keeping. The last time I did this I removed the default account and setup individual accounts, and then promptly forgot everything about it. This is why documentation is so critical.

Thursday, August 19, 2010

Rebuilding the Icon Cache in Windows 7 (and Vista)

If you ever need to rebuild the Icon Cache and dearly miss TweakUI for this duty there are three simple command lines to run and a process to kill and then start.

1. Open Task Manager
2. Kill all explorer.exe processes (but leave Task Manager Open)
3. Open a command prompt
4. type "CD /d %userprofile%\AppData\Local"
5. type "DEL IconCache.db /a"
6. type "Exit"
7. Go to Applications Tab in Task Manager and click the "New Task..." button.
8. type "explorer.exe"

How to Enable Windows 7 GodMode

God mode in Windows 7 is similar to TweakUI in Windows XP, but is really simple to set up and requires no installation.

1. Create a new folder.
2. Rename the folder to

GodMode.{ED7BA470-8E54-465E-825C-99712043E01C}

(note that you can change the “GodMode” text, but the following period and code number are required).

I create this folder in my home directory.
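If you prefer the keyboard, the same folder can be created from a command prompt, for example:

md "%userprofile%\GodMode.{ED7BA470-8E54-465E-825C-99712043E01C}"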

Wednesday, August 11, 2010

Fixing Adobe Error 148:3 Licensing for this product has stopped working

This is a pretty simple fix. Go over to Adobe's Licensing website and download the License Recovery tool. Run it and follow the prompts.

Before this I had wanted to Repair my installation and went to the Control Panel, but my only option was to uninstall the suite and re-install, so I knew there had to be a simpler way to do it and there was!

Monday, July 26, 2010

Installing Backup Exec System Recovery on CentOS 5.5 and First Backup / Restore

As I said in my review, I would be telling you how to install this on CentOS 5.5, so here it is.

Normally one would simply run the Symantec_Backup_Exec_System_Recovery.bin with ./Symantec_Backup_Exec_System_Recovery.bin. But on a non-supported distribution / kernel that won't work.

However all is not lost! You can install this with a little shell-fu.
Instead of ./Symantec_Backup_Exec_System_Recovery.bin run
./Symantec_Backup_Exec_System_Recovery.bin --noexec --target .

This will extract the contents of the binary to the current directory. Next you need to install a few RPM packages, but you need to know what version of kernel you are running (for me it was either PAE or non-PAE). So I ran uname -a to see what version of the kernel that machine was running.

rpm -Uhv SymSnap/symbdsnap-1.0.1-36146.2.6.18_8.el5PAE.i686.rpm
rpm -Uhv symmount-1.0.1-36352.i686.rpm
rpm -Uhv besr-1.0.1-36352.i686.rpm

Once all of these packages were installed I was able to do my first backup. And by first backup I mean the first three (/boot, /, and swap). While I probably only need two of the three (/boot and /), I don't want to have to re-create swap after a restore. So I ran all three.

besr -b /dev/sda1 -d /Storage/backups/hostname/boot.v2i -use-aes-encryption standard -p SoM3SupErSecureP@ssW0rd -compress Standard
besr -b /dev/mapper/VolGroup00-LogVol00 -d /Storage/backups/hostname/Vol00Log00.v2i -use-aes-encryption standard -p SoM3SupErSecureP@ssW0rd -compress Standard
besr -b /dev/mapper/VolGroup00-LogVol01 -d /Storage/backups/hostname/Vol00Log01.v2i -use-aes-encryption standard -p SoM3SupErSecureP@ssW0rd -compress Standard

Now in /Storage/backups/hostname I have three files
boot.v2i
Vol00Log00.v2i
Vol00Log01.v2i

If I needed to restore these I would use the Backup Exec System Recovery Windows based LiveCD. During the restore I would choose all three of these in the order shown above.

After the restore process finished I would need to boot using the GParted LiveCD. Once in, I would open up a terminal and install GRUB onto my sda hard drive.

# grub
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

Once GRUB is installed on sda I would need to update /etc/fstab so that / no longer points at /dev/mapper/VolGroup00-LogVol00 and swap no longer points at /dev/mapper/VolGroup00-LogVol01.
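Purely as an illustration (the real device names depend on how the disk was partitioned during the restore), the updated /etc/fstab entries would end up pointing at plain partitions instead of the LVM paths, something like:

/dev/sda2    /       ext3    defaults    1 1
/dev/sda3    swap    swap    defaults    0 0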

Backup Exec System Recovery Review


I asked a question over on Serverfault about backup software that supported Windows and Linux and did bare-metal recovery, and the response I got was to try Backup Exec System Recovery, which so far has been pretty good.

The bad stuff:

1) Restore CD creation on CentOS is impossible, from what I have tried and can tell. They really do mean Red Hat Enterprise Linux only.
2) Bare-metal recovery with MBR restore doesn't work, which means I have to install GRUB after a restore.
3) LVM support is either not there or well hidden.
4) Centralized management. It is a separate 1GB download as an add-on. I did the download and install, but now I can't find the management console. Setting up each of our servers isn't too big of a problem, but I really wanted one window to view all backups.
5) There is no built in scheduler with Linux.
6) There is no incremental backup in Linux.

The good stuff:
1) It works on CentOS with a little bit of hackery to install (more on that in another post).
2) It is easy to install and configure on Windows, even though it requires a reboot. (BOO!)
3) It is pretty quick to get a backup started.
4) The backup procedure is different on Windows than on Linux. Windows is 100% GUI based (I haven't looked to see if I can configure a backup from the command line), while Linux is 100% CLI.
5) The Windows based restore CD is a pretty useful tool without limits (unlike the Acronis Boot Disk).
6) Restores are quick and pretty simple to do.

All in all after trying three different products and looking at half a dozen I think this may be the winner.

CHECK_NRPE: Error - Could not complete SSL handshake

Recently I have been auditing our servers against what we check in Nagios and against what we need to do when a system is rebooted during routine maintenance. I found that two of our servers had been left out of Nagios monitoring even though they are on my maintenance checklist, and they have software that I still start by hand (I KNOW IT'S NOT A BEST PRACTICE!).

Both servers had NRPE installed and configured (mostly), one of them even had a configuration file on the nagios server but it wasn't enabled (hostname.disabled instead of hostname.cfg). The other server needed a configuration file, but even then it wasn't working, so here are my troubleshooting steps:

1) Check to see that NRPE is compiled and installed. [It was]
2) Check that NRPE was listening (netstat -an | grep 5666) [It was]
3) Check that NRPE was listed in /etc/services [It wasn't]
4) Check the NRPE config file (/etc/xinetd.d/nrpe) for "only_from = 127.0.0.1 192.168.100.31" [The nagios server IP wasn't listed]

So I added the service definition to /etc/services and the nagios server IP to the only_from line, restarted xinetd (service xinetd restart), and I was finally able to connect from my nagios server.
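For reference, the two missing pieces look roughly like this. The line added to /etc/services:

nrpe            5666/tcp                # Nagios NRPE

And the line fixed in /etc/xinetd.d/nrpe:

only_from       = 127.0.0.1 192.168.100.31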

All of this is on CentOS 5.5 for both the server and the client.

Thursday, June 10, 2010

Spiceworks Update 4.7.51900 Now Available

Spiceworks just released an update to their fantastic software, and the patch notes have something I have been looking forward to for a while, even though it is really simple: LOG ROTATION!

Spiceworks 4.7.51900 is now available. It includes the following changes:

  • Application startup performance improvement
  • If a device is HP and scannable, and product number comes back as "Not Found", the message "All your warranties will be scanned soon" is displaying - fixed
  • Power Manager and Performance Monitor plugins cannot coexist because of scheduler conflict - fixed
  • Daily rotation of log files to reduce performance impact from large log files
  • Server freezes after permission denied error in log - fixed
  • Query inventory devices to locate IPs for VLAN data
  • Undo action link keeps getting added when you close/reopen a ticket on the portal - fixed
Full Patch Notes

Tuesday, May 04, 2010

Acronis Backup and Recovery Advanced Server 10

Getting up and running with Acronis Backup and Recovery Advanced Server 10 is pretty quick, especially for the demo. It took under an hour to install, configure, and start backing up a Windows 2008 Standard server. Getting the backup agent installed on our Linux servers took a little bit more work because of the SnapAPI kernel modules. After a little bit of digging around in the Acronis knowledge base I was able to resolve all of the issues I had getting the Acronis Backup Agent installed on our CentOS 5.4 servers. If we go with Acronis Backup and Recovery Advanced Server 10 I will need to add some additional lines to our Linux post-install script to add in the kernel-devel package and the additional RPMs that the agent needs (DKMS and SnapAPI), both of which are already on our Storage drive.

With our second trial run we were able to resolve most of the outstanding issues we had from the first trial, namely excessive recovery times and recovering to dissimilar hardware. I have yet to truly attempt a Linux restore to dissimilar hardware, but I have the base system recovered and the instructions, so I will be attempting one later this week (with results to follow). Restoring our Windows machines to either dissimilar hardware or a VM is pretty straightforward with the Universal Restore CD. The main problem holding us back in those scenarios is not having the drivers readily available post-install. This can be remedied by always restoring to a VM and installing the VM tools, as they also contain the drivers for the system.

Each backup policy allows us to modify settings for the backups such as encryption, compression, and resource throttling. The encryption can have a separate encryption key and varying levels of encryption, from none to AES-128, AES-192, and finally AES-256. The automated backups can do a simple backup plan with full and incremental backups that are stored as files on the storage system. The policy also allows us to run custom commands before and after the backup, so we could do a virus scan, shred temp files, or whatever we desired.

One of the strangest problems I came across was while doing restores to a VM: if the restore was started from the VM's console, after picking the disk to recover to, the process could take hours to complete, whereas if I did the same restore onto physical hardware the process would take seconds. The workaround is to boot the VM from the Universal Restore CD and use the Management Console to connect to the VM. Once connected to the VM from the Management Console I was able to start the restore in a matter of minutes, more akin to a physical machine.

Something to remember when setting up the backup policy for Linux machines that utilize LVM is to back up the disk as a whole. You don't want to back up the LVM volume by itself, because after the restore it will fail to boot.

All in all I feel that Acronis would make a wonderful addition to our ecosystem. It fulfills all of the requirements of our backup scheme except web-based access, which is not uncommon for Windows based software.

Monday, April 19, 2010

NDO2DB daemon startup script

I would like to thank Chris over at http://sysengineers.wordpress.com for the excellent post on how to daemonize NDO2DB. His post NDO2DB startup script for RH (EL) / OEL does an excellent job getting everything working. I had to make one small change to the script as I kept my ndo2db executable as ndo2db-3x instead of just ndo2db.

For me this fixes a problem where I have to remember to manually start the ndo2db service after a server restart, which hasn't been happening the past few times. The second way I am going to fix this problem is by creating a checklist of things that need to happen during a server restart / boot up sequence for each PC. This will also reduce the effort needed for when I move the rack later this year. The third way I will be checking this is by adding a check to Nagios to make sure this is running at all times.
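A minimal sketch of that Nagios check, assuming the stock check_procs plugin in the usual libexec directory and the ndo2db-3x process name from above (it could be wired up as a local check or through NRPE):

/usr/local/nagios/libexec/check_procs -c 1: -C ndo2db-3x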

This also illustrates my general way of checking / double-checking things. The check is to go off of a checklist. The double check is to verify that Nagios is seeing the same thing.

Wednesday, April 07, 2010

Helpful Tech Support!

I don't often gush about good experiences, but I recently had an issue with Spiceworks, which ended up being a configuration problem on my end. Not only did the Support Engineer help me with my issue but they also saw another configuration issue and made some recommendations on how to fix it. Additionally they told me about a bug in the software that I was not aware of that has been affecting the performance!

Also the support rep was able to do all of this and not make me feel like an idiot!

TL;DR
Had a problem. It was fixed. Also fixed two other issues I didn't know I had!

Thursday, April 01, 2010

Installing Acronis Backup and Recovery 10 Linux Agent on CentOS 5.4

I know it has been a while since I have posted anything. I have been busy testing out backup and recovery software. The latest one is Acronis Backup and Recovery 10 Advanced Server. The management server is very easy to setup and configure. I had some issues however while installing the Agent for Linux. But with a little research I was able to find what I needed and get everything installed.

Step number one is to make sure you have the kernel development package for your kernel. The easiest way to find out which version you need is to run the command "uname -r". On one server I needed the "kernel-devel" package and on another I needed the "kernel-PAE-devel" package. So if you don't have it already, install the correct one for your server.

After this is installed you can install another prerequisite package, the Dynamic Kernel Module Support framework, or DKMS for short. I got mine off of DAG's repo at http://dag.wieers.com/rpm/packages/dkms/. Once this is downloaded, install the package.

The last prerequisite is an updated SnapAPI module. Thanks to the Acronis KB I was able to find the updated package at http://kb.acronis.com/sites/default/files/content/2009/10/4371/snapapi26_modules-0.7.47-1.noarch.rpm.

Now that we have all of the packages installed we can install the Agent for Linux. The install is rather painless, except for the license key that you have to type in, by hand, every time for the trial edition. Maybe it's only a pain point for me because I had to do it so many times while trying to get it installed.

TL;DR

#uname -r
#yum install kernel-devel
#wget http://dag.wieers.com/rpm/packages/dkms/dkms-2.0.17.6-1.rh9.rf.noarch.rpm
#rpm -Uhv http://dag.wieers.com/rpm/packages/dkms/dkms-2.0.17.6-1.rh9.rf.noarch.rpm
#wget http://kb.acronis.com/sites/default/files/content/2009/10/4371/snapapi26_modules-0.7.47-1.noarch.rpm
#rpm -Uhv snapapi26_modules-0.7.47-1.noarch.rpm
#./AcronisAgentLinux.i686

Monday, March 15, 2010

3Com Switch find Uptime

We had another incident with our VoIP phones restarting on Friday. As part of my troubleshooting efforts I needed to be 100% certain that the POE switch providing power to our phones didn't reboot.

To start, there is no display uptime command in the CLI. But if you run the display version command, part of the output is "Switch 4500 PWR 50-Port uptime is 6 weeks, 5 days, 2 hours, 25 minutes". This is exactly what I was looking for!
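In other words, straight from the CLI:

<4500>display version

and look for the "uptime is" line in the output.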

It should also be noted that the web interface for the switch shows the uptime on the device summary page.


Wednesday, February 10, 2010

Mounting a LVM volume in Ubuntu (Live CD)

A while back my testing server crashed. This was no surprise to anyone, as it was just a (very) old workstation. However, it was running my nagios install in a production setting. I had been meaning to move it to a proper server, but just hadn't gotten around to it. To make matters worse, I didn't back any of it up. Thankfully it was only the motherboard that failed and not the HDD.

I mounted the HDD in another PC I had sitting around and booted it using an Ubuntu Live CD.

First, boot Ubuntu.
Second, install the needed tools:
$ sudo apt-get install lvm2
Third, load the modules to do our task:
$ sudo modprobe dm-mod
Fourth, scan the system for LVM volumes. Look for the volumes you want to mount. Typically this will be VolGroup00:
$ sudo vgscan
Fifth, we need to activate the volume(s):
$ sudo vgchange -ay VolGroup00
Sixth, Look for the logical volume containing the root file system. Typically this will be LogVol00:
$ sudo lvs
Seventh, create the directory to mount the drive:
$ sudo mkdir /mnt/restore
Eighth, Mount the volume to the directory you just created.
$ sudo mount /dev/VolGroup00/LogVol00 /mnt/restore -o ro,user
Ninth, Copy your files off of the drive.
$ cp /mnt/restore/some/dir/and/path /some/dir/and/path
Tenth, set up whatever backup means you have on the new server!


All in all this wasn't a terrible thing, it could have been much worse. I have since moved the nagios setup to a virtual machine and am backing it up nightly.

Upgrading the Firmware on a 3com 4500 switch

Again, this is mostly for my own notes, but someone else may find it useful. Last year we purchased a 3Com POE switch for our new VoIP phone system. This year I needed to update the firmware on it, but with only 8MB of flash I ran into a few problems.

First off, BACK UP EVERYTHING!
I used the TFTP method to transfer files to and from the switch. I used Solarwinds TFTP Server on my PC.

File name Prefix / Suffix
s3n / .app = 4500 application software.
s30 / .btm = 4500 boot ROM software
s3p / .web = 4500 web file (HTTP management interface)
3comOScfg.def / .def = 4500 config file

So first things first backing up via TFTP:
<4500>tftp [IP OF TFTP SERVER] put flash:/s3004_01.btm
<4500>tftp [IP OF TFTP SERVER] put flash:/s3p04_03.web
<4500>tftp [IP OF TFTP SERVER] put flash:/s3n03_03_02s56p05.app
<4500>tftp [IP OF TFTP SERVER] put flash:/3comOScfg.def

Please change these file names as you see fit. Do a dir on the root directory to get the listing for your particular switch.

Now that we have that backed up we need to clean up the flash:/ drive to make room for the updates.

<4500>delete s3004_01.btm
<4500>delete s3p04_03.web
<4500>delete s3n03_03_02s56p05.app

Now we also have to empty the recycle-bin. This is where I got stuck, as I didn't know a CLI could have a recycle-bin; I had never seen it done before.

<4500>reset recycle-bin

Seeing as we now have the free space we need for the new files, we can pull them down from the TFTP server.
<4500>tftp [IP OF TFTP SERVER] get s3p02_01.web
<4500>tftp [IP OF TFTP SERVER] get s3o01_01.btm
<4500>tftp [IP OF TFTP SERVER] get s3n03_02_00s56.app

Again, you will need to use the file names from the current firmware update.

One of the last steps is to tell the switch what files to use on next boot.
<4500>boot boot-loader flash:/s3n03_02_00s56.app
<4500>boot bootrom flash:/s3o02_01.btm

Finally we will save the configuration and reboot the switch.
<4500>save
<4500>reboot

That's it! After the switch reboots you will be running the newest software, except for the .web file. For some reason this is left out of all of the documentation that comes with the update. I didn't write down the commands I used to get it to update, and I honestly don't think they worked.

Wednesday, January 13, 2010

Esker Fax 5.0 System Installation

This is mostly just for me. I recently had to build a new server to host our fax software and fax board. I ran into a few problems as I had lost my documentation, so I am posting this link here to remind me should I need to do this in the future.

http://doc.esker.com/edp/5.0/en/installation/index.asp?page=installationa.html

There are some very specific prerequisites for Windows 2008 that I need to be mindful of in the future, and those can be found at http://doc.esker.com/edp/5.0/en/installation/Content/2008_requirements.html

In reality this was a very simple rebuild that I made more difficult by some rather poor planning on my part.