[UPDATE 3-6-2015:] Check out my newly posted Cacti Virtual Appliance. It is much easier to use than MRTG, and has a pre-loaded host template for F5 BIG-IP Load Balancers!
After getting MRTG set up and running in what I call my MRTG Virtual Appliance, I started adding all my networking devices for monitoring. One of the devices I really wanted to poll more advanced data from is our Load Balancer. What I really wanted to see was the number of concurrent connections to the LB and, if possible, to each of the Virtual Servers. This proved to be much more complicated than I had anticipated.
My first problem was that the SNMP software on my Ubuntu box was not configured correctly. By default, snmpd was looking in /usr/share/snmp/mib to load the MIB files. In the version of Ubuntu I had, the path was /usr/share/snmp/mib2c-data, so I had to update the snmp.conf file. Once I did that, SNMP was able to load all the add-on MIBs correctly so that the OID definitions resolved.
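For reference, a minimal snmp.conf sketch of the change: the `mibdirs` and `mibs` directives are standard Net-SNMP settings, but the directory path below is just the one from my install — adjust it to wherever your MIB files actually live.

```
# /etc/snmp/snmp.conf
# Append the directory that actually holds the MIB files on this box,
# then load everything found there.
mibdirs +/usr/share/snmp/mib2c-data
mibs    +ALL
```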
My second problem was that the MIB file I had gotten from the web was incorrect, or more to the point, outdated. My search for F5 MIBs returned many hits, and the one I relied on for most of my starting information was a nice post at vegan.net. Unfortunately, I didn't realize that it was badly out of date. As a result, LOAD-BAL-SYSTEM-MIB.txt doesn't match the software version my F5 is running, and the OID lookups fail.
My F5 is hosted, so I don't have direct access to the device. I was able to get the hosting company to grab the MIB files from the F5's filesystem, and I put them into my /usr/share/snmp/mib2c-data directory. After that, my MRTG graphs for Virtual Server connections started working. One mistake I made in this process was putting only some of the MIB files on my machine. Do yourself a favor and grab ALL the MIBs and load them into your SNMP MIBs folder.
I found a nifty script here called Buils_mrtg_cfg_for_virtual_servers.pl.txt that does an snmpwalk and gathers all the information about my Virtual Servers needed to graph current connections and bandwidth on a per-VS basis. From there, MRTG was up and running with some pretty good stats on concurrent connection rates and bandwidth utilization across all my domains.
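To give an idea of what the generated config looks like, here is a hedged mrtg.cfg fragment for one virtual server's concurrent connections. The counter name `ltmVirtualServStatClientCurConns` comes from F5's F5-BIGIP-LOCAL-MIB; the target name, community string, host, and `<vs-index>` placeholder are illustrative only — the script fills in the real per-VS index values from the walk.

```
# mrtg.cfg fragment (sketch): concurrent connections for one virtual server.
# LoadMIBs lets MRTG resolve the named OID below.
LoadMIBs: /usr/share/snmp/mib2c-data/F5-BIGIP-LOCAL-MIB.txt

Target[vs_www]: ltmVirtualServStatClientCurConns.<vs-index>&ltmVirtualServStatClientCurConns.<vs-index>:public@my-bigip
Title[vs_www]: Concurrent Connections -- www Virtual Server
MaxBytes[vs_www]: 100000
Options[vs_www]: gauge, nopercent
YLegend[vs_www]: Connections
```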
[UPDATE 8-19-2015:] Check out my newly posted Cacti Virtual Appliance. It is much easier to use than MRTG! This MRTG appliance has never been updated; I have shifted all focus over to Cacti.
Over the last couple of weeks I have been working to build an MRTG server in our VM environment. I wanted it to be very lightweight for CPU, RAM and Disk storage.
I’ve used MRTG quite a bit before and it can sometimes be tricky getting everything worked out just right so that it runs without babysitting. I finally got a pretty good install going and thought I would share it up here for anyone who might find it useful.
This is an MRTG Virtual Appliance running on Ubuntu Server for Virtualization. The install is very compact, with just a 2 GB virtual disk, 1 CPU, and 512 MB of RAM. You can download the MRTG_Appliance here. Total file size is ~430 MB.
MRTG is set up and configured, as is lighttpd as the web server. The server is configured for DHCP, and SSH is enabled for console management. There is no GUI, but there is a configuration page linked from the default webpage.
This is a first run at creating an Appliance/OVA file for me, so I’m sure I have missed some steps. I will update this page as well as the download file as any issues are identified.
Let me know how it goes so we can make it better!
One thing I just learned overnight is that you really should keep an eye on the snapshots in ESX/vSphere. We are running AVVI backups from Backup Exec 2010, which uses the vSphere storage APIs to do its business. BE has vSphere create a snapshot by calling the API, then grabs that snapshot and sends it to whatever your backup medium is.
In the past I have seen a snapshot get left behind and not deleted. Last night I started getting paged by our monitoring system that one of our AD servers was offline. After jumping through some hoops to get in via VPN (because that AD server was the one that authenticated and handed out DHCP to VPN users), I was able to get onto the ESX server. There I saw that snapshots had hogged all the available disk space on the ESX box, and my AD server had stalled as a result. It turned out that snapshots for my Exchange server had piled up, and I had to delete them. Once there was free space again, my AD server came back online and everything was OK.
Now I need to figure out a way to monitor my ESX server for datastore space so that this does not happen again.
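In the meantime, a quick-and-dirty stopgap is a free-space check script run from cron. This is just a generic shell sketch, not a vSphere API integration: on the ESX host you would point it at the datastore paths under /vmfs/volumes/, and the path and threshold below are purely illustrative.

```shell
# Sketch: warn when a volume drops below a free-space threshold.
# Usage: check_free <path> <minimum-free-percent>
check_free() {
  local path="$1" min_pct="$2"
  local used_pct
  # Field 5 of POSIX df output is the used-capacity percentage, e.g. "42%".
  used_pct=$(df -P "$path" | awk 'NR==2 {gsub("%",""); print $5}')
  if [ "$((100 - used_pct))" -lt "$min_pct" ]; then
    echo "WARNING: $path has less than ${min_pct}% free"
  else
    echo "OK: $path"
  fi
}

# Example call; on the ESX host this would be /vmfs/volumes/<datastore>.
check_free / 15
```

Wiring the WARNING line to mail or the paging system from cron would turn this into a crude datastore alarm until a proper monitoring check is in place.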
If you have had issues with Apple’s Aperture using referenced files then this post may be of some help. In my previous posts here and here I detailed the issues that I have been experiencing and some of the steps I tried to get back working. If, like me, you are still banging your head against the wall, you may want to consider manually updating the Aperture Library Database file.
The process described below is for more advanced/daring users. I’ve tried to recount everything I’ve done, but may have left some bits out. Please be careful and understand all the steps below before proceeding. As always YMMV.
So the Aperture 3.2.1 update did not fix my issue. The images stored on my NAS device still would not show as online in Aperture.
I was able to get it all fixed and working, but the solution isn’t what I had hoped it would be. I had originally just wanted Aperture to work as advertised and be able to use the files on the NAS like it is supposed to. Unfortunately life had other plans for me.
I’ll start by telling you that I now have all my images back on my internal HD, managed by Aperture, stored in my library. This was a bit of a process which I will get into later. I’ll now tell the sad story of how I ended up with this configuration.
The most recent development in my troubles is that when I went to troubleshoot my library some more, the NAS share was offline. I tried to connect with no success. I went out to the garage, where I found that the NAS was having issues. It turns out that at some point the device had simply died and would no longer boot up. So now all of the images stored on the device are inaccessible. The drives are OK, so in a worst-case scenario I could boot into a Linux OS and move all the images. Luckily for me, I’m a backup FREAK and have all the images available to restore without too much trouble.
Now I have to figure out a new solution. I decide to upgrade the internal drive on my iMac to be able to hold all the data; the size of the drive is the reason I had images stored on the NAS in the first place. I get a 2 TB drive and have it installed in my iMac. Once it is back up and running, I transfer all of the images to a temp directory on the iMac.
Once all the files are in the temp location, I fire up Aperture. I try to reconnect the referenced files, and Aperture STILL will not let me complete the process. As before, the interface simply will not allow me to click Reconnect All. After a good deal of messing around and Googling, I find a post about how to manually update the image locations in the database itself for iPhoto. With this information I start poking around in the Aperture files and find the database file.
Long story long, I was able to manipulate the Library.apdb file and change where Aperture thinks the files are. This was successful, and Aperture seems to be running fine. I was then able to ‘Consolidate Masters’ so that all the files are managed by Aperture and stored in the Library file. So now I’m back to fully operational, and all my images are usable again.
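To give a flavor of what that involves: Library.apdb is an ordinary SQLite database, so repointing file paths boils down to plain UPDATE statements run with the sqlite3 command-line tool. The snippet below is a toy demonstration only — the table and column names are made up for illustration and are NOT the real Aperture schema — but it shows the replace-the-path-prefix pattern.

```shell
# Toy demonstration only -- NOT the real Aperture schema.
# Always work on a COPY of the database, never the live library.
db=$(mktemp /tmp/apdb-demo.XXXXXX)
sqlite3 "$db" "
CREATE TABLE masters (id INTEGER PRIMARY KEY, imagePath TEXT);
INSERT INTO masters (imagePath) VALUES ('OldNAS/Photos/img_0001.jpg');
UPDATE masters SET imagePath = replace(imagePath, 'OldNAS', 'NewDisk');
SELECT imagePath FROM masters;"
# prints: NewDisk/Photos/img_0001.jpg
```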
In a following post I’ll describe the steps I had to take to edit the Library.apdb file. Stay tuned!
[UPDATE 11-4-2011:] Check my post for how I solved this issue in my setup.