Category Archives: Networking

Cacti Network Grapher Virtual Appliance

Wow, it has been a long time since I have posted! It has been a really long time since I posted my MRTG Virtual Appliance. Today I hope to make up for some of that with this post of my Cacti RRDTool-based Virtual Appliance. This virtual appliance is based on CentOS 7 and is designed to be lightweight and stable. It has only the minimum of tools installed to make Cacti work.

The OS is set to DHCP, and Cacti is installed.

The username at the console and the passwords set for everything are ‘cacti’; this includes root and MySQL. The one exception is the Cacti admin user, which has the password “Cactipw1!” (no quotes).

Cacti is all configured and includes some plugins that are not installed by default. It also has some additional host templates for Palo Alto firewalls, Cisco ASA firewalls, F5 BIG-IP load balancers, and a few other things I have found useful over the years. [UPDATE] The new Cacti 1.x does not yet have all the old templates in place.

There is not a ton of documentation, as I simply have not had time. I have put together a minimal troubleshooting section below. If you are already familiar with Cacti it should be a breeze.  If there are any questions, please leave a comment, and I can assist and update as needed.




*** To upgrade a previous version of the appliance, follow my step-by-step instructions.

[Update 4/2/2018]

  • Upgraded to new Cacti and Spine v1.1.37 released 3/25/2018
  • Updated all plugins

CentOS7 Appliance with v1.1.37 Cacti, OVA is ~2.2g

 


[Update 2/7/2018]

  • Upgraded to new Cacti and Spine v1.1.34 released 2/5/2018
  • Updated all plugins
  • Upgraded CentOS
  • Upgraded PHP to v7

LEGACY – CentOS7 Appliance with v1.1.34 Cacti, OVA is ~2.2g

 

[Update 1/31/2018]

  • Upgraded to new Cacti and Spine v1.1.33 released 1/22/2018
  • Updated all plugins
  • Misc other tweaks

LEGACY – CentOS7 Appliance with v1.1.33 Cacti, OVA is ~1.9g

[Update 1/5/2018]

  • Upgraded to new Cacti and Spine v1.1.30 released 1/3/2018
  • Back by popular request! Added syslog plugin. Configured to log to new syslog db.
  • Updated all plugins
  • Added a few misc officially supported plugins
  • Misc other tweaks

LEGACY – CentOS7 Appliance with v1.1.30 Cacti, OVA is ~1.9g

[Update 12/28/2017]

  • Upgraded to new Cacti and Spine v1.1.29 released 12/27/2017
  • Downgraded VM hardware version to v8 for compatibility all the way down to ESX v5.0. Let me know if there are any issues, but it is working in ESX v6.5 for me.
  • Misc other tweaks

LEGACY – CentOS7 Appliance with v1.1.29 Cacti, OVA is ~1.9g

[Update 11/20/2017]

  • Upgraded to new Cacti and Spine v1.1.28 released 11/19/2017
  • Added Palo Alto Networks (PAN) host template
  • Added F5 BigIP host template
  • Added Advanced Ping latency graph
  • Misc other tweaks

LEGACY – CentOS7 Appliance with v1.1.28 Cacti, OVA is ~1.6g

[Update 10/17/2017]

  • Upgraded to new Cacti and Spine v1.1.26 released 10/15/2017

LEGACY – CentOS7 Appliance with v1.1.26 Cacti, OVA is ~1.6g

 

[Update 8/29/2017]

  • Upgraded to new Cacti and Spine v1.1.20 released 8/25/2017

LEGACY – CentOS7 Appliance with v1.1.20 Cacti, OVA is ~1.6g

 

[Update 7/13/2017]

  • Upgraded to new Cacti and Spine v1.1.13 released 7/13/2017

LEGACY – CentOS7 Appliance with v1.1.13 Cacti, OVA is ~1.6g

 

[Update 6/13/2017]

  • Upgraded to new Cacti and Spine v1.1.10 released 6/11/2017

LEGACY – CentOS7 Appliance with v1.1.10 Cacti, OVA is ~1.6g

 

[Update 6/5/2017]

  • Upgraded to new Cacti and Spine v1.1.9 released 6/4/2017
  • Upgraded plugins to current versions

LEGACY – CentOS7 Appliance with v1.1.9 Cacti, OVA is ~1.6g

[Update 5/22/2017]

  • Upgraded to new Cacti and Spine v1.1.7 released 5/21/2017
  • Adjusted logrotate settings
  • Installed SmokePing and set up a couple of sample targets. It can be accessed in a browser at /smokeping/smokeping.cgi

LEGACY – CentOS7 Appliance with v1.1.7 Cacti, OVA is ~1.5g

 

[Update 5/8/2017]

  • Upgraded to new Cacti v1.1.6 released 5/7/2017
  • Upgraded to new Spine v1.1.6 released 5/7/2017
  • Fixed Spine permissions issue

LEGACY – CentOS7 Appliance with v1.1.6 Cacti, OVA is ~1.4g
[Update 4/25/2017]

  • Upgraded to new Cacti v1.1.4 released 4/23/2017
  • Building a set of upgrade commands that can be used to upgrade an existing install in place. Will post shortly.

LEGACY – CentOS7 Appliance with v1.1.4 Cacti, OVA is ~1.4g

[Update 4/11/2017]

  • Upgraded to new Cacti v1.1.2 released 4/2/2017
  • Set SELinux to permissive permanently (a minimal sketch of the commands is just below)
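
A minimal sketch of those commands, assuming a stock CentOS 7 install with /etc/selinux/config in its default location:

# Switch the running system to permissive immediately
sudo setenforce 0
# Make the change permanent across reboots
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
# Verify the current and configured modes
getenforce
sestatus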

Legacy – CentOS7 Appliance with v1.1.2 Cacti, OVA is ~1.4g
[Update 3/20/2017]

  • Upgraded to new Cacti v1.1.0 released 3/17/2017
  • Added VMware Tools to CentOS

Legacy – CentOS7 Appliance with v1.1.0 Cacti, OVA is ~1.4g

[Update 2/9/2017]

All new appliance! Now based on CentOS 7 Minimal and the newly released Cacti v1.0.1. This is a great new version of Cacti with many new features, a streamlined interface, and built-in plugins.

  • Cacti now at the newest 1.0.1, released 2/5/2017
  • Changed to 1m polling as default
  • Added in officially released plugins

Legacy – CentOS7 Appliance with v1.0.1 Cacti, OVA is ~1.1g

[Update 2/7/2017]

I am working on building up the template with the new version of Cacti, v1.0.1! Pretty exciting stuff they have put together. Check back shortly for the new appliance.

[Update 6/3/2016]

  • Cacti now at the newest 0.8.8h, released 5/8/2016

Legacy – v2.4 Cacti Appliance Download OVA is ~1.4g
SHA1 checksum: e34340acf60185a7a0c3089e3451191b110db110

[Update 4/11/2016]

  • Cacti now at newest version 0.8.8g, released 02/21/2016
  • Updated CentOS

Legacy – v2.3 Cacti Appliance Download OVA is ~ 1.5g

[Update 8/14/2015] Updates to the appliance.

  • Cacti now at newest version 0.8.8f, released 07/19/2015
  • Resolved the syslog plugin retention issue. The fix is to enable the syslog plugin first, then enable all the other plugins.
  • Graph zoom issue resolved in cacti 0.8.8f.

Legacy – v2.2 Cacti Virtual Appliance Download OVA is ~1.1g

Please let me know if you have any troubles or suggestions.

[Update: 7/30/2015] I have found that in the current setup syslog will not respect your retention schedule. It seems there is a conflict with another plugin. I am in the process of figuring out which, and then will try and troubleshoot.

[Update: 7/8/2015] Updated many things in the appliance.

  • Cacti now at newest version 0.8.8d, released 06/09/15
  • added Discovery plugin
  • added Syslog plugin
  • added default traffic template
  • added FortiGate template
  • minor tweaks
  • OVA file should now import directly to VMware

Please let me know if you have any issues, or other suggestions!

Legacy – v2 Cacti Virtual Appliance Download OVA is ~ 780m

Legacy  – v1 Cacti Virtual Appliance download OVA is ~630m

TROUBLESHOOTING

Network interface not showing up after you import the template?
A couple of things to check:
Does /etc/udev/rules.d/70-persistent-net.rules exist?
If so, let's move it out of the way (this command moves it to the user's home directory):
sudo mv /etc/udev/rules.d/70-persistent-net.rules ~/
Now let's check the interface configuration:
cd /etc/sysconfig/
The “network” file here is where you control your system's hostname; its default is “cacti-template”, which you can change freely. If you don't mind the default hostname, you can ignore this file.
cd network-scripts/
In this directory you will have your network interface configuration files:
Loopback = ifcfg-lo
Primary Interface = ifcfg-eth0
If your network interface is not showing up, you may have a MAC address issue. You will need to update the ifcfg-eth0 configuration with your actual MAC address. The MAC address field in the ifcfg-eth0 configuration file is:
HWADDR
Get the MAC address of the network adapter from the virtual machine's settings, and simply replace the existing entry in ifcfg-eth0 with that MAC address.
Once the /etc/udev/rules.d/70-persistent-net.rules file is moved out of the way, and the ifcfg-eth0 configuration is updated, reboot the system and you should have networking.
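
If you would rather script those steps, here is a rough sketch that strings them together. It assumes the interface is eth0 as shipped in the appliance; NEW_MAC and the hostname cacti01 are placeholders, so substitute the MAC shown in your VM's settings and whatever hostname you want.

# Move the stale udev rule out of the way (it pins the old MAC to eth0)
sudo mv /etc/udev/rules.d/70-persistent-net.rules ~/
# Replace the HWADDR entry with the MAC from the virtual machine's settings
NEW_MAC="00:50:56:xx:xx:xx"   # placeholder - use your VM's actual MAC
sudo sed -i "s/^HWADDR=.*/HWADDR=${NEW_MAC}/" /etc/sysconfig/network-scripts/ifcfg-eth0
# On the CentOS 7 builds you can also set the hostname here if you like
sudo hostnamectl set-hostname cacti01
# Reboot so the interface comes back up with the new binding
sudo reboot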

 

 

Unintentional load test

I’ve been a little out of touch with this blog in the last month or so. Ever since Thanksgiving things have been crazy, especially at work with the busy season.

Over the last year we have made some great efforts to dramatically increase our stability and availability by adding redundancy to remove single points of failure. This happened at many levels, including the networking layer, where we introduced an HA firewall pair and an HA load balancer pair. We also built out our server infrastructure by putting 3 web servers behind the load balancer, as well as clustering our database hardware and our application server hardware. All of this was intended to easily handle the load of the retail busy season, between Thanksgiving and New Year's weekend. To really know how much we could handle, we wanted to load test the infrastructure top to bottom. Continue reading Unintentional load test

Polling an F5 Load balancer using MRTG and SNMP

[UPDATE 3-6-2015:] Check out my newly posted Cacti Virtual Appliance. It is much easier to use than MRTG, and has a pre-loaded host template for F5 BIG-IP Load Balancers!

After getting MRTG set up and running in my MRTG Virtual Appliance, as I call it, I started setting up all my networking devices for monitoring.  One of the devices I really wanted to poll some more advanced data from is our load balancer.  What I really wanted to see was the number of concurrent connections to the LB and, if possible, to each of the Virtual Servers.  This proved to be much more complicated than I had anticipated.

My first problem was that my SNMP software in Ubuntu was not configured correctly.  By default snmpd was looking in /usr/share/snmp/mib to load the MIB files.  In the version of Ubuntu that I had, the path was /usr/share/snmp/mib2c-data, so I had to update the snmp.conf file.  Once I did that, SNMP was able to correctly load all the add-on MIBs so that the OID definitions would resolve.
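
For anyone hitting the same thing, the change amounts to a couple of lines in snmp.conf; the directory and config paths below are from my install, so adjust them to match yours.

# Add the directory that actually holds the MIB files to Net-SNMP's search path
echo "mibdirs +/usr/share/snmp/mib2c-data" | sudo tee -a /etc/snmp/snmp.conf
# Load all available MIBs so the OID names resolve
echo "mibs +ALL" | sudo tee -a /etc/snmp/snmp.conf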

My second problem was that the MIB file I had gotten from the web was incorrect, or more to the point, outdated.  The search I did for F5 MIBs returned many hits, and the one I went to for most of the information I started with was a nice post at vegan.net.  Unfortunately, I didn't realize that it was really out of date.  As a result, the LOAD-BAL-SYSTEM-MIB.txt is invalid for the software version my F5 is running and fails the OID lookups.

My F5 is hosted, so I don't have direct access to the device.  I was able to get the hosting company to grab the MIB files from the filesystem of the F5, and then I put them into my /usr/share/snmp/mib2c-data directory.  After that, my MRTG graphs for Virtual Server connections started working. One mistake I made in this process was only putting some of the MIB files on my machine.  Do yourself a favor and just get ALL the MIBs and load them into your SNMP MIBs folder.

I found a nifty script here called Buils_mrtg_cfg_for_virtual_servers.pl.txt that was able to do an snmpwalk and get all the information about my Virtual Servers that I needed to graph current connections and bandwidth on a per-VS basis.  From there MRTG was up and running with some pretty good stats about concurrent connection rate and bandwidth utilization across all my domains.
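
If you want to sanity-check the MIBs by hand before pointing MRTG at them, a walk along these lines should return the per-virtual-server connection counters. The host, community string, and object name here are examples from memory, so confirm them against the MIB files you pulled from your own F5.

# Walk the per-virtual-server current connection counters
snmpwalk -v2c -c public f5.example.com F5-BIGIP-LOCAL-MIB::ltmVirtualServStatClientCurConns
# Translate the name to a numeric OID for use in an MRTG Target line
snmptranslate -On F5-BIGIP-LOCAL-MIB::ltmVirtualServStatClientCurConns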

MRTG Virtual Appliance

[UPDATE 8-19-2015:] Check out my newly posted Cacti Virtual Appliance. It is much easier to use than MRTG! This MRTG appliance has never been updated, as I have shifted all focus over to Cacti.


 




Over the last couple of weeks I have been working to build an MRTG server in our VM environment.  I wanted it to be very lightweight for CPU, RAM and Disk storage.

I’ve used MRTG quite a bit before and it can sometimes be tricky getting everything worked out just right so that it runs without babysitting.  I finally got a pretty good install going and thought I would share it up here for anyone who might find it useful.

This is an MRTG Virtual Appliance that is running on Ubuntu Server for Virtualization.  The install is very compact, with just a 2GB virtual disk, 1 CPU, and 512MB of RAM.  You can download the MRTG_Appliance here.  Total file size is ~430MB.

MRTG is set up and configured, as is lighttpd as the web server.  The server is configured for DHCP, and SSH is enabled for console management.  There is no GUI, but there is a configuration page linked from the default webpage.

This is a first run at creating an Appliance/OVA file for me, so I’m sure I have missed some steps.  I will update this page as well as the download file as any issues are identified.

Let me know how it goes so we can make it better!

Virtual Load Balancer Appliance

In our production environment that we host at Rackspace we have an F5 BIG-IP load balancer.  This is an excellent product that has way more features than we can ever hope to need.

One problem with this setup is that in our development and staging environments we do not have load balancing, and this has caused some issues when moving to production.  Some of the issues we've had stem from session persistence and which server the sessions land on.  These can be hard to troubleshoot, and if you aren't seeing them in the Dev or QA process, you end up debugging in production, which is not good.

It became pretty evident that we needed to make our staging environment as much like production as we could, so I started to poke around for a virtual load balancer.  After a bit of searching I found several that seemed to fit the need, but many of them were not free.  With a bit more digging I found that the Zeus Traffic Manager product has a developer license that is free to use for non-production environments.  This suits our needs very well, as this is just for staging and testing.

I downloaded the VMware template and had the box up and running in no time.  The initial web config is quick and easy, and the developer license is good for 1 year, after which I assume/hope I can get another free one.

The web-based configuration is clean and easy to understand.  I have never administered a load balancer myself, and even so I was able to get all of our staging sites up and running with their own pools, health checks, session persistence settings, and everything else we have in production.  Our QA team tells me that the speed is noticeably better, and it has already helped us uncover some issues that we have been fighting with our production servers.

All in all, this has been a great addition, at the best price you could hope for.

Take a look and let me know your experience.