

PerpetON (The Server Monitor) Blog
This thread belongs to forum.perpeton.com


2012-02-14 14:24 GMT   |   #2

Comments: 14
Birth of Perpeton, Continued

We decided to create our own monitoring software, but we had to do it in a clever way so we could use it easily both on our existing environments and on new projects.
Because we usually offer complete solutions to our customers, from software tailored to their business to the configuration and installation of additional software, and also the management and maintenance of their servers, we needed a way to monitor all of this too.

We came up with the idea of creating modules inside the agent, each measuring specific things; this way we can easily add new functionality just by adding new modules. Agent versioning helps people easily upgrade to new versions with additional extensions and improvements. The upgrade system was designed as a simple, semi-automatic process that exploits the possibilities of the operating system.

On each operating system we created individual modules to provide specific metrics: about the operating system itself, network traffic, disk usage, specific applications like database servers or web servers, and so on. This modular approach also let us solve the monitoring of our own applications, by creating a generic developer module which can communicate with any external application via a simple file.

This module eases the way developers can track their applications. As a developer, all you have to do is export the metrics you want to follow into a file every minute; the agent reads the file and sends its contents to the centralized system for storage and processing. This way we saved up to 10% of our development time, and we could finally analyze all of our projects in a unified way.
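To give an idea of how little work this is on the application side, here is a minimal Python sketch of such an export. The file path and the key=value line format are our assumptions for illustration only; the actual agent may expect a different location and syntax.

```python
import os
import tempfile
import time

# Hypothetical path the agent would be configured to watch.
METRICS_FILE = "/tmp/myapp_metrics.txt"

def format_metrics(metrics):
    """Render a metrics dict as one key=value pair per line."""
    return "".join(f"{name}={value}\n" for name, value in sorted(metrics.items()))

def export_metrics(metrics, path=METRICS_FILE):
    """Write the metrics file atomically, so the agent never reads a half-written file."""
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        f.write(format_metrics(metrics))
    os.replace(tmp_path, path)  # atomic rename on POSIX systems

if __name__ == "__main__":
    # Export once a minute, matching the agent's read interval.
    while True:
        export_metrics({"active_users": 42, "queue_length": 7})
        time.sleep(60)
```

The atomic write-then-rename matters: since the agent polls the file on its own schedule, a plain overwrite could be read mid-write and produce a corrupted sample.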

This project was started for internal use within our company, but the solution turned out to be so generic that we decided to share it with others who are in the same shoes as we are.
There were debates about offering it for free, but the maintenance and data storage costs proved to be too high, so we decided to keep the basic monitoring of a server free and charge a moderate fee for the other modules.

The long-term advantages are clear: from stopping the finger-pointing, to focusing your optimization efforts where they are really needed, to eventually reducing your hardware needs and being aware of what is going on in your system.
If you haven't used such a system before, we offer a 30-day free trial so you can see for yourself.
2012-02-09 16:02 GMT   |   #1

Birth of Perpeton

This project was started by two software engineers passionate about creating quality software.
We began working at a company that handled a large volume of data in real time. Later we started our own company, but our new projects were even bigger and more resource-intensive, so it was very important to create performant, optimized solutions. To do this we needed a tool to measure the performance of this software, so we searched the internet for monitoring solutions, but none of the ones we found were quite what we needed:

1. One approach is to install a full monitoring application on the server you want to monitor.
There are pretty good tools in this area, like the free Nagios or some paid ones, but they all have major drawbacks:
- they must be installed on the server you want to monitor, which means they eat your precious resources, both computing power and storage space
- some were really complicated even for us to understand, and the important metrics were hard to find
- they were not user friendly at all
- we couldn't measure metrics of our own software
- either they had no real-time alerting at all, or it became useless once the server crashed

2. Another set of solutions tries to do the monitoring from the outside, probing ports to see whether any application is listening there. This raises several problems:
- they can't measure accurate data; they can only tell whether the service is up and how fast it answers
- you need to open the firewall that protects the service you want to monitor. This is a major security issue: for example, a database usually accepts only local connections, and you may need to open access to a large set of IP addresses.
- you have no access to the really important metrics like memory, CPU and storage usage
- application specific metrics are out of the question

3. The best solution is agent-based: you install an agent application which only exports the measurements, while all the handling, storing, analyzing and alerting are done externally.
Serious tools implementing this approach are few, and they have weak points combining shortcomings from all the previous solutions:
- some are very complicated
- some are not user friendly at all
- some can't even provide correct measurements
- in all of them it is pretty hard to measure metrics of your own software, which is the most important thing we need as developers
- they have shortcomings in their alerting systems, such as not being real time or not allowing customizable alerts

As we couldn't find a solution up to the job, we started building monitoring into our own software. It was a long and painful process: we had to redo similar functionality for each and every project, then check those measurements on each project and generate reports for each one, the reports all looked different, they had no real-time alerting, and so on.

As our frustration grew, and with it our experience of what and how to measure, we came up with the idea of creating a generic, unified way of monitoring server software, hardware and application-specific metrics, solving many of the problems of the existing solutions along the way.

More in our next post ...