Following on from my LoRa temperature and humidity sensor and CurrentCost SDR man-in-the-middle projects, it became apparent that a time series database was needed to store all the data being collected. RRDTool seemed the obvious choice and, although it has a good track record at dealing with time series data, previous interactions with the software left scars: in particular, data cannot be inserted retrospectively (it must arrive in chronological order), and there is no modern API.
To overcome these concerns with RRDTool, two other tools were considered: Graphite and InfluxDB. InfluxDB has a clear advantage in terms of implementation, although it seems pretty heavy and involved. Graphite, on the other hand, offers familiarity as I have used it before, so administration and maintenance would be a lot simpler with less of a learning curve due to existing knowledge of the API.
Let's get on with the Graphite server installation. Graphite comes in several components…
- Whisper – the time series data store, file based like RRDTool but without the limitations.
- Carbon – an API layer exposed by raw socket which indexes, caches, inserts and retrieves data from the whisper files.
- Graphite-Web – a web layer that provides an HTTP-accessible API, including rendering of graphs, and a web front end for exploring the data. The API itself is not RESTful but is pretty easy to use and learn.
There are also a few other Carbon components which can be used to cluster Graphite deployments, although in this case installation will be to a single server, with backups to a NAS via cron.
Installation on Debian is mega easy as packages exist for Graphite in the distribution’s repo and are actively maintained.
apt-get install graphite-carbon graphite-web
This will install and start the Carbon service; the web service is not configured to run and hence needs to be set up with SystemD later. By default Carbon is configured to store data at a 60 second interval for a maximum period of 24 hours; after this the space in the whisper file is overwritten with new data, just like a round robin database file in RRDTool. In my case I wanted to store the data at high resolution essentially forever, so I updated the default Carbon storage schema to a 10 second interval kept for 30 years. I expect this server will be replaced by then, so we can worry about data migration at a later date. Of course this comes with a large increase in storage cost: each metric at this resolution and retention eats 1.1 GB of disk, although if you do not have many metrics (in my case probably a maximum of 10 or so in the end) it isn't a concern at today's hard drive prices. You should edit the storage schema to fit your use case…
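That 1.1 GB figure is easy to sanity-check: a whisper file stores one 12-byte point (a 4-byte timestamp plus an 8-byte double) per interval. A quick back-of-the-envelope in shell:

```shell
# points needed: 30 years (ignoring leap days) at one point per 10 seconds
points=$(( 30 * 365 * 24 * 3600 / 10 ))
# each point on disk is 12 bytes: a 4-byte timestamp plus an 8-byte double
bytes=$(( points * 12 ))
echo "$points points, $bytes bytes"
```

The result, a shade over a gigabyte per metric, matches the file size seen on disk later (plus a small header).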
vi /etc/carbon/storage-schemas.conf

[carbon]
pattern = ^carbon\.
retentions = 60:90d

[default_10secs_for_30years]
pattern = .*
retentions = 10s:30y
After the storage schema has been configured, restart the Carbon service; following this you can start writing data to your Graphite server.
service carbon-cache restart
Writing data to your Graphite server is easy using a raw socket, but be warned: this is unauthenticated and unencrypted, so you should only run it within your local network and not out on the internet. If you want to expose write access to your Graphite server to external devices, it is recommended to write a wrapper API which implements authentication and SSL. Because the input to Carbon is a raw socket, we can simply write data to the time series database using netcat… Let's add some test data to the database to ensure it's working properly.
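The wire format Carbon accepts on port 2003 is one line per data point: the metric path, the value, and a Unix timestamp, separated by spaces. A small hypothetical wrapper (the function name and default timestamp are my own) that builds such a line ready to pipe into netcat:

```shell
# format_metric: build one line of Carbon's plaintext protocol
# usage: format_metric <metric.path> <value> [unix-timestamp]
# the timestamp defaults to now if not supplied
format_metric() {
  printf '%s %s %s\n' "$1" "$2" "${3:-$(date +%s)}"
}

# pipe the formatted line into Carbon's raw socket, just as with echo below
# format_metric sensors.lounge.temperature 21.5 | nc -q0 127.0.0.1 2003
```

The sensors.lounge.temperature name above is purely illustrative; use whatever metric hierarchy suits your sensors.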
echo "demo.increments 1 `date +%s`" | nc -q0 127.0.0.1 2003
sleep 10
echo "demo.increments 2 `date +%s`" | nc -q0 127.0.0.1 2003
sleep 10
echo "demo.increments 3 `date +%s`" | nc -q0 127.0.0.1 2003
sleep 10
echo "demo.increments 4 `date +%s`" | nc -q0 127.0.0.1 2003
sleep 10
echo "demo.increments 5 `date +%s`" | nc -q0 127.0.0.1 2003
sleep 10
echo "demo.increments 6 `date +%s`" | nc -q0 127.0.0.1 2003
sleep 10
echo "demo.increments 7 `date +%s`" | nc -q0 127.0.0.1 2003
sleep 10
echo "demo.increments 8 `date +%s`" | nc -q0 127.0.0.1 2003
sleep 10
echo "demo.increments 9 `date +%s`" | nc -q0 127.0.0.1 2003
sleep 10
echo "demo.increments 10 `date +%s`" | nc -q0 127.0.0.1 2003
The sleeps here should be sufficient to write the values into the separate intervals defined in your storage schema; if you do not sleep for a sufficient period the values will be averaged. Of course, instead of sleeping you could pass in pre-decided timestamps rather than using the date command. After the inserts have run, check out /var/lib/graphite/whisper/demo; you should see a file called increments.
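Pre-computing the timestamps avoids the sleeping entirely; a quick sketch (the helper name is my own) that emits the same ten samples back-dated at 10 second spacing, ready to send in a single connection:

```shell
# gen_points: emit 10 samples for demo.increments with explicit timestamps,
# spaced 10 seconds apart and ending at the current time, so no sleeps needed
gen_points() {
  now=$(date +%s)
  for i in $(seq 1 10); do
    echo "demo.increments $i $(( now - (10 - i) * 10 ))"
  done
}

# gen_points | nc -q0 127.0.0.1 2003   # send the lot to Carbon in one go
```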
ls -alh /var/lib/graphite/whisper/demo
total 1.1G
drwxr-xr-x 2 _graphite _graphite 4.0K Jun 25 20:05 .
drwxr-xr-x 5 _graphite _graphite 4.0K Jun 25 20:05 ..
-rw-r--r-- 1 _graphite _graphite 1.1G Jun 25 20:07 increments.wsp
Here we can see that a whisper file has been created for our metric "increments". Metrics can be nested, and the folders within /var/lib/graphite/whisper are arranged accordingly. Whisper files can be interrogated using the Whisper CLI tools…
whisper-info increments.wsp
maxRetention: 946080000  # equivalent to 30 years at 10 second resolution.
xFilesFactor: 0.5
aggregationMethod: average
fileSize: 1135296028

Archive 0
retention: 946080000
secondsPerPoint: 10
points: 94608000
size: 1135296000
offset: 28
Using the whisper-dump command we can see all the time series data stored in the file.
whisper-dump increments.wsp
--very long output mostly made up of 0 values--
OK, this proves Carbon is writing to the Whisper files as required. Now let's look at the Web UI and graphing the data. For now let's run graphite-web by hand; we can create a SystemD service for it once we are happy everything is functional. When you first run the web server you will need to run graphite-manage syncdb to run database migrations for the Web UI. The database by default is an SQLite file in /var/lib/graphite, although for larger installations you can swap this out for MySQL or PostgreSQL by editing /etc/graphite/local_settings.py.
graphite-manage syncdb
graphite-manage runserver 0.0.0.0:8000
You should now be able to open the Graphite Web UI in your browser on the port specified in the runserver command. The left-hand side of the UI is a tree containing all of your data, and the right-hand pane is used for drawing graphs. Expand the tree and click on increments to load in data from your whisper file; this will ask Carbon to fetch the data either from its cache or the disk.
You'll notice the data isn't really displayed that well. By default the Graphite Composer draws graphs with a 24 hour time window, and as we only entered the data over 100 seconds it appears as a very small vertical line on the graph, so let's shrink the graph's time window to make the data actually readable. Click the select recent data button (5th from the left in the composer window) and change the time range to the past 30 minutes. The graph will now update and we should see the incrementing data we previously submitted to Carbon. Of course, as our submission times do not exactly match the intervals of the whisper file, some averaging has occurred; we can see this as the data ends around 9 rather than the 10 submitted.
The basic functionality seems to be working great. Exit the graphite-web server by pressing ctrl-c, and let's configure backup and SystemD to start the graphite-web service. Of course, if you prefer you can use Apache with mod_wsgi, or uWSGI and NGINX (after all, graphite-web is only a Django application); however, as it's only to be used internally by myself, I'll just run it standalone.
To configure Graphite-Web to run under SystemD create a new service file in /lib/systemd/system such as…
vi /lib/systemd/system/graphite-web.service

[Unit]
Description=Graphite Web Service

[Service]
Type=simple
ExecStart=/usr/bin/graphite-manage runserver 0.0.0.0:8000

[Install]
WantedBy=multi-user.target
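Optionally, a couple of extra directives in the [Service] section will restart the UI automatically if it crashes and run it as the graphite user rather than root. These are a suggestion rather than part of the original unit; the _graphite user name matches the file owner seen in the ls output earlier, so adjust it if yours differs.

```ini
[Service]
User=_graphite
Restart=on-failure
RestartSec=5
```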
Save the file and then run systemctl daemon-reload to register the service; following this you can test it using the service command.
systemctl daemon-reload
service graphite-web start
Now check it’s up and running in your browser!
If the UI loads as per the above then we know SystemD has the service configured OK; let's make it run at boot time.
systemctl enable graphite-web
Configuring Backup to NAS
If you do not have a clustered Graphite deployment you obviously have a huge single point of failure; with the data stored on only one node, a failure makes data loss quite likely. To combat this it is advisable to back up your /var/lib/graphite folder to another host, preferably offsite. In my case I have installed Graphite on my existing web server, which already mounts an NFS share from my ZFS powered NAS. This NAS snapshots the shares periodically and replicates them offsite to another ZFS based NAS, so I will simply reuse this for backing up my Graphite data.
The share from the NAS is mounted on /mnt/web_backups via AutoFS and the backups are started by cron at 12 every day. The existing backup script will be altered to tar up the Graphite data and dump it into the share; the configuration is also copied for easy restoration at a later date. Most of the file has been redacted.
vi /root/do_backups.sh

#!/bin/bash
mysq...
...ache2
tar -zcvf /mnt/web_backups/graphite-data-`date +%Y-%m-%d_%H-%M-%S`.tar.gz /var/lib/graphite
tar -zcvf /mnt/web_backups/graphite-config-`date +%Y-%m-%d_%H-%M-%S`.tar.gz /etc/graphite
tar -zcvf /mnt/web_backups/carbon-config-`date +%Y-%m-%d_%H-%M-%S`.tar.gz /etc/carbon
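These dated tarballs will accumulate forever, so it may be worth pruning old ones from the same cron job. A sketch, assuming a hypothetical 30-day retention (the helper name and retention period are my own; the ZFS snapshots provide the longer history anyway):

```shell
# prune_backups: delete tarballs older than a given number of days
# usage: prune_backups <dir> <days>
prune_backups() {
  find "$1" -name '*.tar.gz' -mtime "+$2" -delete
}

# prune_backups /mnt/web_backups 30   # keep roughly a month of tarballs
```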
Now the backup script is run, and the resulting tar.gz files checked for completeness. If you'd like more information on mounting the NFS share from the NAS, or on AutoFS, check out the Debian Wiki.
Fetch data via the render API
Now everything is up and running it is possible to fetch data via the Graphite render API. By default Carbon automatically adds some metrics to the time series database about the local machine and statistics about the Carbon cache. Let's query these via the render API; first, let's fetch the last hour of CPU usage as JSON output.
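The JSON request is the same render endpoint used for the PNG below, just with format=json. A sketch that builds the URL first (the carbon.agents.web-a metric name reflects my hostname and will differ on yours):

```shell
# build the render API URL for the last hour of Carbon's own CPU usage metric
base='http://localhost:8000/render/'
query='target=carbon.agents.web-a.cpuUsage&format=json&from=-1hours'
url="${base}?${query}"
echo "$url"

# curl -s "$url"   # requires the graphite-web server from earlier to be running
```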
Now let's use the render API to output a PNG file containing a graph; there are loads of different formats which can be rendered, including PDF, SVG and many more.
curl 'http://localhost:8000/render/?target=carbon.agents.web-a.cpuUsage&format=png&from=-1hours&width=800&height=600' > ~/test.png
display ~/test.png
Hopefully you should now have a CPU usage graph similar to the below, output into test.png.
Installing Graphite on my existing Debian web server was pretty quick and painless, and it works well. Hopefully the above gives you a leg up compared to storing your IoT sensor submissions in RRDTool or a MySQL DB. Unfortunately it's probably out of scope to do clustering in my configuration as I don't really want several servers running in my house due to the heat / noise / power usage etc… The Graphite web UI is useful for quickly viewing the data and allows saving favourites etc… so you can probably get away without running Grafana. Next I'll work on integrating my sensors and making some dashboards using the render API.