This is a guide to monitoring my home network (and various services) with Prometheus.
Startup Commands
$ DATADIR=/home/services/prometheus
$ docker run --restart=always --name=alertmanager -d -p 9093:9093 \
-v ${DATADIR}/alertmanager:/alertmanager prom/alertmanager \
-config.file=/alertmanager/alertmanager.yml
$ docker run --restart=always --name=snmpexporter -d -p 9116:9116 \
-v ${DATADIR}/snmp_exporter:/snmp-exporter prom/snmp-exporter \
-config.file=/snmp-exporter/snmp.yml
$ docker run --restart=always --name=prometheus -d -p 9090:9090 \
--link=alertmanager --link=snmpexporter -v ${DATADIR}/prometheus:/prometheus-data \
prom/prometheus -config.file=/prometheus-data/prometheus.yml
$ docker run --restart=always --name=grafana -d -p 3000:3000 --link=prometheus \
-v ${DATADIR}/grafana:/var/lib/grafana grafana/grafana
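Each of the bind mounts above expects its config file to already exist on the host, so alertmanager.yml, snmp.yml and prometheus.yml need to be in place under ${DATADIR} before the containers start. As a rough sketch, the Alertmanager config can be as small as a single placeholder receiver (this is an assumption for illustration, not the config actually used here), and once everything is running each service should answer on its published port:
$ cat > ${DATADIR}/alertmanager/alertmanager.yml <<'EOF'
route:
  receiver: 'default'
receivers:
  # Placeholder receiver with no notification settings; add e-mail/webhook details here.
  - name: 'default'
EOF
$ docker ps --format '{{.Names}}: {{.Status}}'      # all four containers should show "Up"
$ curl -s http://localhost:9090/metrics | head -n 3 # Prometheus
$ curl -s http://localhost:9093/metrics | head -n 3 # Alertmanager
$ curl -s http://localhost:9116/metrics | head -n 3 # SNMP exporter
$ curl -sI http://localhost:3000/login | head -n 1  # Grafana login page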
prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  - "router.rules"

# Scrape configurations. The first job is Prometheus scraping itself,
# the second scrapes the router via the SNMP exporter.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'snmp'
    metrics_path: /snmp
    params:
      module: [default]
    static_configs:
      - targets:
          - 192.168.1.1
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: snmpexporter:9116
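The relabel_configs section is what makes the snmp job scrape the exporter instead of the router directly: the router's address is copied into the target URL parameter and into the instance label, and the real scrape address is rewritten to snmpexporter:9116. Prometheus therefore ends up requesting http://snmpexporter:9116/snmp?module=default&target=192.168.1.1. The same request can be made by hand through the published port to check the exporter and the module definition in snmp.yml (the module name here is just the one referenced above):
$ curl -s 'http://localhost:9116/snmp?module=default&target=192.168.1.1' | head -n 20
The config also references a router.rules file that is not shown. A minimal sketch in the old single-dash-era (1.x) rule syntax, placed next to prometheus.yml so the relative path resolves, with the alert name and threshold chosen purely for illustration:
$ cat > ${DATADIR}/prometheus/router.rules <<'EOF'
# Fire when the router stops answering SNMP scrapes for five minutes.
ALERT RouterDown
  IF up{job="snmp"} == 0
  FOR 5m
  LABELS { severity = "page" }
  ANNOTATIONS { summary = "192.168.1.1 is not responding to SNMP" }
EOF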
Grafana Setup
- Go to http://localhost:3000/
- Click the Grafana logo on the top left
- Click Data Sources
- Click Add Data Source and fill in:
    Name: Prometheus
    Type: Prometheus
    URL: http://prometheus:9090/
    Access: proxy
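Before building dashboards, it is worth checking that Prometheus itself considers the SNMP target healthy; the query below uses the standard HTTP query API and should return a result with value 1 for 192.168.1.1. The suggested graph expression that follows is only an example and assumes the default module exposes the usual IF-MIB counters (ifInOctets and friends) under those names:
$ curl -s -G http://localhost:9090/api/v1/query --data-urlencode 'query=up{job="snmp"}'
# Example Grafana graph query: inbound traffic per interface, in bits per second
#   rate(ifInOctets{instance="192.168.1.1"}[5m]) * 8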