We all use power-intensive appliances at home. This post walks you through a nifty little dashboard, designed by fffonion, that lets you easily view the power usage of each appliance in your home or office. All credit goes to fffonion and the contributors to the plug exporter project.

Prerequisites

  • TP-Link HS110 smart plug with energy monitoring
  • A server to run Docker containers
  • A little bit of Linux terminal knowledge
  • Basic knowledge of spinning up Docker containers
  • A router that can assign static IPs to connected devices

I'm going to assume you have already set up your TP-Link smart plug (using the Kasa app) and that it's running on your network.

The next step is to assign a static IP to your plug; this will come in handy later when we set up the data scraper.

Refer to your router's manual or Google for instructions.
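
If you're not sure which IP your plug currently has, the python-kasa command-line tool can discover Kasa devices on your LAN. This is an optional helper and not required for the rest of this guide:

# optional helper, not part of the original setup
pip install python-kasa
kasa discover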

Docker

Now comes the interesting part: let's spin up everything we need for this dashboard using Docker.

First of all, SSH into your Docker server and make sure you are in your Linux user's home folder.

Create a directory; let's call it monitoring. This can be any name you want, and it will become your stack name in Docker.

mkdir monitoring
cd monitoring

This step is optional, but it's a good idea to keep all your related containers in a separate network.

docker network create -d bridge monitoring

Let's also create some extra volumes to persist your Grafana and Prometheus data in case you decide to bring down the stack or remove and recreate it.

docker volume create grafana_data
docker volume create prometheus_data
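
You can quickly verify that the network and volumes were created:

docker network ls
docker volume ls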

Prometheus Configuration

We need to add a YAML file for Prometheus so that it knows where to gather data from.

mkdir Prometheus
cd Prometheus
nano prometheus.yml

and paste in the following contents:

scrape_configs:
  - job_name: 'kasa'
    static_configs:
    - targets:
      # IP of your smart plugs
      - 192.168.1.32
    metrics_path: /scrape
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        # IP of the exporter
        replacement: plugexporter:9233

  # scrape kasa_exporter itself
  - job_name: 'kasa_exporter'
    static_configs:
      - targets:
        # IP of the exporter
        - plugexporter:9233

Important: In the above file, make sure you replace 192.168.1.32 with the static IP of your plug, which you set up earlier. The relabel rules copy each plug address into the exporter's ?target= query parameter and then point the actual scrape at the exporter on port 9233, so the exporter queries the plug on Prometheus's behalf.

Save the file and exit (Ctrl+O, then Ctrl+X), and go back to the root of your project directory (in this case cd .. will bring you back to the root of the monitoring folder).
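
Optionally, you can sanity-check the config before starting anything. The Prometheus image ships with promtool, so something like this sketch (run from the monitoring folder) should validate it:

# validates ./Prometheus/prometheus.yml using the bundled promtool
docker run --rm --entrypoint promtool \
  -v "$PWD/Prometheus/prometheus.yml:/prometheus.yml:ro" \
  prom/prometheus check config /prometheus.yml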

For simplicity's sake, I will include the full Docker Compose file.

Create a file called docker-compose.yml in the root of your monitoring folder.

nano docker-compose.yml

and copy-paste the contents below. Make sure you set GF_SECURITY_ADMIN_PASSWORD to a password that you can remember.

version: '3.3'

services:
  grafana:
    image: grafana/grafana
    container_name: grafana
    restart: unless-stopped
    volumes:
      - grafana_data:/var/lib/grafana
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=yourpassword
      - GF_USERS_ALLOW_SIGN_UP=false
    ports:
      - 3000:3000
    networks:
      - monitoring
    links:
      - prometheus
    depends_on:
      - prometheus
  
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    restart: unless-stopped
    volumes:
      - ./Prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    ports:
      - 9090:9090
    networks:
      - monitoring
    links:
      - plugexporter
    depends_on:
      - plugexporter
  
  plugexporter:
    image: fffonion/tplink-plug-exporter
    container_name: plugexporter
    restart: unless-stopped
    networks:
      - monitoring

networks:
  monitoring:
    external: true

volumes:
  grafana_data:
    external: true
  prometheus_data:
    external: true

Press Ctrl+O to save the file and Ctrl+X to exit.

Explanation

In this case we are creating three containers. The plug exporter pulls data directly from your TP-Link plug's API, Prometheus scrapes the exporter at regular intervals to build up time-series data, and Grafana plots that data as graphs on the dashboard.

That's it. Now it's time to spin up your stack.

docker-compose up -d
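
You can confirm that all three containers came up, and tail the exporter's logs if anything looks off:

docker-compose ps
docker-compose logs -f plugexporter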

If you followed the tutorial correctly, everything should come up without issues. To view your Prometheus targets, go to http://your-server-ip:9090/targets

It should show something similar to the image below. If some targets are in an unhealthy (red) state, give it a few minutes and they should turn green.
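
If the kasa target stays red, you can query the exporter directly from inside the Docker network to check whether it can reach your plug. A quick sketch; the curlimages/curl image is just a convenient throwaway container and not part of the stack:

# replace 192.168.1.32 with your plug's static IP
docker run --rm --network monitoring curlimages/curl \
  "http://plugexporter:9233/scrape?target=192.168.1.32"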

Now let's log in to Grafana at http://your-server-ip:3000 (the username is admin, and the password is whatever you set for GF_SECURITY_ADMIN_PASSWORD) and add the Prometheus container as the data source, using http://prometheus:9090 as the URL.
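
If you'd rather not click through the UI, Grafana can also provision the data source from a file. A sketch, assuming you mount it into the container at /etc/grafana/provisioning/datasources/prometheus.yml (this file and mount are not part of the original setup):

# sketch: Grafana data source provisioning file
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true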

Get the pre-made dashboard (by fffonion) from https://grafana.com/grafana/dashboards/10957

Click "Copy Id to Clipboard"

Now, in Grafana, click the Import button, paste the ID that you copied from the link above, and click Load. Before you finish, make sure you have selected Prometheus as the data source, then click Import.

That's it! Your dashboard should be ready to use. Give it some time to populate the values.

I know this is a bit long, but I hope you enjoy setting this up as much as I did. If you run into any issues, feel free to leave a comment and I will be happy to look into it.