To clone the repo with all submodules:

```
git clone --recurse-submodules https://github.com/leonahess/smarthome
```

If you cloned without the submodules, you can fetch them afterwards with:

```
git submodule update --init
```
The backbone of the whole setup is a running InfluxDB instance. All the sensor scripts write to InfluxDB, while the website or a Grafana instance is used to visualize the data.
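All write traffic funnels through that one instance. A minimal sketch of a sensor script's write path, assuming the `influxdb` Python client package; host, measurement and tag names here are examples, not the repo's exact values:

```python
# Minimal sketch of writing one data point, assuming the `influxdb`
# Python client (pip install influxdb). Host, measurement and tag
# names are examples, not the repo's exact values.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="192.168.66.56", port=8086, database="smarthome")
client.write_points(
    [{
        "measurement": "temperature",
        "tags": {"sensor": "livingroom"},
        "fields": {"value": 21.5},
    }],
    retention_policy="2w",
)
```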
Everything is supposed to run on Python 3.7. Most things should also run on older Python 3 versions, but raspi_cpu requires Python 3.7.
Name | required python modules | Dockerfile | Unit File |
---|---|---|---|
website | | tested on Pi3B+ | probably broken |
ds18b20 | | tested on PiZero | probably broken |
dht22 | | tested on PiZero | probably broken |
hs110 | | tested on Pi3B+ | probably broken |
raspi_cpu | | only works on Pi3B+ | doesn't exist |
I run one central Pi3B+ with the database and most of the scripts, plus multiple PiZeros with the temperature and humidity sensors. This can be scaled indefinitely. All of the scripts run in their own Docker containers, which makes it easy to deploy new PiZeros or update already running scripts.
Name | Device | Services | Description |
---|---|---|---|
Collectors | 2x Raspberry Pi Zero W | | |
Pi 3 | Raspberry Pi 3B+ | | |
Kubernetes Cluster | 3x Raspberry Pi 3B+ | | Currently does nothing |
x86 Machine | AMD FX CPU | | Runs the heavy apps and stuff that doesn't run on ARM |
Quickly get an InfluxDB container up and running.

Create a Docker volume for persistent database storage:

```
docker volume create influxdb-storage
```

Then run the container:

```
docker run \
  --name influxdb \
  --restart always \
  -d \
  -p 8086:8086 \
  -v influxdb-storage:/var/lib/influxdb \
  -v $PWD/influxdb.conf:/etc/influxdb/influxdb.conf:ro \
  influxdb:latest
```
- `--name influxdb` sets the name of the container
- `-d` detaches the container from the shell
- `-p 8086:8086` opens the Influx-specific port
- `-v influxdb-storage:/var/lib/influxdb` mounts the internal data directory to the storage volume for persistent database storage
- `-v $PWD/influxdb.conf:/etc/influxdb/influxdb.conf:ro` runs Influx with the config in your current directory; leave it out for the default config
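Once the container is running, you can quickly check that it responds. A minimal sketch, assuming the `requests` Python package; InfluxDB 1.x answers `GET /ping` with HTTP 204:

```python
# Quick health check against the container above, assuming the
# `requests` package; InfluxDB 1.x answers GET /ping with HTTP 204.
import requests

resp = requests.get("http://localhost:8086/ping")
if resp.status_code == 204:
    print("InfluxDB is up")
else:
    print("Unexpected status:", resp.status_code)
```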
Quickly get a Telegraf container up and running:

```
docker run \
  -v $PWD/telegraf.conf:/etc/telegraf/telegraf.conf:ro \
  --restart always \
  --name=telegraf \
  -d \
  -h raspi-cluster-3 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e HOST_PROC=/host/proc \
  -v /proc:/host/proc:ro \
  telegraf
```

But I also build my own Telegraf container with my custom config baked in.
Quickly get a Grafana container up and running.

Create a volume for the Grafana data so it persists across container restarts:

```
docker volume create grafana-storage
```

Run the container:

```
docker run \
  --name grafana \
  --restart always \
  -d \
  -p 3000:3000 \
  -v grafana-storage:/var/lib/grafana \
  grafana/grafana
```
A Python Flask app to display various stats about the setup.

- Currently only displays temperature and humidity from the `ds18b20`, `dht22` and `hs110` scripts.
- Things to implement:
  - Data from the other scripts
  - Admin panel to change what is displayed
Reads DS18B20 sensors connected to a Raspberry Pi.

Connect all your DS18B20s to GPIO pin 4. Also don't forget to enable the 1-Wire bus (`sudo raspi-config`).

The DS18B20 sensors can run at different precisions. Edit `set_precision.py` in the scripts directory and run it once to write the setting to the sensor's memory. (The sensor's memory can only be written about 50k times, so be careful with writing to it.)
Mode | Resolution | Conversion time |
---|---|---|
9 bits | 0.5°C | 93.75 ms |
10 bits | 0.25°C | 187.5 ms |
11 bits | 0.125°C | 375 ms |
12 bits | 0.0625°C | 750 ms |
For the DS18B20 sensors, add each sensor's unique ID in the "id" field and add a name of your choosing. If you don't know the unique IDs of your DS18B20s, you can run `python3 get_ds18b20_ids.py`, which will print them out for you.
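Under the hood this works through the kernel's 1-Wire sysfs interface. A sketch of how the IDs and readings can be obtained, using the standard Raspberry Pi paths rather than the repo's exact code:

```python
# Each DS18B20 appears under /sys/bus/w1/devices/ with a "28-" prefixed
# ID once the 1-Wire bus is enabled. These are the standard Raspberry Pi
# sysfs paths, not necessarily the repo's exact code.
import glob

for device in glob.glob("/sys/bus/w1/devices/28-*"):
    sensor_id = device.split("/")[-1]
    with open(device + "/w1_slave") as f:
        data = f.read()
    # The readout's second line ends in "t=<temperature in millidegrees C>"
    temp_c = int(data.rsplit("t=", 1)[1]) / 1000.0
    print(sensor_id, temp_c)
```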
- `influx_ip = "192.168.66.56"` sets the IP of your InfluxDB server, or `localhost` if you run it on your RPi.
- `influx_port = "8086"` sets the port of the InfluxDB server; the default is `8086`.
- `influx_database = "smarthome"` sets the database name; the default is `smarthome`.
- `influx_retention_policy = "2w"` sets the retention policy, the amount of time until Influx discards your data; for infinite retention use `"autogen"` (a sketch of creating such a policy follows the table below).

Possible retention intervals:
Interval | Meaning |
---|---|
ns | nanoseconds (1 billionth of a second) |
u or µ | microseconds (1 millionth of a second) |
ms | milliseconds (1 thousandth of a second) |
s | second |
m | minute |
h | hour |
d | day |
w | week |
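The named policy has to exist in InfluxDB before the scripts can write to it. A sketch of creating it, assuming the `influxdb` Python client:

```python
# Create the "2w" retention policy referenced in the config above,
# assuming the `influxdb` Python client; run once against your server.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="192.168.66.56", port=8086)
client.create_retention_policy(
    name="2w",         # policy name used in the scripts' config
    duration="2w",     # discard data older than two weeks
    replication="1",   # single-node setup
    database="smarthome",
)
```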
`cd` into the `ds18b20` directory, then run:

```
docker build -t ds18b20 .
docker run --restart always -d --privileged --name=ds18b20 ds18b20
```
I supply a default unit file. For it to work, you have to clone this repo into the home directory of the user `pirate` (`/home/pirate/`). If you want to store the script in another location, you just have to change the path in `smarthome_ds18b20.service`.
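For orientation, a unit file of this kind boils down to a few lines. A hypothetical sketch, where the script path and file name are assumptions; check the repo's actual `smarthome_ds18b20.service`:

```
[Unit]
Description=smarthome ds18b20 collector
After=network.target

[Service]
# This is the path you would change if the repo lives somewhere
# other than /home/pirate/ (the script name here is an assumption).
ExecStart=/usr/bin/python3 /home/pirate/smarthome/ds18b20/ds18b20.py
Restart=always

[Install]
WantedBy=multi-user.target
```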
Copy the unit file `smarthome_ds18b20.service` to the correct directory:

```
sudo cp smarthome_ds18b20.service /lib/systemd/system/
```

Then set the right permissions on that file:

```
sudo chmod 644 /lib/systemd/system/smarthome_ds18b20.service
```

Then enable the service:

```
sudo systemctl daemon-reload
sudo systemctl enable smarthome_ds18b20.service
```
The script should now autostart on system startup and restart if it crashes. You can start the script without rebooting with:

```
sudo systemctl start smarthome_ds18b20.service
```

If you want to check the status of the script:

```
sudo systemctl status smarthome_ds18b20.service
```
Reads DHT22 sensors connected to a Raspberry Pi.

Connect each DHT22 to a GPIO pin of your choosing.
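A sketch of what a single read looks like, assuming the `Adafruit_DHT` package (the repo may use a different library); the pin number is an example:

```python
# One DHT22 reading via the Adafruit_DHT package (pip install Adafruit_DHT).
# GPIO pin 4 is an example; use the pin from your config.
import Adafruit_DHT

humidity, temperature = Adafruit_DHT.read_retry(Adafruit_DHT.DHT22, 4)
if humidity is not None and temperature is not None:
    print(f"{temperature:.1f} C, {humidity:.1f} %")
```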
For the DHT22 sensors, add the GPIO pin you connected each sensor to and a name of your choosing.
- `influx_ip = "192.168.66.56"` sets the IP of your InfluxDB server, or `localhost` if you run it on your RPi.
- `influx_port = "8086"` sets the port of the InfluxDB server; the default is `8086`.
- `influx_database = "smarthome"` sets the database name; the default is `smarthome`.
- `influx_retention_policy = "2w"` sets the retention policy, the amount of time until Influx discards your data; for infinite retention use `"autogen"`.

Possible retention intervals:

Interval | Meaning |
---|---|
ns | nanoseconds (1 billionth of a second) |
u or µ | microseconds (1 millionth of a second) |
ms | milliseconds (1 thousandth of a second) |
s | second |
m | minute |
h | hour |
d | day |
w | week |
`cd` into the `dht22` directory, then run:

```
docker build -t dht22 .
docker run --restart always -d --name=dht22 --privileged dht22
```
I supply a default unit file. For it to work, you have to clone this repo into the home directory of the user `pirate` (`/home/pirate/`). If you want to store the script in another location, you just have to change the path in `smarthome_dht22.service`.
Copy the unit file `smarthome_dht22.service` to the correct directory:

```
sudo cp smarthome_dht22.service /lib/systemd/system/
```

Then set the right permissions on that file:

```
sudo chmod 644 /lib/systemd/system/smarthome_dht22.service
```

Then enable the service:

```
sudo systemctl daemon-reload
sudo systemctl enable smarthome_dht22.service
```
The script should now autostart on system startup and restart if it crashes. You can start the script without rebooting with:

```
sudo systemctl start smarthome_dht22.service
```

If you want to check the status of the script:

```
sudo systemctl status smarthome_dht22.service
```
Reads TP-Link HS110 smart wall plugs.

Set up all your HS110s with the Kasa app. Then adjust the config to your needs and run the commands from the Docker section to get the container running.
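A sketch of polling one plug's energy meter, assuming the `pyHS100` package (the repo may use a different library); the IP address is an example:

```python
# Poll an HS110's built-in energy meter via pyHS100 (pip install pyhs100).
# The plug's IP address is an example; find yours in the Kasa app or router.
from pyHS100 import SmartPlug

plug = SmartPlug("192.168.66.70")
print(plug.get_emeter_realtime())  # voltage, current and power readings
```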
- `influx_ip = "192.168.66.56"` sets the IP of your InfluxDB server.
- `influx_port = "8086"` sets the port of the InfluxDB server; the default is `8086`.
- `influx_database = "smarthome"` sets the database name; the default is `smarthome`.
- `influx_retention_policy = "12w"` sets the retention policy, the amount of time until Influx discards your data; for infinite retention use `"autogen"`.

Possible retention intervals:

Interval | Meaning |
---|---|
ns | nanoseconds (1 billionth of a second) |
u or µ | microseconds (1 millionth of a second) |
ms | milliseconds (1 thousandth of a second) |
s | second |
m | minute |
h | hour |
d | day |
w | week |
Reads the temperature and CPU frequency of a Raspberry Pi 3B+.

Adjust the config to your needs, then run the commands from the Docker section to get the container running.
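Both values are exposed through sysfs on the Pi. A sketch of the reads, using the standard Raspberry Pi paths rather than the repo's exact code:

```python
# Read CPU temperature and current frequency from the standard
# Raspberry Pi sysfs paths (not necessarily the repo's exact code).
def cpu_temperature_c():
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read()) / 1000.0  # millidegrees -> degrees Celsius

def cpu_frequency_mhz():
    with open("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq") as f:
        return int(f.read()) / 1000.0  # kHz -> MHz

print(cpu_temperature_c(), cpu_frequency_mhz())
```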
- `influx_ip = "192.168.66.56"` sets the IP of your InfluxDB server.
- `influx_port = "8086"` sets the port of the InfluxDB server; the default is `8086`.
- `influx_database = "telegraf"` sets the database name; the default is `telegraf`.
- `hostname = raspi-cluster-3` sets the hostname of the container, since you can't access it otherwise.
- `influx_retention_policy = "2w"` sets the retention policy, the amount of time until Influx discards your data; for infinite retention use `"autogen"`.

Possible retention intervals:

Interval | Meaning |
---|---|
ns | nanoseconds (1 billionth of a second) |
u or µ | microseconds (1 millionth of a second) |
ms | milliseconds (1 thousandth of a second) |
s | second |
m | minute |
h | hour |
d | day |
w | week |