pimd integration #61
Conversation
@pvizeli
We are using our tool tempio: https://github.com/home-assistant/plugin-dns/blob/master/rootfs/etc/cont-init.d/corefile.sh. The communication between SU and the plugins is a bit limited. We are working on a new generation that includes a small daemon in the plugin to control things from inside. But I think for now, as an MVP, using the config file should work.
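For orientation, a tempio call for this plug-in could look roughly like the linked corefile.sh does for CoreDNS. The paths and the template name below are hypothetical, not taken from this PR:

```bash
# Hedged sketch: render a pimd.conf from the plug-in's JSON options with tempio
tempio \
    -conf /data/options.json \
    -template /usr/share/tempio/pimd.gtpl \
    -out /etc/pimd.conf
```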
```diff
@@ -11,6 +11,7 @@ on:
      - Dockerfile
      - build.yaml
      - 'rootfs/**'
+  workflow_dispatch:
```
That is triggered on merge, which also generates the version number; this would only overwrite the previous build.
That was just a workaround to manually trigger the build in my dev branch, because builder.yml only listens for changes on the master branch, of course.
Hi! Is it possible to get this dev/draft plugin into a running HA installation, to test/play around with or to optimize the pimd configuration?
To get an all-in-one build environment, the builder can be used. I often end up calling:
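The exact invocation was not captured here; as a rough sketch, assuming the stock Home Assistant builder image and that the plug-in repository is the current directory, such a call might look like this:

```bash
# Hedged sketch only: image name, mounts and flags are typical builder usage,
# not the exact command from this thread.
docker run --rm -it --privileged \
    -v ~/.docker:/root/.docker \
    -v "$(pwd)":/data \
    homeassistant/amd64-builder \
    --amd64 --target /data --test --docker-hub local
```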
I've fixed two issues with the latest base images and built the plug-in. It seems pimd is running; however, I'm not sure if it does what it should 😅 Maybe it also interferes with mDNS etc... This needs more testing! To test it on a running machine, you can replace the current multicast plugin as follows:
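The original commands are not shown here; a minimal sketch of such a swap, assuming a hypothetical custom image name and the standard Supervisor plug-in image and container names, could look like this:

```bash
# Hypothetical image name -- use whatever custom multicast image was actually built
docker pull ghcr.io/agners/amd64-hassio-multicast:dev
# Re-tag it over the image the Supervisor expects (check `docker images` for the
# exact name and installed version on your system):
docker tag ghcr.io/agners/amd64-hassio-multicast:dev \
    ghcr.io/home-assistant/amd64-hassio-multicast:<installed-version>
# Remove the running plug-in container so the Supervisor recreates it from the new image
docker rm -f hassio_multicast
```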
To restore the original:
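Again only a sketch under the same assumptions: re-pulling the official image overwrites the local re-tag, after which the Supervisor can recreate the container.

```bash
docker pull ghcr.io/home-assistant/amd64-hassio-multicast:<installed-version>
docker rm -f hassio_multicast
# A Supervisor restart (or `ha supervisor repair`) should also put things back.
```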
(I've built it for amd64 and aarch64; replace according to your system's architecture)
You can find an example of this using …
Thanks for all these hints @agners. Highly appreciated. I indeed got pimd up & running in the test environment here as well, but I also don't know whether it works correctly or not, as I definitely need to read more on that UDP multicasting topic, it seems.

One thing that came to my mind, however, is: is this Docker-based plugin-multicast itself running on the host network (thus in host mode)? Because otherwise we would end up in the same situation that Docker containers usually don't receive any UDP multicast traffic, and thus pimd / plugin-multicast cannot forward it to all the other individual containers.

And thanks for the short documentation on how to test all this and how to build a test container. I will see if I can try all this rather soon, and then we can see if pimd works as expected and is able to forward multicast UDP traffic. I fear, however, that we would need some kind of testbed here to actually reproduce / simulate UDP multicast traffic arriving at pimd.
Yeah, multicast is a bit of a pain. But in general, it's always UDP (as TCP is point-to-point by nature). IGMP is used to register membership. By default, it does not cross routers (and that is how the internal Docker networks end up isolated from it). With a multicast router such as pimd in between, that boundary can be crossed.
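Since a testbed for generating multicast traffic keeps coming up in this thread, here is a minimal sketch, assuming socat is available; the group, port and interface names are arbitrary examples (loosely matching the HomematicIP ports mentioned later):

```bash
# Receiver: join 224.0.0.120 on eth0 and print whatever arrives on UDP port 43438
socat -u UDP4-RECVFROM:43438,ip-add-membership=224.0.0.120:eth0,fork -

# Sender (ideally from another network): send a datagram with TTL > 1 so routers
# like pimd are allowed to forward it at all
echo "hello" | socat - UDP4-DATAGRAM:224.0.0.120:43438,ip-multicast-ttl=5
```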
Yes, the plug-in runs in host mode. Otherwise it could not do this, I agree. We probably want to limit the interfaces pimd operates on. What is also helpful for testing is docker exec'ing into the container, shutting down the pimd s6 service and manually starting the service, e.g.:
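The concrete commands did not survive here; a sketch of that workflow, assuming the plug-in container is named hassio_multicast and an s6-overlay v2 service layout, might be:

```bash
docker exec -it hassio_multicast /bin/bash

# inside the container: stop the supervised pimd service
# (the service directory may differ between s6-overlay versions)
s6-svc -d /var/run/s6/services/pimd

# ...then run pimd manually in the foreground to watch what it does
pimd -n
```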
@agners nice work! I made a full testbench with HA, the RaspberryMatic CCU add-on with a connected Homematic IP RF-USB stick, the DRAP (multicast device) and two sensors (a presence detector and a temperature sensor connected via IP wired). @jens-maus @agners
I tried something similar, because I wanted to move RaspberryMatic into a DMZ, while keeping the DRAP in the IoT network. I didn't use pimd, but smcroute (static multicast routing) instead. While I made sure to increase the TTL of all packets to prevent them from being discarded, the traffic never reached the other interface. From troglobit/smcroute#50 I learned that 224.0.0.0/24 is for link-local traffic only and is thus discarded by the kernel. Unfortunately, that also seems to be the case for pimd (troglobit/pimd#120).
It would be great if you could present some summarized network traffic that you probably captured with tools like Wireshark.
Sure. Here are my observations. I configured smcroute to allow multicast traffic on 224.0.0.1 and 224.0.0.120 between all my networks:

```mermaid
stateDiagram
    LAN_1 --> IOT_100
    IOT_100 --> LAN_1
    DMZ_11 --> LAN_1
    LAN_1 --> DMZ_11
```
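For illustration, a configuration with that intent might look roughly like the following smcroute.conf fragment; the interface names are placeholders, not taken from the thread:

```bash
# Hedged sketch of a static multicast routing setup between two interfaces
cat > /etc/smcroute.conf <<'EOF'
# join the groups on the LAN-side interface so the kernel receives them
mgroup from eth0 group 224.0.0.1
mgroup from eth0 group 224.0.0.120
# statically forward them towards the IOT-side interface (and vice versa)
mroute from eth0 group 224.0.0.120 to eth1
mroute from eth1 group 224.0.0.120 to eth0
EOF
```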
The networks are 192.168.XXX.1/24 (where the XXX is noted in the above diagram). My CCU has the IP 192.168.100.11; the DRAP has 192.168.100.1419. The TTL of all multicast packets is increased to five by pre-routing rules, and firewall rules are set so that the packets are not discarded. For SSDP (which was configured analogously), this is working properly (see SSDP dump.csv). Then I ran the net finder from inside the IOT network, as well as from within my LAN.

**Observations from same network (IOT)**

In this case, I am observing the following communication (see HmIP same network.csv):

**discovery (triggered by net finder)**

```mermaid
sequenceDiagram
    net finder->>224.0.0.1: Who is there?
    DRAP->>224.0.0.1: DRAP with {SN} and {FW}
    CCU->>224.0.0.1: CCU with {SN} and {FW}
    net finder->>224.0.0.1: DRAP with {SN}: Reply
    DRAP->>224.0.0.1: ACK (?)
    net finder->>224.0.0.1: CCU with {SN}: Reply
    CCU->>224.0.0.1: ACK (?)
```
The net finder always sends discovery requests to port 43439. Devices answer from port 43439 to the net finder's source port, where the request originated from. The TTL of the request is 5. The TTL of the responses is 1.

**heartbeats (sent automatically from the Homematic devices)**

Both CCU and DRAP send regular multicasts to 224.0.0.120:43438 with changing payload, but constant payload size. The payload size seems to vary over longer time durations.

**Observations from different network (LAN)**

Here, I only see the discovery request from the net finder, but no response (see HmIP routed.csv). When capturing on the IOT interface at the same time, I don't see the forwarded discovery packet.
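A sketch of how such per-interface captures can be produced (interface names are placeholders; the ports are the HomematicIP ones observed above):

```bash
# capture discovery/heartbeat traffic on the LAN-facing interface...
tcpdump -i eth0 -n 'udp and (port 43438 or port 43439)' -w hmip-lan.pcap
# ...and in parallel on the IOT-facing interface to see whether anything is forwarded
tcpdump -i eth1 -n 'udp and (port 43438 or port 43439)' -w hmip-iot.pcap
```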
This all sounds tricky. An easy solution could be to use an external USB network adapter and forward this adapter into the Docker container. That way the DRAPs get their own NIC and the rest is unaffected. The downside is that a second network cable and a running DHCP server are needed.
We actually do have a different type of solution in the making/pipeline for the HAP/DRAP communication issue we originally tried to solve with this PR, see jens-maus/RaspberryMatic#1373 (comment). For the ordinary Docker/OCI use case we already developed a solution using a `macvlan` network (see https://github.com/jens-maus/RaspberryMatic/wiki/Installation-Docker-OCI). For the HA add-on use case of RaspberryMatic, however, we are waiting for the HA devs to develop an integrated solution so that an add-on can declare that it requires a macvlan instead of only using the standard hassio Docker network. In the meantime, we have a manual patch script to get a running RaspberryMatic HA add-on working until the next HA restart. See here: https://github.com/jens-maus/RaspberryMatic/wiki/Installation-HomeAssistant#hmip-haphmipw-drap-support-patch
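For context, the macvlan approach mentioned for the plain Docker/OCI case boils down to something like this; subnet, gateway and parent interface are examples for a typical home LAN, not taken from the linked wiki:

```bash
docker network create -d macvlan \
    --subnet=192.168.1.0/24 \
    --gateway=192.168.1.1 \
    -o parent=eth0 \
    homematic-lan
# Containers attached to "homematic-lan" get their own address directly on the
# physical network and therefore receive multicast traffic natively.
```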
Initial discussion on how to integrate macvlan support in the Home Assistant Supervisor has started: home-assistant/architecture#1034. I am closing this PR, as the pimd integration approach is being given up on for now.
This PR refs #17 and is a first draft of getting pimd (https://github.com/troglobit/pimd) compiled and integrated into plugin-multicast. Evaluation, setup and testing are still missing, though. But this draft should help in getting things sorted out over time, so that we finally get pimd integrated for third-party add-ons like the RaspberryMatic CCU, which also require UDP multicast to be routed between the Docker networks and the host network; hopefully pimd can solve that.
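As a rough illustration of the kind of setup such an integration would eventually have to produce, a minimal pimd configuration might look like the following; the interface names and the rendezvous-point address are purely assumptions for illustration, not part of this draft:

```bash
cat > /etc/pimd.conf <<'EOF'
# enable PIM on the host uplink and on the internal hassio bridge (names assumed)
phyint eth0 enable
phyint hassio enable
# static rendezvous point for the whole multicast range (address is an example)
rp-address 192.168.1.2 224.0.0.0/4
EOF
```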