Synchronized Video Streaming with Raspberry PI Zero 2 W #79
I record a timestamp when receiving each frame from the camera. This timestamp is appended to the encoded frame before sending it to the WebRTC client side, so it may work for your case 😕. But you must ensure that the clocks of the different devices are synchronized, or that there is always a fixed time offset between them. There is still a lot of code that needs to be cleaned up or refactored, so I don't recommend using this project in production yet. I'm refactoring the DMA mechanism to improve performance these days, and I plan to migrate this project to my Pi Zero 2 as well 😄.
Ah, that's great! I'll try out the timestamp code then :) As for time sync, I can use NTP, so that shouldn't be a problem. As for the production warning, message received. I already like how well the code is structured, and I'm looking forward to seeing further progress! If you need some help with testing, let me know, especially with the Pi Zero 2!
Btw, for testing on the Pi Zero 2, would you recommend installing Raspberry Pi OS 64-bit?
Yes, I'm using Raspberry Pi OS 64-bit. I haven't tested others yet. 😅
Alright, I'll try that (Bullseye 64-bit)! One more question: do you build WebRTC directly on the RPi, or do you cross-compile? I'm asking because it might take a long time on the Pi Zero, and I'm not even sure it can do it at all. I'm trying to figure out a good cross-compilation recipe.
I built it on WSL with Ubuntu 20.04, and it only took 10 minutes.
Thanks. I have been trying on bare Ubuntu 22.04, but I could also give WSL on 20.04 a try. Besides the compile arguments you pointed at, you must also have a cross-compile configuration/procedure, right? I have been trying to cook one up, but I am getting weird clang errors (despite using your recommended version): there are several compiler flags that the current version of clang does not recognize, e.g. -Wno-thread-safety-reference-return (that's with both your optional procedure and the default version).
Try downloading the newer clang version here.
I don't know whether you checked out a specific branch of WebRTC or the newest one. I'm using m115 now.
That worked, thank you! I am cross-compiling on Ubuntu 22.04. Later on, if you want, I can share the whole recipe from creating the cross-compilation env onward; maybe it could be useful info for this repo. Now I am moving to
In the meantime I made progress with cross-compilation; everything works (and I carefully followed your instructions about c++11 --> c++14) until I try to run
at that point I get a ton of compilation errors; I created a gist with the output
I built all the other libs on the Raspberry Pi.
Alright, will do the same then and let you know!
Compiling on the Pi Zero seems to be quite hard though. I am now at
while using 2 GB+ of swap. I wonder if we could compile on a more powerful RPi and then transfer the library; do you think that would work?
I'm researching how to emulate Raspberry Pi OS on Windows to speed things up.
Btw, my libs were built on a 3B; the Pi Zero is too slow.
Excellent, I will do the same then (using an RPi 3 and transferring). As for the QEMU approach, let me open another issue for you with my current configuration on Ubuntu 22.04 (it should work on 20.04 as well); maybe we can work on the env together.
I recompiled all the dependent libs and released a static executable file here. You could give it a try.
That's very kind, thank you! (I deleted the message because right after I asked, the RPi 4 arrived :)
I just tested your binaries and everything is working, thank you! Back to the original question of this issue: for our purpose, we won't need a browser on the receiving end. We will instead have another arm64 Linux machine, which will match images based on their timestamps and then send them to a neural network. So I am now wondering if we should replace
It's a good question! I agree with your suggestion that the signaling mechanism could be replaced by, or selected from, other protocols. I'll try to decouple it and refactor it behind an interface in a few days. As for implementing a regular WebSocket or other protocols, I have no plan for when I'll finish that. Maybe you can implement your protocol on the interface and recompile the program with customized options like
Having an interface would be great! Alright, I'll look into the most suitable signaling protocol for my use case while waiting for the interface. I'd be happy to act as a tester afterwards; maybe that could also be helpful for others if we document it well.
I refactored the signaling service code in #86.
That's great! I will give it a try and come back with feedback, as I am not an expert :)
Quick question: which version of ffmpeg are you using on the RPi, and which version was it tested with?
I use ffmpeg version 4.3.6; the shared lib details are shown with the command
Thank you! The receiver I am working on is a Rock 5A. Here is my output of
I also need to build a specific
Do you think this could cause issues? If I understand correctly, what the receiver should do is properly decode and scale the video stream. As far as I can see, these are the dynamic libs I should pay attention to:
It'll be OK. The receiver only needs to decode the video stream.
Glad to hear it. When attempting to compile the project on the Rock 5, I indeed get issues such as
which I could fix by replacing
with
I am working on the alternative signaling and am also writing the receiver side (i.e., I am writing a
In this project, ffmpeg is only used for recording the video file, which I'm working on in issue #91.
Hello @TzuHuanTai! It took me some time, but I managed to arrange a temporary version of RaspberryPi_WebRTC that attempts to use Mosquitto for signaling. I put the instructions on how to run it in the README.md. I apologize for the not-yet-well-integrated code. There is a lot to be improved in terms of OOP, but I first wanted to keep the new files isolated so that they are easier to debug. Once everything is fixed, I'll be happy to make a PR if you are interested. So far, I have managed to successfully exchange SDP and ICE offers (at least, as far as I can tell). In particular:
About SDP
Which means that:
About ICE
Sender ICE-related logs:
Receiver ICE-related logs:
And, for completeness, here is the Mosquitto broker thread:
The ICEs seem to be exchanged; however, the
In a first attempt to debug this, I tried using two different networks, but without success. Note that I haven't yet tested a
Alright, this is it for now. I would be very grateful if you could give this setup a try. Of course, I am here to answer any questions!
I also added the dependent libs, including Mosquitto's
I just finished my vacation. I'll browse/test it in these few days.
I hope you had nice holidays! Thanks for confirming about
After running and briefly reviewing your code, I've found a problem.
Good catch! However, if I try to fix it this way, I get
And I suspect that this is due to the sender sending out an SDP offer which doesn't specify
Do you mind if I refer to (copy) your code and try it out?
Quite the contrary, I would be grateful if you could give it a try!
I invited you to the repo in case it helps.
I successfully received the video on my web receiver via MQTT signaling. I'll clean up and refactor some code, rewrite the docs, and then merge it into
That's great news! OK, I'll wait for updates then.
I already merged it into
I noticed, awesome! I am currently on holiday; once I'm back, I'll test it.
Hello @TzuHuanTai, I reviewed the code, which looks very well organized now! Despite that, I am still hitting the same issue, and I am not sure how to proceed. Let me try to explain it to you. Looking at your code, I suspect that the initiative of triggering the SDP exchange is left to the frontend, correct? What I am trying to do is the following: start the SDP exchange from the RPi client. To do that, I first try to print the SDP and inspect it. I do it with this simple method:
The problem with that is that I get the following SDP, which I believe is dictated by these (as you can see, no trace of H264):
In fact, if I replace those with the following:
I get an incomplete offer (no trace of
Now, if I instead try to kickstart the SDP offer from my receiver, I get an even smaller SDP offer, and that's because all the RTP capabilities are to be inferred from the conductor (i.e., I don't create any tracks on the receiver side):
I am quite puzzled and do not understand what I am missing in this logic.
Yes, it's left to my frontend to decide the specific codec info.
Hello @TzuHuanTai, thanks again for the hint. Let me paste here the progress I made so far (the problem is not solved yet). First, I check
Then, following this, I recompile
Then, I recompile
And I print the offer like this, in the main:
and here is
but this is what I get:
I might have fixed it. In the project
that's because, despite flagging it during
I get this offer from the conductor:
I'm glad you found the problem. Then use openh264 in your media dependencies:

```cpp
media_dependencies.video_encoder_factory = std::make_unique<webrtc::VideoEncoderFactoryTemplate<
    webrtc::OpenH264EncoderTemplateAdapter>>();
media_dependencies.video_decoder_factory = std::make_unique<webrtc::VideoDecoderFactoryTemplate<
    webrtc::OpenH264DecoderTemplateAdapter>>();
```
Hello,
Great project! I would like to use this library for a specific use case, but I am not sure if it would be suitable, so I'll ask here.
The use case involves using two Raspberry Pi Zero 2 W devices to stream video feeds in parallel to a Linux terminal. The critical requirement is to align each video frame from one stream with the corresponding frame from the other stream, based on their timestamps.
To achieve this, the following functionalities are necessary:
- Each video frame must be accompanied by a unique timestamp when sent from the Raspberry Pis.
- The receiving terminal should be capable of processing the streams by fetching frames individually, retrieving their timestamps, and synchronizing them with the frames from the alternate stream.
Could you please advise whether your library supports these features, or if there is a possibility to implement such a synchronization mechanism using your library?
Thanks in advance!