I would also guess that using 64-byte chunks is more efficient.
Original post here: https://forums.mbed.com/t/performance-question-bufferedserial-write-byte-after-byte-or-in-chuncks/18671
Hi there!
I'm finding myself scratching my head about how to deal with large buffers sent to BufferedSerial::write() as I was working on optimizing/refactoring our logger. The data is stored in a CircularBuffer (FIFO), and the call to BufferedSerial::write() is made through an event queue in a low-priority thread, to avoid blocking important stuff and to only output logs when nothing else is happening.

The current implementation, writing byte after byte, looks like this:
The chunk implementation looks like this:
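(Again a sketch rather than the original code, assuming the same serial and fifo objects and the bulk pop(T *dest, len) overload that Mbed OS 6's CircularBuffer provides.)

```cpp
#include <array>
#include "mbed.h"

static BufferedSerial serial(USBTX, USBRX, 115200);
static CircularBuffer<char, 1024> fifo;

// Chunked drain: pop up to 64 bytes into a temporary buffer, then hand
// the whole chunk to BufferedSerial::write() in a single call
void flush_logs_chunked()
{
    std::array<char, 64> buffer {};
    while (!fifo.empty()) {
        size_t length = fifo.pop(buffer.data(), buffer.size());
        serial.write(buffer.data(), length);
    }
}
```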
We first pop the data into a 64-byte std::array, then pass this buffer to BufferedSerial::write(). The assumptions are that using the temporary 64-byte std::array buffer can:

- reduce the number of calls to BufferedSerial::write()
- empty the CircularBuffer (FIFO) faster, allowing it to be filled faster as well

But to be honest I'm not sure 😂
Using chunks adds 64 bytes of RAM and 64 bytes of flash, which is something we can live with. But are my assumptions correct? Am I actually optimizing anything, or is this premature pessimization?
Our test code seems to run the same either way, and the character output is identical:

- 1988 characters/ms into the FIFO
- 12 characters/ms out to serial

So what do you guys think? Should we make the change? :)
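(A quick sanity check on that last number, assuming a standard 115200 baud, 8N1 configuration, which the post doesn't state: each character costs 10 bits on the wire (start + 8 data + stop), so the UART tops out at 115200 / 10 = 11520 characters/s, about 11.5 characters/ms. That matches the measured 12 characters/ms, which would mean the UART itself is the bottleneck and both variants drain the FIFO at the same effective rate.)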
For reference: