Introduce an optional thread pool for DSL rules and events #3890
Conversation
Force-pushed from c308ac1 to 507590c
It might be better to use virtual threads as soon as Java 21 is the minimum version; this would make things easier: less code, and ThreadLocals would still work without special handling. |
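For illustration, a minimal sketch of what the suggested virtual-thread approach could look like once Java 21 is the baseline. This is the plain JDK 21 API, not anything from this PR; the class name and task are placeholders.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class VirtualThreadSketch {
    public static void main(String[] args) {
        // Available since Java 21: one lightweight virtual thread per task,
        // so there is no pool size to tune, and ThreadLocals keep working
        // because every task still gets its own (virtual) thread.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            executor.submit(() -> System.out.println("rule ran on " + Thread.currentThread()));
        } // close() waits for submitted tasks to finish
    }
}
```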
It will still take quite some time before we move to Java 21 as a minimal requirement... |
@kaikreuzer Okay, so I will push this further. Do you think it makes sense to have this … |
@joerg1985 I was asking myself where we would need it. For the DSL rules I don't think it is necessary; besides scheduling timers, there is no thread support available, and in the past we already had thread pools in place for the execution. |
I'm not sure I understand what the "it" under discussion is here. From my perspective, in the old days there was a relatively limited thread pool for rules, and any individual rule could be running multiple times across more than one thread. This could and often did lead to all the rules failing to run because just one long-running rule ran amok. I want to avoid reintroducing that behavior.

It's also my understanding that before the introduction of the … I don't know if that worked because the thread was reused or something else. I don't think we necessarily need to be overly concerned with how Nashorn UI rules worked, but it would break some users' rules if this behavior were to change.

Finally, there is an issue where the first time a JS Scripting (and therefore Blockly) script is loaded, it can take up to 30 seconds to run on 32-bit ARM machines (i.e. the recommended openHABian config). If reusing threads also means reloading rules, this problem will become really bad, as it won't just be the first time the rule runs; it will be random, or it will be every time.

So, as long as we retain the situation where one rule running amok can't impact other rules' ability to run, I'm happy. As long as using a thread pool doesn't require reloading a script for each thread it runs in, I'm happy. I do not want to go back to the days where we needed to implement reentrant locks in our rules to prevent them from running more than once at a time, and where, with or without that, a poorly written rule could bring down all rules by exhausting the thread pool. |
@rkoshak thanks for the feedback on this. The PR will not allow a rule to block the others by consuming all the threads, nor to run concurrently. The 'it' question was, as I understood it: |
I don't know enough about the implications. Does running the rule on another thread require the Script Actions/Conditions to be reloaded or reinitialized?
There is a problem in JS Scripting right now where it can take 20-40 seconds to run the rule the first time after it's loaded/reloaded. Having to randomly reload the scripts if a rule is put on a different thread would be a nightmare. |
@kaikreuzer Okay, it looks like there might be issues, but as long as the user has to opt in via the … So I would suggest to:
Any concerns about this? |
If I get it right, with the new system property, it would be a "hidden" feature that brave users can try out by activating it. For all others there won't be any change, correct? |
@kaikreuzer you are right, this will protect from trouble. |
Force-pushed from 7d7d306 to db4a88c
@kaikreuzer I just updated the PR, the flag is gone, and the review should be easier now. |
@kaikreuzer is there anything left that I need to do before this can be merged? |
So while I don't disagree with the concept, please please please please please make sure that there is a way to turn this off for power users. I've had so many issues with thread pools causing congestion over the last few years that the concept of adding one is terrifying to me. I have a ton of rules that run concurrently. I also understand that I'm a power user and as such my OH is running on a relatively beefy device that can handle such a load. Again, I can absolutely understand the reason for this especially on lower end platforms, it's why I wrote the original PR that you noted above, just please make sure there is a way to run this without a pool as well. Thank you! |
@morph166955 the current plan is to opt in via a system property to avoid general issues. Regarding congestion of the pool, this PR uses an unlimited pool; in the worst case the number of threads in the pool is equal to the number of rules. There will be some latency to start new threads, but this is only a matter of milliseconds. This sounds like you could be the perfect tester for this, or at least share some numbers? The total number of rules and the number of concurrently executing rules would be interesting for me. |
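For anyone who wants to sanity-check the "only milliseconds" claim on their own hardware, a trivial stand-alone measurement (not openHAB code, names are illustrative) could look like this:

```java
class ThreadStartLatencySketch {
    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        // Start a platform thread that does nothing and wait for it to finish;
        // the elapsed time is an upper bound on the pure thread start-up cost.
        Thread thread = new Thread(() -> { });
        thread.start();
        thread.join();
        System.out.printf("starting and joining a fresh thread took %.2f ms%n",
                (System.nanoTime() - start) / 1_000_000.0);
    }
}
```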
Force-pushed from db4a88c to 847a582
@kaikreuzer What is the state here? |
Sorry that it took me so long - please find some review comments below.
I have to admit that I do not fully understand the implemented mechanism here. In general, my experience with implementing one's own executor services is that it is a very tricky job and a lot of things can go wrong. So for now I would definitely consider this feature experimental, and we should indeed only enable it on an opt-in basis for testing. Nonetheless, I'd love to see it working and reducing the number of threads that are used, so let's move forward with it!
Review comments (all marked outdated and resolved) were left on:
- ...g.openhab.core/src/main/java/org/openhab/core/common/SequentialScheduledExecutorService.java (several threads)
- bundles/org.openhab.core/src/main/java/org/openhab/core/common/ThreadPoolManager.java
- bundles/org.openhab.core/src/main/java/org/openhab/core/internal/events/EventHandler.java
- bundles/org.openhab.core/src/test/java/org/openhab/core/common/ThreadPoolManagerTest.java
Force-pushed from 847a582 to 1cbead1
@kaikreuzer thank you for the review, I addressed most of these points. There are two open points with questions on how to continue. |
Commit: …les and events (Signed-off-by: Jörg Sautter <[email protected]>)
Force-pushed from 1cbead1 to b18c48c
@kaikreuzer now all the points are addressed, so I think this is ready 🚀 |
Thanks @joerg1985! |
@kaikreuzer yes, this is correct. To enable the new executor, the pool size must be set > 0; this allows enabling it only for specific pools. |
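A hedged illustration of what that could look like in practice, assuming pool sizes are set via the usual `org.openhab.threadpool` configuration namespace in `services/runtime.cfg`; the pool name below is only a placeholder, not necessarily one that exists:

```
# services/runtime.cfg (placeholder pool name): a size > 0 would switch this
# specific pool to the new sequential executor, per the comment above
org.openhab.threadpool:somePoolName=5
```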
Thanks, let's start testing it!
Is it documented somewhere? |
This is basically a reincarnation of the idea in #2182 with a different approach.

The `SequentialScheduledExecutorService` in this PR acts like `Executors.newSingleThreadScheduledExecutor`, but it uses a thread pool to execute the tasks. The mechanism that keeps the `ScheduledExecutorService` from running tasks concurrently is based on a chain of `CompletableFuture`s: each instance holds a reference to the last `CompletableFuture` and calls `handleAsync` to append a new task, which ensures the tasks run sequentially.

A `.schedule` call registers a callback which then submits the task to be performed, ending in the same `handleAsync` call on the `CompletableFuture`s, to ensure these also run sequentially.

The only side effect for a rule is that it no longer runs inside the same thread every time. This might break things like a `ThreadLocal`; therefore a rule has to opt in to this new behavior (see `Rule.isThreadBound`).

This feature is currently disabled by default and has to be enabled by setting the system property `org.openhab.core.common.SequentialScheduledExecutorService` to `true`.

When deserializing an existing `RuleTemplate`, the `threadBound` flag should be set to true. I am not sure how to do this; I tried in the `RuleTemplateDTO` by setting `threadBound` to true for a new instance.

Further work: the `DSLRuleProvider` could inspect the DSL and set `threadBound` to true when a `ThreadLocal` or `Thread` is used by the rule. This will ensure no existing DSL rule will break.
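Since the chaining mechanism is the heart of the PR, here is a minimal, self-contained sketch of that idea as described above. It is not the code from this PR: it omits scheduling, exception logging and shutdown handling, and the class and method names are only illustrative.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class SequentialExecutionSketch {
    private final ExecutorService pool;
    // Tail of the chain; every new task is attached here.
    private CompletableFuture<?> tail = CompletableFuture.completedFuture(null);

    SequentialExecutionSketch(ExecutorService pool) {
        this.pool = pool;
    }

    synchronized CompletableFuture<?> submit(Runnable task) {
        // handleAsync hands the next task to the shared pool once the previous
        // one has finished, regardless of whether it completed normally or
        // exceptionally, so one failing task does not stall the chain.
        tail = tail.handleAsync((result, error) -> {
            task.run();
            return null;
        }, pool);
        return tail;
    }

    public static void main(String[] args) {
        ExecutorService sharedPool = Executors.newCachedThreadPool();
        SequentialExecutionSketch rule = new SequentialExecutionSketch(sharedPool);
        CompletableFuture<?> last = null;
        for (int i = 0; i < 3; i++) {
            int n = i;
            last = rule.submit(() -> System.out
                    .println("task " + n + " on " + Thread.currentThread().getName()));
        }
        last.join(); // wait for the chain to drain before shutting down
        sharedPool.shutdown();
    }
}
```

In this sketch every `submit` call appends to the chain under a lock, so the tasks of one instance never overlap, while the threads actually doing the work come from the shared pool and are reused across instances. To try out the real feature, the system property named in the description would be passed to the JVM at startup, e.g. `-Dorg.openhab.core.common.SequentialScheduledExecutorService=true`; how that flag is passed depends on how openHAB is launched (for example via `EXTRA_JAVA_OPTS` on typical installations).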