
Executor hogging threads #291

Open
ybasket opened this issue Apr 16, 2020 · 6 comments

@ybasket

ybasket commented Apr 16, 2020

Hey, we sometimes see log messages like this:

[dd.trace 2020-04-14 06:25:55:223 +0000] [dd-jmx-collector] WARN org.datadog.jmxfetch.App - Executor has to be replaced for recovery processor, previous one hogging threads

Looking at the code, this seems to mean that all threads of the ExecutorService it wants to use are busy. As we see this in a service which is quite idle, this is a bit surprising… Nevertheless, it would be important to know whether this is a real problem or not. Does someone here know the rationale behind checking the "thread availability", and maybe even what to do about this log message?
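For context, the warning appears to mean that every worker thread of a fixed-size pool is still busy when the agent wants to run the next task, so it throws the pool away and builds a new one. The following is not jmxfetch's actual implementation, just a minimal Java sketch of that recovery pattern (class name, pool size and the hanging task are made up for illustration):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ExecutorRecoverySketch {

    // Small fixed pool, standing in for the agent's collection/recovery pool.
    private ThreadPoolExecutor pool =
            (ThreadPoolExecutor) Executors.newFixedThreadPool(2);

    // If every worker is still busy (e.g. stuck on a slow JMX connection),
    // abandon the old pool and create a fresh one so new tasks aren't blocked.
    ExecutorService ensureUsableExecutor() {
        if (pool.getActiveCount() >= pool.getMaximumPoolSize()) {
            System.out.println(
                "WARN Executor has to be replaced, previous one hogging threads");
            pool.shutdownNow();                   // interrupt the stuck workers
            pool = (ThreadPoolExecutor) Executors.newFixedThreadPool(2);
        }
        return pool;
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorRecoverySketch sketch = new ExecutorRecoverySketch();

        // Saturate the pool with tasks that never return on their own.
        Runnable stuck = () -> {
            try {
                Thread.sleep(Long.MAX_VALUE);     // pretend to hang on a remote call
            } catch (InterruptedException ignored) {
                // interrupted when the old pool is shut down
            }
        };
        sketch.pool.submit(stuck);
        sketch.pool.submit(stuck);
        TimeUnit.MILLISECONDS.sleep(200);         // give both tasks time to start

        // The check sees the saturated pool, logs the warning and replaces it.
        ExecutorService usable = sketch.ensureUsableExecutor();
        usable.submit(() -> System.out.println("new task runs on the fresh pool"));
        usable.shutdown();
    }
}
```

In this reading, the warning itself only reports that the previous pool was saturated at the moment of the check; whether that matters depends on why the tasks were stuck in the first place.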

@mswezey23

I noticed my services in K8s restart after this WARN is issued.

Any details?

@BurgersMcSlopshot

We're seeing this as well in a service in production: Executor has to be replaced for recovery processor, previous one hogging threads
Are there any workarounds, or any other documentation about this error?

@cb-salaikumar

Is there any update on this?

@eli-traderepublic

eli-traderepublic commented Jan 19, 2021

I noticed my services in K8s restart after this WARN is issued.

Did you manage to solve the pod restarts? I am facing the same problem.

@mpires

mpires commented Jan 19, 2021

I was seeing the same behaviour, with restarts.
It turns out that, in my case, it was due to the pod going over its memory limit and restarting, which gave rise to these logs, and not the other way around as I initially thought.
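For anyone hitting the same pattern: the restart cause is visible in Kubernetes itself (the container's last state shows OOMKilled when the memory limit was exceeded). As a rough complementary check from inside the JVM, a generic sketch like the one below (not part of jmxfetch; class name made up) prints heap and non-heap usage, both of which count against the pod's memory limit:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryHeadroomCheck {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        MemoryUsage nonHeap = memory.getNonHeapMemoryUsage();

        // Heap usage relative to the configured maximum (-Xmx or the
        // container-aware default); getMax() can be -1 if no limit is set.
        System.out.printf("heap: %d MiB used of %d MiB max%n",
                heap.getUsed() / (1024 * 1024), heap.getMax() / (1024 * 1024));

        // Non-heap memory (metaspace, code cache, ...) also counts against the
        // pod's memory limit, as do thread stacks and other native allocations
        // that are not reported here.
        System.out.printf("non-heap: %d MiB used%n",
                nonHeap.getUsed() / (1024 * 1024));
    }
}
```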

@eli-traderepublic

It turns out that, in my case, it was due to the pod going over its memory limit and restarting, which gave rise to these logs, and not the other way around as I initially thought.

Thank you @mpires - I found the problem: it was actually a misconfiguration of the app. I don't know, however, why it caused this error without providing clear details. In a working service this line is not displayed, so it was a bit confusing.
