Potential Memory Leak #786
Comments
Could you try to gather some memory profiles? Maybe @withinboredom has other tips to track memory leaks?
Did you set the GOMEMLIMIT environment variable?
So if you have 16 cores, 2 GB of memory, and a 128 MB PHP memory limit, then the calculated GOMEMLIMIT comes out negative: with the default of two PHP threads per core, 16 × 2 × 128 MB = 4 GB, which is already twice the container's memory.

I just checked an application I have and don't see a memory leak. That isn't saying there isn't one (I was wrong before and I could be wrong again), just that I'm not currently reproducing it.
In the meantime, I'll go run a few load tests and see if I can see a memory leak.
I don't see any evidence of a memory leak with the load tests I just ran.

Note that this doesn't necessarily mean that there isn't one, just that I cannot reproduce it. If you have a way of reproducing it in prod, feel free to send me an email and maybe we can figure out what is going on.
Hey again, thanks for all the replies! In my original post I forgot to mention that I've upgraded from an earlier version. Based on your suggestion @withinboredom, I've set the GOMEMLIMIT environment variable. I'll attempt to run some memory profiles in a bit.
I'm having some issues observing the profiling data: both the commands I'm running and the profiling endpoint I'm trying to access fail to produce any output. Excuse my ignorance!
Just like PHP, Go is a garbage-collected language. That means after every request, whatever was left over is simply left in memory to eventually be garbage collected. Neither PHP nor Go will really do that until there is memory pressure, and Docker memory limits do not create memory pressure even when usage is high but still within the limit (IIRC, citation needed). Using GOMEMLIMIT creates an artificial pressure on Go (technically, it creates a memory target, but that is effectively the same for our purposes). If the combined memory usage of ALL your processes exceeds the Docker memory limit, a random process in your container will be killed. You either need to give the container enough memory, or tune your processes to stay within the limit.
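To illustrate the memory-target behaviour described above, here is a minimal Go sketch (my own illustration, not FrankenPHP code): runtime/debug.SetMemoryLimit is the programmatic equivalent of the GOMEMLIMIT environment variable, and the 512 MiB value is an arbitrary example.

```go
package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// Setting a soft memory limit is the programmatic equivalent of the
	// GOMEMLIMIT environment variable: the runtime treats it as a target
	// and garbage collects more aggressively as the heap approaches it.
	previous := debug.SetMemoryLimit(512 << 20) // 512 MiB, example value
	fmt.Printf("previous limit: %d bytes\n", previous)

	// Allocate 64 chunks of 16 MiB but keep only the 8 most recent ones
	// reachable. With the limit set, the GC reclaims the overwritten
	// chunks near the target instead of waiting for OS-level pressure.
	var recent [8][]byte
	for i := 0; i < 64; i++ {
		recent[i%len(recent)] = make([]byte, 16<<20)
	}
	fmt.Println("done; retained", len(recent), "chunks")
}
```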
Closing for now. Please reopen if you can confirm the issue.
@withinboredom I'm not following this, sorry.
ah, so you're saying the above example is no good, as it results in a negative limit?
Yes, if you calculate a negative limit, then you do not have enough memory and are overcommitting. I keep meaning to write a guide. I may start that tonight.
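To make the overcommit arithmetic concrete, here is a small Go sketch of the calculation discussed above. The formula (container memory minus threads × PHP memory_limit), the two-threads-per-core default, and the suggestedGoMemLimit helper are assumptions drawn from this thread, not an official FrankenPHP tool.

```go
package main

import "fmt"

const miB = 1 << 20

// suggestedGoMemLimit is a hypothetical helper: container memory minus the
// worst-case PHP usage (threads * per-request memory_limit). A negative
// result means the PHP threads alone can exceed the container's memory.
func suggestedGoMemLimit(containerMem, phpThreads, phpMemLimit int64) int64 {
	return containerMem - phpThreads*phpMemLimit
}

func main() {
	cores := int64(16)
	threads := 2 * cores // assuming two PHP threads per core, the default
	containerMem := int64(2048 * miB)
	phpMemLimit := int64(128 * miB)

	limit := suggestedGoMemLimit(containerMem, threads, phpMemLimit)
	if limit <= 0 {
		fmt.Printf("overcommitted by %d MiB: add memory or reduce threads/memory_limit\n", -limit/miB)
		return
	}
	fmt.Printf("GOMEMLIMIT=%dMiB\n", limit/miB)
}
```

With the numbers from this thread (16 cores → 32 threads, 2 GiB of memory, a 128 MiB PHP limit) this prints an overcommit of 2048 MiB, matching the negative limit discussed above.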
thanks @withinboredom! Also note my comments here around workers, threads, PHP and Go memory limits, and how all these things are connected and should be configured.
Hi all, I would like to confirm what likely caused the behaviour shown in my screenshots. It turns out Cloud Run uses an in-memory filesystem, as documented here. This, in combination with the installed FOSHttpCacheBundle (which writes to the filesystem by default), might explain the situation I was encountering: cache writes counted against the container's memory, and the resulting crash was probably caused by an OOM killer of sorts. Since I have resolved this flaw in the caching/infrastructure, I've had zero issues related to FrankenPHP!
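For anyone wanting to verify a similar setup, here is a small Linux-only Go sketch (my own illustration, not from the issue) that checks whether a cache directory lives on an in-memory filesystem such as tmpfs. The path is a hypothetical example; point it at your application's cache directory.

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

// tmpfsMagic is the filesystem magic number for tmpfs on Linux
// (TMPFS_MAGIC in linux/magic.h).
const tmpfsMagic = 0x01021994

func main() {
	// Example path; pass your cache directory as the first argument.
	path := "/var/www/html/var/cache"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}

	var fs syscall.Statfs_t
	if err := syscall.Statfs(path, &fs); err != nil {
		fmt.Fprintln(os.Stderr, "statfs failed:", err)
		os.Exit(1)
	}

	if fs.Type == tmpfsMagic {
		fmt.Printf("%s is on tmpfs: writes here consume container memory\n", path)
	} else {
		fmt.Printf("%s is on a disk-backed filesystem (magic 0x%x)\n", path, fs.Type)
	}
}
```

On Cloud Run, where the writable filesystem is in-memory, a check like this would report tmpfs for any writable path, which is exactly the behaviour described above.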
What happened?
Hi all. I've been running a Sulu application on GCP Cloud Run for a couple of weeks now, served by FrankenPHP, and I'm encountering an odd issue potentially caused by a memory leak in FrankenPHP.
In the metrics of my application I observe the following pattern:
Memory utilisation (screenshot)
Request latencies (screenshot)
Memory usage of the container seems to gradually increase over time. At around 25%-27% utilisation, with 2 GiB allocated, the underlying PHP worker seems to crash or fail in some way, resulting in request timeouts. Personally I'm not sure exactly how FrankenPHP handles PHP processing, so please excuse my explanation here.
Besides the above observations, there seem to be no relevant error logs related to FrankenPHP or my Sulu application; response codes simply jump from 200 to 504.

In addition to the above, I've run some profiles with Blackfire in an attempt to see what's going on. The memory usage for each request is around ~80 MB, which for me doesn't explain the above.
Any help further diagnosing what's going on would be much appreciated!
Potentially related issues
I've also been digging through the issues here on GitHub; a few of them seemed potentially related.
FrankenPHP version
FrankenPHP v1.1.4 PHP 8.3.7 Caddy v2.7.6 h1:w0NymbG2m9PcvKWsrXO6EEkY9Ru4FJK8uQbYcev1p3A=
Build Type
Docker (Debian Bookworm)
Worker Mode
No
Operating System
GNU/Linux
CPU Architecture
x86_64
PHP configuration
Relevant log output
No response