Race condition between aws-vpc-cni-k8s and systemd-networkd on AL2023 #3162
Comments
Thank you for providing these details. We had seen a similar issue with a race between the amazon-ec2-net-utils package installed on AL2023 AMIs (earlier than v20240329) and the VPC CNI, where the default gateway routes installed by the CNI for these secondary ENIs were deleted by amazon-ec2-net-utils. It turned out that the ENI and its routes were first initialized by the VPC CNI plugin; then amazon-ec2-net-utils init came into action and subsequently removed those routes in the race. A change was made for this in awslabs/amazon-eks-ami#1738, which seems like a direct solution to your report, and an AMI with the fix was released: https://github.com/awslabs/amazon-eks-ami/releases/tag/v20240329. The resolution was to upgrade the AMI. Could you confirm that you are on one of these fixed AMIs?
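For reference, a quick way to check what a node is actually running (assuming an RPM-based AL2023 host, and that /etc/eks/release is present as on EKS-optimized AMIs):

```bash
# Version of the package involved in the earlier race (fixed in awslabs/amazon-eks-ami#1738)
rpm -q amazon-ec2-net-utils

# AMI release metadata, to compare against the v20240329 release
cat /etc/eks/release
```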
@orsenthil, thank you for the answer! Our AMI version is 2023.6.20241212 (see the environment details below), which is already newer than the v20240329 release. I read about the changes in the mentioned PR, and I think the difference might be that, in our case, the plugin doesn't wait for pods to be scheduled; it tries to satisfy the MINIMUM_IP_TARGET=10 condition right away during node init.
Also, I have noticed some limitations being mentioned there; is it possible to get any info about that?
Hi all once again! Could someone take a look at this, please? We were able to find a way to bypass the issue, but we are still not fully certain it will work every time. That fix in the EKS AMI project doesn't seem to be a solution to this particular problem, since the race still reproduces on an AMI that already includes it. P.S. Maybe I should submit a ticket to the AWS support team?
@dracut5, I will take a look at this and try to reproduce it with the information you have provided.
Sorry, forgot to add: we are using self-managed node groups in this case, not EKS managed.
Hi @orsenthil, did you have a chance to try to reproduce the issue? Was it successful? Thanks a lot!
Hello @dracut5, I haven't been able to reproduce this issue yet; after the node came up, I noticed the additional ENI allocated.
Do the above steps look close to how you have been reproducing this issue?
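In case it helps compare notes, the checks on my side looked roughly like this (the interface name ens6 and route table number 2 are assumptions based on your report):

```bash
# Force the CNI to allocate a secondary ENI without waiting for pod pressure
kubectl -n kube-system set env daemonset/aws-node MINIMUM_IP_TARGET=10

# On the node, once the secondary ENI is attached:
ip addr show dev ens6        # the ENI's primary private IP should be present
ip route show table 2        # the CNI programs per-ENI routes in a dedicated table
```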
It looks very similar to what I have done to reproduce the issue, that's true.
I would also suggest adding a few additional steps to match our configuration as closely as possible.
I have a suspicion that the userdata scripts, specifically, might affect the network race condition.
What happened:
Hi!
We have faced weird behavior where some cronjobs in the EKS cluster occasionally fail due to network timeouts. It occurs randomly, and for a long time we were not even able to reproduce the issue, but when we finally caught it we found that a node's secondary network interface, which was created and attached by the Amazon VPC CNI plugin, didn't have an IP address assigned. When a pod gets an IP address from the pool located on this secondary ENI, it is unable to perform any network actions; the network is fully inaccessible.
That was the root cause of our problem, but we decided to continue our investigation and figure out why the IP is absent, and moreover, why not every time.
Attach logs
At the initial phase, the VPC CNI creates and attaches a secondary ENI to the instance to satisfy the MINIMUM_IP_TARGET=10 condition. The plugin also configures the corresponding network interface by adding the ENI's primary IP address.
At the same time, until /etc/systemd/network/80-ec2.network.d/10-eks_primary_eni_only.conf appears, the systemd-networkd service takes the mentioned interface under its control and runs a DHCP client to get a lease.
When the 10-eks_primary_eni_only.conf file is created (it specifies which interfaces systemd-networkd should manage; more precisely, only one of them, the primary ENI), the service decides to stop managing the device.
This action results in the deletion of the assigned IP address, since the lease has to be released.
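For illustration, the drop-in narrows the [Match] of the stock 80-ec2.network so that systemd-networkd only manages the primary ENI. A rough sketch of its shape (the match key and the MAC value here are my assumptions, not the file's verbatim contents):

```bash
# Reconstruct (roughly) what lands in the drop-in directory; the MAC is a placeholder
cat <<'EOF' | sudo tee /etc/systemd/network/80-ec2.network.d/10-eks_primary_eni_only.conf
[Match]
PermanentMACAddress=0e:xx:xx:xx:xx:xx
EOF

# systemd-networkd re-evaluates matches on reload and unmanages everything else
sudo networkctl reload
```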
From then on, our secondary interface, ens6, is managed exclusively by aws-cni, but the plugin knows nothing about the IP address removal done by systemd-networkd. In the end we get a broken, unrouted network interface, and, as said before, when a pod obtains an IP address from the related pool it is simply unable to reach any network endpoints. Plugin restart helps, but it is not a long-term solution.
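("Plugin restart" here just means bouncing the aws-node daemonset, which re-runs node init and re-assigns the ENI's address and routes:)

```bash
# Restart the VPC CNI pods; node init re-adds the missing primary IP and routes
kubectl -n kube-system rollout restart daemonset/aws-node
```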
So, very likely, this is a race condition between the Amazon VPC CNI plugin and the systemd-networkd service: under certain circumstances they end up managing the same network interface at the same time.
It is worth mentioning that, from time to time, the secondary interface manages to keep its assigned primary IP, namely when the DHCP client had not yet obtained a lease before the "Unmanaging interface" event occurred.
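For anyone hitting this, the broken state is easy to spot on the node (ens6 as in our case):

```bash
# An affected secondary ENI shows up without its primary private IP
ip addr show dev ens6

# systemd-networkd reports the interface as unmanaged after the handover
networkctl status ens6

# The DHCP lease and the "Unmanaging interface" event are visible in the journal
journalctl -u systemd-networkd --no-pager | grep -i ens6
```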
What you expected to happen:
Secondary interfaces are managed only by the Amazon VPC CNI plugin and get a proper, permanent network configuration during node init.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
We found a workaround: decreasing MINIMUM_IP_TARGET to 9 so that all IPs fit on the primary interface during instance init; this works for all instance types used in our environments. The VPC CNI can still create, attach, and configure additional ENIs later if needed, but this avoids the race with systemd-networkd at the very start, before the 10-eks_primary_eni_only.conf file exists.
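Since the plugin is installed as an EKS Add-on in our case, the setting goes through the add-on configuration; a sketch (the cluster name is a placeholder):

```bash
# Apply the workaround: keep all required IPs on the primary ENI at instance init
aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name vpc-cni \
  --configuration-values '{"env":{"MINIMUM_IP_TARGET":"9"}}'
```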
Merry Christmas and Happy New Year y'all 🎄
Environment:
- Kubernetes version (kubectl version): v1.31.3-eks
- CNI Version: v1.19.0-eksbuild.1, installed as EKS Add-on
- OS (cat /etc/os-release): Amazon Linux 2023.6.20241212
- Kernel (uname -a): 6.1.119-129.201.amzn2023.aarch64
- systemd version: 252.23-2