Spike - Wazuh kubernetes #907
Comments
I have been investigating the service options for deploying Wazuh without AWS NLBs, thereby simplifying the use of cloud resources.
Conclusions
Currently the Kubernetes repository is focused solely on deployments, so we do not have to split it up.
Regarding simplification of the implementation, we have to take into account that Kubernetes is responsible for deploying all the components that, in a VM deployment, would correspond to both software and hardware, so how much we can simplify depends on the purpose of the deployment itself. If we want a product that is ready to run, we have to include all the necessary resources, such as Deployments, StatefulSets, Services, Secrets and others, which in many cases must carry default parameters and, depending on the Docker images we have, expose parameters that allow the deployment to be configured. Currently we have a basic scheme that differentiates between a deployment on a self-managed cluster and a deployment that customizes many parameters for AWS services. We need to decide whether we will keep these customizations or only provide a base that users can adapt to their own environment, and whether we will require external dependencies for the deployment, such as AWS network load balancers or an Ingress Controller.
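To illustrate the base-versus-customization split described above, a minimal Kustomize overlay could adapt a shared base to a specific environment. The directory names and patch file below are illustrative, not the repository's actual layout:

```yaml
# overlays/local/kustomization.yaml -- hypothetical overlay that reuses the
# shared base and applies environment-specific patches (e.g. a different
# storage class) instead of the AWS-oriented defaults.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: storage-class-patch.yaml
    target:
      kind: StatefulSet
```

A user would then run `kubectl apply -k overlays/local` against whichever overlay matches their environment.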
We currently have a test workflow with several checks, but it needs a few more, such as error checks on the logs. This workflow only uses images published on Docker Hub, so it requires the images to exist in that registry. We must adapt these tests to use images from private registries; this is possible, and the use of such registries was analyzed successfully. As for using specific image tags, the logic still needs to be added inside the test.
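As a sketch of how a private registry could be wired into the tests, the snippet below builds the `.dockerconfigjson` payload that a Kubernetes image pull secret carries. The registry, user and token values are illustrative placeholders, not real credentials:

```shell
# Build the dockerconfigjson payload for an imagePullSecret
# (all values here are illustrative placeholders).
REGISTRY=ghcr.io
USERNAME=ci-bot
TOKEN=example-token
AUTH=$(printf '%s:%s' "$USERNAME" "$TOKEN" | base64)
cat > dockerconfig.json <<EOF
{"auths":{"${REGISTRY}":{"auth":"${AUTH}"}}}
EOF
cat dockerconfig.json
```

In practice the same result is usually achieved with `kubectl create secret docker-registry`, with the workflow's pods referencing that secret through `imagePullSecrets`.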
Regarding the Kubernetes documentation, it currently contains the parameters needed for the deployment itself, but it is missing information about adding integrations and other configurations that require additional software, such as the AWS CLI for AWS integrations, an SMTP client for sending emails, etc. We should plan what kind of information our documentation needs, avoid duplicating data that may already exist in each component's documentation and, where necessary, link to the software and dependencies we require, e.g. kubectl and Kustomize. We do not yet have all the information about the communication protocols Wazuh will require for its connections, such as the internal processes that would let us define the tests more precisely, so this issue is blocked until we receive that information.
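While the full list of internal protocols is still pending, the publicly documented Wazuh defaults (1514/TCP for agent communication, 1515/TCP for agent enrollment, 55000/TCP for the API) could already be captured in a Service. The names and labels below are illustrative:

```yaml
# Hypothetical Service exposing the manager ports that Wazuh's public
# documentation lists as defaults; internal cluster protocols are still TBD.
apiVersion: v1
kind: Service
metadata:
  name: wazuh-manager
spec:
  selector:
    app: wazuh-manager
  ports:
    - name: agents-events
      port: 1514
      protocol: TCP
    - name: agents-registration
      port: 1515
      protocol: TCP
    - name: api
      port: 55000
      protocol: TCP
```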
Update
We need to develop a plan with all the items from the analysis. The plan must be ordered, and each task must have an owner and the teams involved.
Hello @AurimasNav We are analyzing requests of this type.
Hello all, from the outside it seems like there are two tasks bundled in this request.
IMO, the project should first focus on (#1) creating a Helm chart for installing the Wazuh platform, not on which Ingress Controller or load balancer users/customers choose to leverage. Having a Helm chart allows the installation method to be iterated on, so that other components like cert-manager can be added later. To me, having a Helm chart is a maturity component, and it helps align the installation of the platform with GitOps workflows, since Helm charts are versioned. For #2, you can build the best possible architecture and deployment and label it as a "Suggested Architecture" to help users visualize and plan (costs, changes to their environment, etc.) in preparation for implementation. I would also mention that, as a security tool/application, you should focus on including Network Policies in your k8s manifests.
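To sketch the Network Policy suggestion above (all names, labels and namespaces here are hypothetical), a policy restricting ingress to the manager pods to the agent-facing ports could look like:

```yaml
# Hypothetical NetworkPolicy: only allow TCP 1514/1515 into the manager
# pods; everything else inbound is denied once the policy selects them.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: wazuh-manager-ingress
  namespace: wazuh
spec:
  podSelector:
    matchLabels:
      app: wazuh-manager
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 1514
        - protocol: TCP
          port: 1515
```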
Update
According to the spike performed, I have determined a series of general tasks that we must carry out to properly update the wazuh-kubernetes repository and its documentation: Steps:
These tasks can be modified according to the development needs of each component and the changes made to the images being deployed, so these steps may change before or during the development stage.
Hello @Tokynet We are currently making several changes to our product, so our deployments will undergo a large number of changes. As we continue developing these changes, we will analyze incorporating more and better deployment solutions, but initially we want to start with a base that lets our users adapt our product to their deployment needs.
Hello everyone, just giving my 2 cents, as I struggled a lot to set up Wazuh in our environment :D I tried the @morgoved Helm setup, but it depends on a third-party reloader and is not very customizable, similar to the wazuh-kubernetes deployment, which assumes you will install ONLY ONE instance. Here it is 100% on premises (Rancher + k8s + Longhorn), and the 'local-env' does not help much, as its only suggestion is changing the storage class, which can be done before the deployment and fixed in the overlays. It also does not account for any customization or parallel usage, such as:
With all those changes needed, and considering that the deployment currently still targets a single instance, without documentation on how to properly change the cluster name and namespace, IMHO I do not think Helm would be a good approach. For instance, if you set up cert-manager, Longhorn, Vault, Jenkins or whatever, you will have only one deployment in the system namespace; but Wazuh, as of the latest versions, is clearly moving towards multiple clusters and centralized management of many deployments, so pinning it down even further to a single setup could make it harder to maintain and upgrade, as there will always be a lot to change on top of the simpler base deployment. So... what about going straight to an operator? Even if it only covers the first level, the deployment, as the product evolves it can surely use all the features of an operator to auto-manage itself and make it easy to deploy many instances on the same cluster. https://developers.redhat.com/articles/2024/01/29/developers-guide-kubernetes-operators
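On the multi-instance point, one sketch of running a second, independent stack with plain Kustomize (no operator) is a namespaced overlay; the namespace, prefix and directory names below are illustrative:

```yaml
# Hypothetical kustomization for a second, parallel Wazuh stack: the
# namespace isolates the resources, and namePrefix keeps resource names
# from colliding with the primary deployment.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: wazuh-staging
namePrefix: staging-
resources:
  - ../../base
```

This does not replace an operator's lifecycle management, but it shows the minimum the repository could document for changing the name and namespace per instance.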
An issue has been created for task control for the Wazuh Kubernetes MVP v5.0.0:
Description
As part of the DevOps overhaul objective we need to conduct research, analyze alternatives, and design how to implement the following changes.
Implementation restrictions
Plan
TBD