diff --git a/.gitignore b/.gitignore index 59357328bf..9b1cd83fc9 100644 --- a/.gitignore +++ b/.gitignore @@ -17,6 +17,8 @@ deployments/* src/inventory/* hosts .vscode +.ssh +images/ venv docker/* .pytest_cache
diff --git a/Documentation/AWS.md b/Documentation/AWS.md index 81a6ffa479..b29387acc5 100644 --- a/Documentation/AWS.md +++ b/Documentation/AWS.md @@ -97,7 +97,7 @@ The CIDRs for the VPC, WAN interface, LAN interface and private subnet must be s ## 6. Deploy Components After you have set up the environment and configured your components, you can use MetroAE to deploy your components with a single command. - metroae install everything + metroae-container install everything Alternatively, you can deploy individual components or perform individual tasks such as predeploy, deploy and postdeploy. See [DEPLOY.md](DEPLOY.md) for details. ## Questions, Feedback, and Contributing
diff --git a/Documentation/BACKUP_RESTORE.md b/Documentation/BACKUP_RESTORE.md index 906cd2a15c..090d14c0d7 100644 --- a/Documentation/BACKUP_RESTORE.md +++ b/Documentation/BACKUP_RESTORE.md @@ -13,11 +13,11 @@ Ensure that all components you wish to have backed up are specified under your d To perform a backup of all supported components issue: - metroae backup + metroae-container backup To perform a backup of a specific component issue: - metroae backup vsds + metroae-container backup vsds Substitute the component name to be backed up if different than `vsds`. @@ -31,19 +31,19 @@ Ensure that all components you wish to have restored are specified under your de To restore all supported components issue: - metroae restore + metroae-container restore To restore a specific component issue: - metroae restore vsds + metroae-container restore vsds Substitute the component name to be backed up if different than `vsds`. Restore can alternatively be performed with each step separately using: - metroae restore vsds predeploy - metroae restore vsds deploy - metroae restore vsds postdeploy + metroae-container restore vsds predeploy + metroae-container restore vsds deploy + metroae-container restore vsds postdeploy ## TLS Configuration on VSC During Restore
diff --git a/Documentation/CHANGES_IN_V5.md b/Documentation/CHANGES_IN_V5.md new file mode 100644 index 0000000000..2441f6b691 --- /dev/null +++ b/Documentation/CHANGES_IN_V5.md @@ -0,0 +1,44 @@
+# Changes in V5
+We have made the following changes to MetroAE in v5 compared to previous versions of MetroAE. All changes are summarised in this document.
+
+## MetroAE operation changes
+In v4 and before, MetroAE shipped with two modes of operation: the container and the git clone. With the new release, we have made things a bit easier for the users. There is a single mode of operation - a new MetroAE container. This will replace the old MetroAE container, which will no longer be supported.
+
+This new container will be dynamically built on the user's machine. Users will need to get the latest MetroAE repository onto their machine. The only other prerequisite for the user is to have Docker installed on their MetroAE host machine. The changes required to move from v4 to v5 are described below, depending on the type of installation.
+
+### Moving from MetroAE git clone/download to MetroAE v5
+1. Download or git pull the latest MetroAE code
+2. Make sure docker is installed and running. The `docker ps` command can verify this.
+3. Use `./metroae-container` instead of `./metroae`. All other arguments remain the same.
+ e.g.
Instead of this initial command
 `./metroae install vsds -vvv` use `./metroae-container install vsds -vvv`
 Another example for deployments other than defaults
 `./metroae install vsds specialdeployment -vvv` use `./metroae-container install vsds specialdeployment -vvv`
+4. For all image paths, make sure they start with `/metroae`. Here `/metroae` refers to the present working directory for the user.
+ e.g.
+ `nuage_unzipped_files_dir: ./images/20.10.R4` changes to `nuage_unzipped_files_dir: /metroae/images/20.10.R4`
+5. All the specified paths for licenses, unzipped files and backup directories should be inside the MetroAE repository, e.g. you cannot specify `/opt` or `/tmp` on the MetroAE host. If your mount directory for images is outside the MetroAE folder, you can use a bind mount to put it inside the MetroAE directory.
+ e.g.
+ `sudo mount --bind -o ro /mnt/nfs-data //images`
+6. Users do not need to run setup at all; all dependencies will be taken care of automatically by the new container in the background.
+7. For vCenter users only, MetroAE should be cloned or downloaded in a directory where ovftool is present. The entire vmware-ovftool folder must be present. In short, the path to ovftool should be somewhere inside the MetroAE top level folder. You can bind mount the ovftool into the MetroAE repo location.
+ `sudo mount --bind -o ro /usr/lib/vmware-ovftool //ovftool`
+
+### Moving from MetroAE v4 container/download to MetroAE v5
+1. Download or git pull the latest MetroAE code
+2. Destroy the old container using the `./metroae-container destroy` command
+3. Use `./metroae-container` instead of `metroae`. All other arguments remain the same.
+ e.g. Instead of this initial command
+ `metroae install vsds -vvv` use `./metroae-container install vsds -vvv`
+ Another example for deployments other than defaults
+ `metroae install vsds specialdeployment -vvv` use `./metroae-container install vsds specialdeployment -vvv`
+4. For all image paths, make sure they start with `/metroae` instead of `/metroae_data`. Here `/metroae` refers to the present working directory for the user.
+ e.g.
+ `nuage_unzipped_files_dir: /metroae_data/images/20.10.R4` changes to `nuage_unzipped_files_dir: /metroae/images/20.10.R4`
+5. You can create an `images` folder in `nuage-metroae` and move the `/metroae_data` mount folder under `//images`; that way you can replace `/metroae_data` with `/metroae/images` in the deployment files.
+
+## Ansible and Python Changes
+MetroAE is now supported with Ansible version 3.4.0 and higher. Python3 is now required. Do not worry: the dynamically built container takes care of the Python, Ansible and any other dependencies that are needed. This will not affect the user environment as all dependencies will be installed in the MetroAE container.
+
+## MetroAE Config
+MetroAE Config is no longer bundled with MetroAE. Please refer to https://github.com/nuagenetworks/nuage-metroae-config for information on how to use MetroAE Config.
diff --git a/Documentation/CONFIG.md b/Documentation/CONFIG.md index 084a28c22a..96679c61d0 100644 --- a/Documentation/CONFIG.md +++ b/Documentation/CONFIG.md @@ -1,78 +1,3 @@ # MetroAE Config -MetroAE Config is a template-driven VSD configuration tool. It utilizes the VSD API along with a set of common Feature Templates to create configuration in the VSD.
The user will create a yaml or json data file with the necessary configuration parameters for the particular feature and execute simple CLI commands to push the configuration into VSD. - -MetroAE Config is available via the MetroAE container. Once pulled and setup, additional documentation will be available. - -## Configuration Engine Installation - -MetroAE Configuration engine is provided as one of the capabilities of the MetroAE Docker container. The installation of the container is handled via the metroae script. Along with the configuration container we also require some additional data. - -On a host where the configuration engine will be installed the following artifacts will be installed: - -* Docker container for configuration -* Collection of configuration Templates -* VSD API Specification - -### System Requirements - -* Operating System: RHEL/Centos 7.4+ -* Docker: Docker Engine 1.13.1+ -* Network Access: Internal and Public -* Storage: At least 800MB - -#### Operating system - -The primary requirement for the configuration container is Docker Engine, however the installation, container operation and functionality is only tested on RHEL/Centos 7.4. Many other operating systems will support Docker Engine, however support for these is not currently provided and would be considered experimental only. A manual set of installation steps is provided for these cases. - -#### Docker Engine - -The configuration engine is packaged into the MetroAE Docker container. This ensures that all package and library requirements are self-contained with no other host dependencies. To support this, Docker Engine must be installed on the host. The configuration container requirements, however, are quite minimal. Primarily, the Docker container mounts a local path on the host as a volume while ensuring that any templates and user data are maintained only on the host. The user never needs to interact directly with the container. - -#### Network Access - -Currently the configuration container is hosted in an internal Docker registry and the public network as a secondary option, while the Templates and API Spec are hosted only publicly. The install script manages the location of these resources. The user does not need any further information. However, without public network access the installation will fail to complete. - -#### Storage - -The configuration container along with the templates requires 800MB of local disk space. Note that the container itself is ~750MB, thus it is recommended that during install a good network connection is available. - -#### User Privileges - -The user must have elevated privileges on the host system. - -### Installation - -Execute the following operations on the Docker host: - -1. Install Docker Engine - - Various instructions for installing and enabling Docker are available on the Internet. One reliable source of information for Docker CE is hosted here: - - [https://docs.docker.com/engine/install/centos/](https://docs.docker.com/engine/install/centos/) - -2. Add the user to the wheel and docker groups on the Docker host. - -3. Move or Copy the "metroae" script from the nuage-metroae repo to /usr/bin and set permissions correctly to make the script executable. - -4. Switch to the user that will operate "metroae config". - -5. Setup the container using the metroae script. - - We are going to pull the image and setup the metro container in one command below. During the install we will be prompted for a location for the container data directory. 
This is the location where our user data, templates and VSD API specs will be installed and created and a volume mount for the container will be created. However this can occur orthogonally via "pull" and "setup" running at separate times which can be useful depending on available network bandwidth.
-
- `[caso@metroae-host ~]$ metroae container setup`
-
- The MetroAE container needs access to your user data. It gets access by internally mounting a directory from the host. We refer to this as the 'data directory'. The data directory is where you will have deployments, templates, documentation, and other useful files. You will be prompted to provide the path to a local directory on the host that will be mounted inside the container. You will be prompted during setup for this path.
-
- The MetroAE container can be setup in one of the following configurations:
-
- * Config only
- * Deploy only
- * Both config and deploy
-
- During setup, you will be prompted to choose one of these options.
-
-6. Follow additional insturctions found in the documentaion that is copied to the Docker host during setup.
-
-Complete documentation will be made available in the data directory you specify during setup. The complete documentation includes how to configure your environment, usage information for the tool, and more.
+MetroAE Config is no longer bundled with MetroAE. Please refer to https://github.com/nuagenetworks/nuage-metroae-config for information on how to use MetroAE Config.
diff --git a/Documentation/CONTRIBUTING.md b/Documentation/CONTRIBUTING.md deleted file mode 120000 index 44fcc63439..0000000000 --- a/Documentation/CONTRIBUTING.md +++ /dev/null @@ -1 +0,0 @@ -../CONTRIBUTING.md \ No newline at end of file
diff --git a/Documentation/CONTRIBUTING.md b/Documentation/CONTRIBUTING.md new file mode 100644 index 0000000000..f04caf635c --- /dev/null +++ b/Documentation/CONTRIBUTING.md @@ -0,0 +1,81 @@
+# Contributing to MetroAE
+
+MetroAE is built on a community model. The code has been architected to make contribution simple and straightforward. You are welcome to join in the community to help us continuously improve MetroAE.
+
+## We Use [Github Flow](https://guides.github.com/introduction/flow/index.html), So All Code Changes Happen Through Pull Requests
+Pull requests are the best way to propose changes to the codebase (we use [Github Flow](https://guides.github.com/introduction/flow/index.html)). We actively welcome your pull requests:
+
+1. Fork the repo and create your branch from `dev`.
+2. If you've added code that should be tested, add tests.
+3. Make sure your code passes flake8.
+4. Issue that pull request!
+
+## Prerequisites / Requirements
+
+ All contributions must be consistent with the design of the existing workflows.
+
+ All contributions must be submitted as pull requests to the _dev_ branch, reviewed, updated, and merged into the nuage-metroae repo.
+
+ You must have a github.com account and have been added as a collaborator to the nuage-metroae repo.
+
+## Contributing your code
+
+1. Developing your code.
+
+ The manner in which you develop the code contribution depends on the extent of the changes. Are you enhancing an existing playbook or role, or are you adding one or more new roles? Making changes to what already exists is simple. Just make your changes to the files that are already there.
+
+ Adding a new component or feature is a bit more involved.
For example, if you are adding support for the installation of a new component, the following elements would be expected unless otherwise agreed upon by the repo owners: + + 1. A new user-input schema for the component must be created. See the exitsing files in the `schemas` directory. + 2. A new deployment template must be created. See the existing files in the `src/deployment_templates` directory. + 3. Add to the example data. All deployment templates and examples are auto-generated. The data in `src/raw_example_data` is used by the automatic generation to populate the examples properly. Also see the examples that have been auto-generated in the `examples/` directory. + 4. Add your component and associated file references to `src/workflows.yml`. + 5. Add your schema to `src/roles/common/vars/main.yml`. + 6. Execute `src/generate_all_from_schemas.sh` to create all the required files for your component. + 7. Create the proper roles. The following roles are required unless otherwise agreed to by the repo owners: _newcomponent-predeploy_, _newcomponent-deploy_, _newcomponent-postdeploy_, _newcomponent-health_, and _newcomponent-destroy_ should be created under `src/roles/` + 8. Create the proper playbooks to execute the roles: _newcomponent_predeploy.yml_, _newcomponent_deploy.yml_, _newcomponent_postdeploy.yml_, _newcomponent_health.yml_, and _newcomponent_destroy.yml_ should be created under `src/playbooks/with_build` + 9. Test, modify, and retest until your code is working perfectly. + +2. Test all proposed contributions on the appropriate hypervisors in the `metro-fork` directory. If you choose not to provide support for one or more supported hypervisors, you must provide graceful error handling for those types. + +3. All python files modified or submitted must successfully pass a `flake8 --ignore=E501` test. + +4. Add a brief description of your bug fix or enhancement to `RELEASE_NOTES.md`. + +## Any contributions you make will be under the APACHE 2.0 Software License + In short, when you submit code changes, your submissions are understood to be under the same [APACHE License 2.0](https://www.apache.org/licenses/LICENSE-2.0) that covers the project. Feel free to contact the maintainers if that's a concern. + +## Report bugs using Github's [issues](https://github.com/nuagenetworks/nuage-metroae/issues) + We use GitHub issues to track public bugs. + +## Write bug reports with detail, background, and sample code + + **Great Bug Reports** tend to have: + + - A quick summary and/or background + - Steps to reproduce + - Be specific! + - Give sample code if you can. + - What you expected would happen + - What actually happens + - Notes (possibly including why you think this might be happening, or stuff you tried that didn't work) + +## Use a Consistent Coding Style + + * 4 spaces for indentation rather than tabs + * 80 character line length + * TODO + +## License + By contributing, you agree that your contributions will be licensed under its APACHE 2.0 License. + + +## Questions and Feedback + +Ask questions and get support on the [MetroAE site](https://devops.nuagenetworks.net/). +You may also contact us directly. + Outside Nokia: [devops@nuagenetworks.net](mailto:devops@nuagenetworks.net "send email to nuage-metro project"). + Internal Nokia: [nuage-metro-interest@list.nokia.com](mailto:nuage-metro-interest@list.nokia.com "send email to nuage-metro project"). 
+ +## References + This document was adapted from the open-source contribution guidelines for [Facebook's Draft](https://github.com/facebook/draft-js/blob/a9316a723f9e918afde44dea68b5f9f39b7d9b00/CONTRIBUTING.md) diff --git a/Documentation/CUSTOMIZE.md b/Documentation/CUSTOMIZE.md index 8822f0e749..95c2f57c1a 100644 --- a/Documentation/CUSTOMIZE.md +++ b/Documentation/CUSTOMIZE.md @@ -6,23 +6,26 @@ If you have not already set up your MetroAE Host environment, see [SETUP.md](SET ## What is a Deployment? -Deployments are component configuration sets. You can have one or more deployments in your setup. When you are working with the MetroAE container, you can find the deployment files under the data directory you specified during `metroae container setup`. For example, if you specified `/tmp` as your data directory path, `metroae container setup` created `/tmp/metroae_data` and copied the default deployment to `/tmp/metroae_data/deployments`. When you are working with MetroAE from a workspace you created using `git clone`, deployments are stored in the workspace directory `nuage-metroae/deployments//`. In both cases, the files within each deployment directory describe all of the components you want to install or upgrade. +Deployments are component configuration sets. You can have one or more deployments in your setup. +The files within each deployment directory describe all of the components you want to install or upgrade. If you issue: - ./metroae install everything + ./metroae-container install everything The files under nuage-metroae/deployments/default will be used to do an install. If you issue: - ./metroae install everything mydeployment + ./metroae-container install everything mydeployment The files under `nuage-metroae/deployments/mydeployment` will be used to do an install. This allows for different sets of component definitions for various projects. +The deployment files and the image files must be located within the git clone folder. The docker container will mount the git clone folder inside the container and will not have access to files outside of that location. All file paths must be defined as relative to the git clone folder and never using absolute paths + You can also do: ``` -./metroae install everything deployment_spreadsheet_name.xlsx +./metroae-container install everything deployment_spreadsheet_name.xlsx ``` to run the install everything playbook using the deployment information present in the specified Excel spreadsheet. More details about Excel deployments can be found in the `Customize Your Own Deployment` section below. @@ -55,9 +58,9 @@ You can also use the MetroAE spreadsheet to create your deployment. You can find ``` convert_csv_to_deployment.py deployment_spreadsheet_name.csv your_deployment_name ``` -or you can let `metroae` handle the conversion for you by specifying the name of the CSV file instead of the name of your deployment:: +or you can let `metroae-container` handle the conversion for you by specifying the name of the CSV file instead of the name of your deployment:: ``` -metroae deployment_spreadsheet_name.csv +metroae-container deployment_spreadsheet_name.csv ``` This will create or update a deployment with the same name as the CSV file - without the extension. 
@@ -65,9 +68,9 @@ MetroAE also supports deployment files filled out in an Excel (.xlsx) spreadsheet
```
convert_excel_to_deployment.py deployment_spreadsheet_name.xlsx your_deployment_name
```
-or you can use `metroae` do the conversion for you by running build, like this:
+or you can use `metroae-container` to do the conversion for you by running build, like this:
```
-metroae build deployment_spreadsheet_name.xlsx
+metroae-container build deployment_spreadsheet_name.xlsx
```
For Excel deployments, all playbooks (aside from `nuage_unzip` and `reset_build`) invoke the build step and can replace build in the command above.
@@ -134,11 +137,30 @@ When installing or upgrading an active-standby, geo-redundant cluster, all 6 VSD
`vstats.yml` contains the definition of the VSTATs (VSD Statistics) to be operated on in this deployment. This file should be present in your deployment only if you are specifying VSTATs. If not provided, no VSTATs will be operated on. This file is of yaml list type. If it contains exactly 3 VSTAT definitions, a cluster installation or upgrade will be executed. Any other number of VSTAT definitions will result in 1 or more stand-alone VSTATs being installed or upgraded.
+## VSD RTT Performance Testing
+
+You can use MetroAE to verify that your VSD setup has sufficient RTT performance. By default, the RTT performance test will run at the beginning of the VSD deploy step, prior to installing the VSD software. The parameters that you can use to control the operation of the test are available in 'common.yml':
+
+* `vsd_run_cluster_rtt_test` When true, run RTT tests between VSDs in a cluster or standby/active cluster, else skip the test
+* `vsd_ignore_errors_rtt_test` When true, continue MetroAE execution upon error and do not validate that the RTT between VSDs in a cluster is less than the max RTT, else stop MetroAE execution upon error
+* `vsd_max_cluster_rtt_msec` Maximum RTT in milliseconds between VSDs in a cluster
+* `vsd_max_active_standby_rtt_msec` Maximum RTT in milliseconds between Active and Standby VSDs
+
+In addition to the automatic execution that takes place in the VSD deploy step, you can run the VSD RTT test at any time using `metroae-container vsd test rtt`.
+
 ## VSD Disk Performance Testing
-You can use MetroAE to verify that your VSD setup has sufficient disk performance (IOPS). By default, the disk performance test will run at the beginning of the VSD deploy step, prior to installing the VSD software. The parameters that you can use to control the operation of the test are available in 'common.yml'. You can skip the test, specify the total size of all the files used in the test, and modify the minimum threshold requirement in IOPS for the test. Note that to minimize the effects of file system caching, the total file size must exceed the total RAM on the VSD. If MetroAE finds that the test is enabled and the disk performance is below the threshold, an error will occur and installation will stop. The default values that are provided for the test are recommended for best VSD performance in most cases. Your specific situation may require different values or to skip the test entirely.
+You can use MetroAE to verify that your VSD setup has sufficient disk performance (IOPS). By default, the disk performance test will run at the beginning of the VSD deploy step, prior to installing the VSD software.
The parameters that you can use to control the operation of the test are available in 'common.yml': + +* `vsd_run_disk_performance_test` Run the VSD disk performance test when true, else skip the test +* `vsd_disk_performance_test_total_file_size` Sets the total size of created files for VSD disk performance test. For a valid measurement, the total file size must be larger than VSD RAM to minimize the effects of caching. +* `vsd_disk_performance_test_minimum_threshold` Sets the minimum value for VSD disk performance test in IOPS +* `vsd_disk_performance_test_max_time` Sets the duration of the VSD disk performance test in seconds +* `vsd_ignore_disk_performance_test_errors` When true, continue MetroAE execution upon error and ignore the results of the VSD disk performance test, else stop MetroAE execution upon error + +You can skip the test, specify the total size of all the files used in the test, and modify the minimum threshold requirement in IOPS for the test. Note that to minimize the effects of file system caching, the total file size must exceed the total RAM on the VSD. If MetroAE finds that the test is enabled and the disk performance is below the threshold, an error will occur and installation will stop. The default values that are provided for the test are recommended for best VSD performance in most cases. Your specific situation may require different values or to skip the test entirely. -In addition to the automatic execution that takes place in the VSD deploy step, you can run the VSD disk performance test at any time using `metroae vsd test disk`. +In addition to the automatic execution that takes place in the VSD deploy step, you can run the VSD disk performance test at any time using `metroae-container vsd test disk`. ## Enabling post-installation security features @@ -197,7 +219,7 @@ When you are contributing code, or pulling new versions of MetroAE quite often, A sample of the deployment configuration files are provided in the deployments/default/ directory and also in [examples/](../examples/). If these are overwritten or deleted or if a "no frills" version of the files with only the minimum required parameters are desired, they can be generated with the following command: ``` -metroae tools generate example --schema [--no-comments] +metroae-container tools generate example --schema [--no-comments] ``` This will print an example of the deployment file specified by under the [schemas/](/schemas/) directory to the screen. The optional `--no-comments` will print the minimum required parameters (with no documentation). @@ -205,7 +227,7 @@ This will print an example of the deployment file specified by Example: ``` -metroae tools generate example --schema vsds > deployments/new/vsds.yml +metroae-container tools generate example --schema vsds > deployments/new/vsds.yml ``` Creates an example vsds configuration file under the "new" deployment. diff --git a/Documentation/DEPLOY.md b/Documentation/DEPLOY.md index 0b06774491..3cecbcccb3 100644 --- a/Documentation/DEPLOY.md +++ b/Documentation/DEPLOY.md @@ -17,7 +17,6 @@ Before deploying any components, you must have previously [set up your Nuage Met Make sure you have unzipped copies of all the Nuage Networks files you are going to need for installation or upgrade. These are generally distributed as `*.tar.gz` files that are downloaded by you from Nokia OLCS/ALED. 
There are a few ways you can use to unzip: * If you are running MetroAE via a clone of the nuage-metroae repo, you can unzip these files by using the nuage-unzip shell script `nuage-unzip.sh` which will place the files in subdirectories under the path specified for the `nuage_unzipped_files_dir` variable in `common.yml`. -* If you are running MetroAE via the MetroAE container, you can unzip these files using the metroae command. During the setup, you were promoted for the location of an data directory on the host. This data directory is mounted in the container as `/data`. Therefore, for using the unzip action, you must 1) copy your tar.gz files to a directory under the directory you specified at setup time and 2) you must specify a container-relative path on the unzip command line. For example, if you specified the data directory as `/tmp`, setup created the directory `/data/metroae_data` on your host and mounted that same directory as `/data` in the container. Assuming you copied your tar.gz files to `/tmp/metroae_data/6.0.1` on the docker host, your unzip command line would be as follows: `metroae tools unzip images /data/6.0.1/ /data/6.0.1`. * You can also unzip the files manually and copy them to their proper locations by hand. For details of this process, including the subdirectory layout that MetroAE expects, see [SETUP.md](SETUP.md). @@ -25,20 +24,20 @@ Make sure you have unzipped copies of all the Nuage Networks files you are going MetroAE can perform a workflow using the command-line tool as follows: - metroae [deployment] [options] + metroae-container [deployment] [options] * `workflow`: Name of the workflow to perform, e.g. 'install' or 'upgrade'. Supported workflows can be listed with --list option. * `component`: Name of the component to apply the workflow to, e.g. 'vsds', 'vscs', 'everything', etc. * `deployment`: Name of the deployment directory containing configuration files. See [CUSTOMIZE.md](CUSTOMIZE.md) -* `options`: Other options for the tool. These can be shown using --help. Also, any options not directed to the metroae tool are passed to Ansible. +* `options`: Other options for the tool. These can be shown using --help. Also, any options not directed to the metroae-container tool are passed to Ansible. The following are some examples: - metroae install everything + metroae-container install everything Installs all components described in deployments/default/. - metroae destroy vsds east_network + metroae-container destroy vsds east_network Takes down only the VSD components described by deployments/east_network/vsds.yml. Additional output will be displayed with 3 levels of verbosity. @@ -47,9 +46,9 @@ Takes down only the VSD components described by deployments/east_network/vsds.ym MetroAE workflows operate on components as you have defined them in your deployment. If you run a workflow for a component not specified, the workflow skips all tasks associated with that component and runs to completion without error. Thus, if you run the `install everything` workflow when only VRS configuration is present, the workflow deploys VRS successfully while ignoring the tasks for the other components not specified. Deploy all specified components with one command as follows: ``` -metroae install everything +metroae-container install everything ``` -Note: `metroae` is a shell script that executes `ansible-playbook` with the proper includes and command line switches. Use `metroae` (instead of `ansible-playbook`) when running any of the workflows provided herein. 
+Note: `metroae-container` is a shell script that executes `ansible-playbook` with the proper includes and command line switches. Use `metroae-container` (instead of `ansible-playbook`) when running any of the workflows provided herein. ## Deploy Individual Modules @@ -57,20 +56,20 @@ MetroAE offers modular execution models in case you don't want to deploy all com Module | Command | Description ---|---|--- -VCS | `metroae install vsds` | Installs VSD components -VNS | `metroae install vscs` | Installs VSC components +VCS | `metroae-container install vsds` | Installs VSD components +VNS | `metroae-container install vscs` | Installs VSC components ## Install a Particular Role or Host MetroAE has a complete library of [workflows](/src/playbooks "link to workflows directory"), which are directly linked to each individual role. You can limit your deployment to a particular role or component, or you can skip steps you are confident need not be repeated. For example, to deploy only the VSD VM-images and get them ready for VSD software installation, run: ``` -metroae install vsds predeploy +metroae-container install vsds predeploy ``` To limit your deployment to a particular host, just add `--limit` parameter: ``` - metroae install vsds predeploy --limit "vsd1.example.com" + metroae-container install vsds predeploy --limit "vsd1.example.com" ``` VSD predeploy can take a long time. If you are **vCenter user** you may want to monitor progress via the vCenter console. @@ -83,33 +82,33 @@ When installing or upgrading in a KVM environment, MetroAE copies the QCOW2 imag When QCOW2 files are pre-positioned, you must add a command-line variable, 'skip_copy_images', to indicate that copying QCOW2 files should be skipped. Otherwise, the QCOW2 files will be copied again. An extra-vars 'skip_copy_images' needs to be passed on the command line during the deployment phase to skip copying of the image files again. For example, to pre-position the QCOW2 images, run: ``` -metroae tools copy qcow +metroae-container tools copy qcow ``` Then, to skip the image copy during the install: ``` -metroae install everything --extra-vars skip_copy_images=True +metroae-container install everything --extra-vars skip_copy_images=True ``` ## Deploy the Standby Clusters MetroAE can be used to bring up the Standby VSD and VSTAT(ES) cluster in situations where the active has already been deployed. This can be done using the following commands. For VSD Standby deploy ``` -metroae install vsds standby predeploy -metroae install vsds standby deploy +metroae-container install vsds standby predeploy +metroae-container install vsds standby deploy ``` For Standby VSTATs(ES) ``` -metroae install vstats standby predeploy -metroae install vstats standby deploy +metroae-container install vstats standby predeploy +metroae-container install vstats standby deploy ``` ## Setup a Health Monitoring Agent A health monitoring agent can be setup on compatible components during the deploy step. Currently this support includes the [Zabbix](zabbix.com) agent. An optional parameter `health_monitoring_agent` can be specified on each component in the deployment files to enable setup. During each component deploy step when enabled, the agent will be downloaded, installed and configured to be operational. 
The agent can be installed separately, outside of the deploy role, using the following command: ``` -metroae health monitoring setup +metroae-container health monitoring setup ``` ## Debugging diff --git a/Documentation/DESTROY.md b/Documentation/DESTROY.md index 7b5d570fba..d3b708a851 100644 --- a/Documentation/DESTROY.md +++ b/Documentation/DESTROY.md @@ -21,9 +21,9 @@ You have the option of removing the entire deployment or only specified individu Remove the entire existing deployment with one command as follows: ``` -metroae destroy everything +metroae-container destroy everything ``` -Note: you may alternate between `metroae install everything` and `metroae destroy everything` as needed. +Note: you may alternate between `metroae-container install everything` and `metroae-container destroy everything` as needed. ### Remove Individual Components @@ -32,11 +32,11 @@ Alternatively, you can remove individual components (VSD, VSC, VRS, etc) as need #### Example Sequence for VSC: Configure components under your deployment - Run `metroae install everything` to deploy VSD, VSC, VRS, etc. + Run `metroae-container install everything` to deploy VSD, VSC, VRS, etc. Discover that something needs to be changed in the VSCs - Run **`metroae destroy vscs`** to tear down just the VSCs + Run **`metroae-container destroy vscs`** to tear down just the VSCs Edit `vscs.yml` in your deployment to fix the problem - Run `metroae install vsc predeploy`, `metroae install vscs deploy`, and `metroae install vscs postdeploy` to get the VSCs up and running again. + Run `metroae-container install vsc predeploy`, `metroae-container install vscs deploy`, and `metroae-container install vscs postdeploy` to get the VSCs up and running again. ## Questions, Feedback, and Contributing diff --git a/Documentation/DOCKER.md b/Documentation/DOCKER.md deleted file mode 100644 index 18b610eb70..0000000000 --- a/Documentation/DOCKER.md +++ /dev/null @@ -1,112 +0,0 @@ -# Deploying Components with the MetroAE Docker Container - -This file describes many of the details of the commands used for managing MetroAE distributed via container. For information on how to setup Docker and the MetroAE Docker container, see [SETUP.md](SETUP.md). - -## The metroae Command - -The metroae command is at the heart of interacting with the MetroAE container. It is used both to manage the container and to execute MetroAE inside the container. You can access all of the command options via `metroae [action] [deployment] [options]`. - -### metroae Container Management Command Options - -The following command options are supported by the metroae command: - -**help** -displays the help text for the command - -**pull** - pulls the MetroAE container from the registry. By default, the latest container is pulled. You can also specify a valid container tag to pull another version, e.g. `metroae container pull 1.0.0`. - -**download** - pulls the MetroAE container in tar format. This allows you to copy the tar file to systems behind firewalls and convert to Docker images by using `docker load` command. - -**setup** - setup completes the steps necessary to get the MetroAE container up and running. It prompts you for the full paths to a data directory that the container uses on your local disk. The setup action will create the subdirectory `metroae_data` on disk, then mount it as `/metroae_data/` inside the container. When using the MetroAE container, you must provide paths relative to this path as seen from inside the container. 
For example, if you tell setup to use `/tmp` for the data directory, setup will create `/tmp/metroae_data` on your host. The setup will also mount this directory inside the container as `/metroae_data/`. If you copy your tar.gz files to `/tmp/metroae_data/images/6.0.1/` on the host, the container sees this as `/data/images/6.0.1/`. So, when using unzip or setting `nuage_unzipped_files_dir` in common.yml, you would specify `/metroae_data/images/6.0.1/` as your path. - - -Running setup multiple times replaces the existing container, but it does not remove the data on your local disk. - -**metroae container start** - starts the container using the settings from setup - -**metroae container stop** - stops the running container - -**metroae container status** - displays the container status along with container settings - -**metroae container destroy** - stops and removes the metroae container along with the container image. Use this command when you want to replace an existing container with a new one, no need to run setup again afterwards. - -**metroae container update** - upgrades the container to the latest available release image. You don't need to run setup again after an upgrade. - -**metroae tools unzip images** - unzips Nuage Networks tar.gz files into the images directory specified during the setup operation. Use of this command requires that the tar.gz files be placed in either the data or images directory that you specified during setup. -See the current values of the data and images directories by executing the status command. - -**metroae tools generate example** - generates an example for the specified schema and puts it in the examples directory under the data directory that you specified during setup. You can get the current values of the data and images directories by executing the status command. - -**metroae tools encrypt credentials** - sets the encryption credentials for encrypting secure user data, e.g. passwords - -**metroae container ssh copyid** - copies the container's public key into the ssh authorized_keys file on the specified server. This key is required for passwordless ssh access to all target servers. Usage: `metroae container ssh copyid user@host_or_ip` - -**metroae --list** - lists the workflows that are supported by MetroAE - -**metroae --ansible-help** - shows the help options for the underlying Ansible engine - -### metroae Workflow Command Options - -The MetroAE container is designed so that you run MetroAE workflows, e.g. install, from the command line using the metroae command. The format of the command line is: - - `metroae [operation] [deployment] [options]` - -## Troubleshooting - -### SSH connection problems - -If MetroAE is unable to authenticate with your target server, chances are that passwordless ssh has not been configured properly. The public key of the container must be copied to the authorized_keys file on the target server. Use the `copy-ssh-id` command option, e.g. `metroae container copy-ssh-id user@host_name_or_ip`. - -### Where is my data directory? - -You can find out about the current state of your container, including the path to the metroae_data directory, by executing the container status command, `metroae container status`. - -### General errors - -metroae.log and ansible.log are located in the data directory you specified during setup. 
- -## Manually use the container without the script (Nokia internal support only) - -### Pull the container - - docker pull registry.mv.nuagenetworks.net:5001/metroae:1.0 - -### Run the container - -docker run -e USER_NAME='user name for the container' -e GROUP_NAME='group name for the container' -d $networkArgs -v 'path to the data mount point':/data:Z -v 'path to images mount point':/images:Z --name metroae registry.mv.nuagenetworks.net:5001/metroae:1.0 - -#### For Linux host - -``` -networkArgs is '--network host' -``` - -#### For Mac host - -``` -networkArgs is '-p "UI Port":5001' -``` - -### Execute MetroAE Commands - - docker exec 'running container id' /source/nuage-metroae/metroae playbook deployment - -### Stop the container - - docker stop 'running container id' - -### Remove the container - - docker rm 'container id' - -### Remove MetroAE image - - docker rmi 'image id' - -## Questions, Feedback, and Contributing - -Get support via the [forums](https://devops.nuagenetworks.net/forums/) on the [MetroAE site](https://devops.nuagenetworks.net/). -Ask questions and contact us directly at [devops@nuagenetworks.net](mailto:devops@nuagenetworks.net "send email to nuage-metro project"). - -Report bugs you find and suggest new features and enhancements via the [GitHub Issues](https://github.com/nuagenetworks/nuage-metroae/issues "nuage-metroae issues") feature. - -You may also [contribute](../CONTRIBUTING.md) to MetroAE by submitting your own code to the project. diff --git a/Documentation/GETTING_STARTED.md b/Documentation/GETTING_STARTED.md index 84dfacc95b..886e31601f 100644 --- a/Documentation/GETTING_STARTED.md +++ b/Documentation/GETTING_STARTED.md @@ -17,14 +17,10 @@ * Minimum of 2 CPUs, 4 GBs memory, and 40 GB disk * A read/write NFS mount for accessing the Nuage software images -2.1 Clone the master branch of the repo onto the **MetroAE Host**. Read [Setup](SETUP.md) for details. +2.1 Clone the master branch of the repo onto the **MetroAE Host**. Read [Setup](SETUP.md) for details. NOTE: Please clone the repo in a location that can be read by libvirt/qemu. ``` git clone https://github.com/nuagenetworks/nuage-metroae.git ``` -2.2 Install the required packages. Run as root or sudo. Read [Setup](SETUP.md) for details. -``` -$ sudo ./setup.sh -``` ## 3. Enable SSH Access @@ -52,13 +48,11 @@ See [Setup](SETUP.md) for more details about enabling SSH Access. Download and install the [ovftool](https://www.vmware.com/support/developer/ovf/) from VMware. MetroAE uses ovftool for OVA operations. Note that MetroAE is tested using ovftool version 4.3. ovftool version 4.3 is required for proper operation. -Note that running the metroae Docker container for VMware installations and upgrades requires special handling of the location of the ovftool command. Please see [SETUP.md](SETUP.md) for details. - ## 5. Prepare your environment ### 5.1 Unzip Nuage files -Execute: `metroae tools unzip images ` +Execute: `metroae-container tools unzip images ` See [SETUP.md](SETUP.md) for details.       
Be sure that Nuage packages (tar.gz) are available on localhost (MetroAE host), diff --git a/Documentation/LICENSE.md b/Documentation/LICENSE.md deleted file mode 120000 index ea5b60640b..0000000000 --- a/Documentation/LICENSE.md +++ /dev/null @@ -1 +0,0 @@ -../LICENSE \ No newline at end of file diff --git a/Documentation/LICENSE.md b/Documentation/LICENSE.md new file mode 100644 index 0000000000..84bafab4cb --- /dev/null +++ b/Documentation/LICENSE.md @@ -0,0 +1,191 @@ + + Apache License + Version 2.0, January 2004 + https://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." 
+ + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. 
+ + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + Copyright 2020 Nokia + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. 
+ You may obtain a copy of the License at + + https://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/Documentation/PLUGINS.md b/Documentation/PLUGINS.md index 6ff18739c8..ae74224526 100644 --- a/Documentation/PLUGINS.md +++ b/Documentation/PLUGINS.md @@ -8,7 +8,7 @@ The overview of the process is: 1. Create the proper files and subdirectories (see below) in a directory of your choosing 1. Package the plugin using the `package-plugin.sh` script -1. Install the plugin using the `metroae plugin install` command +1. Install the plugin using the `metroae-container plugin install` command 1. Add the proper data files, if required, to your deployment directory 1. Execute your role @@ -32,7 +32,7 @@ This will create a tarball of the plugin ready for distribution to users. Users who wish to install the plugin can issue: - ./metroae plugin install + ./metroae-container plugin install This should be issued from the nuage-metro repo or container. Note that a tarball or unzipped directory can both be installed. @@ -40,7 +40,7 @@ This should be issued from the nuage-metro repo or container. Note that a tarba To uninstall a plugin, the user can issue: - ./metroae plugin uninstall + ./metroae-container plugin uninstall This should be issued from the nuage-metro repo or container. Uninstall is by plugin name as all installed files were recorded and will be rolled back. diff --git a/Documentation/RELEASE_NOTES.md b/Documentation/RELEASE_NOTES.md index 0ebd854be1..09ea46f953 100644 --- a/Documentation/RELEASE_NOTES.md +++ b/Documentation/RELEASE_NOTES.md @@ -2,20 +2,49 @@ ## Release info -* MetroAE Version 4.6.2 -* Nuage Release Alignment 20.10.R6.3 -* Date of Release 13-December-2021 +* MetroAE Version 5.0.0 +* Nuage Release Alignment 20.10 +* Date of Release 4-February-2022 ## Release Contents ### Feature Enhancements -* None +* Predeploy NSGV without vsd license file (METROAE-497) +* Added support for hardening Elasticsearch nodes (METROAE-486) +* Allow custom configuration of RAM, CPU and Memory for VSD and VSTAT (METROAE-477) +* Run VSD Database pre-upgrade checks (METROAE-428) +* Support NFS server config using MetroAE(METROAE-557) +* Webfilter should optionally use HTTP Proxy (METROAE-493) +* Add support for encrypting credentials in Excel spreadsheet (METROAE-552) +* Add Ansible 3.4.0 support (METROAE-344) +* Allow installing/renewal of VSTAT(ES) licenses during install, upgrade and standalone (METROAE-591) ### Resolved Issues -* Allow installing/renewal of VSTAT(ES) licenses during install, upgrade and standalone (METROAE-591) +* Check for ejabberd license expiry (METROAE-505) +* Added support install of SD-WAN portal without the SMTP address(METROAE-492) +* Fixed yum lock timeout issue when installing packages in KVM (METROAE-507) +* Replacing known_hosts module mgmt_ip to hostname (METROAE-481) +* Remove unnecessary debug lines from vsc-health (MetroAE-541) +* Fixing vsd-destroy to destroy old and new VMs (METROAE-504) +* Fix MetroAE errors while deploying using SSH Proxy (MetroAE-574) +* Fixed Check passwordless ssh from metro host to hypervisors and components ( METROAE-520) +* Added ES servers to NUH GUI ( METROAE-491) +* Fix NUH install on 20.10.R5 (METROAE-490) +* Fixing message 
issue for docker pull (METROAE-527) +* Install NUH optionally without DNS entry (METROAE-375) +* Add procedure to copy NUH certificates if installed before VSD (METROAE-559) +* Create NUH users and certs for NSG bootstrapping (METROAE-487) +* Enhance check to accept both access_port_name and access_ports variables being undefined (METROAE-585) +* VSTAT VSS UI should be set for all VSTATS (METROAE-580) +* Remove old MetroAE container support (METROAE-564) * Fix MetroAE VSD in-place upgrades for custom credentials (METROAE-586) +* Update documentation for the Ansible version upgrade and the new container (METROAE-588) +* Document where credentials are used (METROAE-532) +* Fix MetroAE in-place upgrade from 20.10.R6.1 to 20.10.R6.3 (METROAE-590) +* On applying branding to the VSD, jboss restart should happen serially (METROAE-597) +* Clean up temporary ISO file on VSD after mounting (METROAE-598) ## Test Matrix diff --git a/Documentation/SD-WAN_PORTAL.md b/Documentation/SD-WAN_PORTAL.md index b13a0d17c2..cf8d68b89d 100644 --- a/Documentation/SD-WAN_PORTAL.md +++ b/Documentation/SD-WAN_PORTAL.md @@ -10,18 +10,18 @@ Supported deployment: Currently the following workflows are supported: -* metroae install portal - Deploy Portal VM(s) on KVM hypervisor and install the application -* metroae install portal predeploy - Prepares the HV and deploys Portal VMs -* metroae install portal deploy - Installs Docker-CE, SD-WAN Portal on the already prepared VMs -* metroae install portal postdeploy - To be updated. Includes a restart and license update task -* metroae install portal license - Copies the license file to the Portal VM(s) and restarts the Portal(s) -* metroae destroy portal - Destroys Portal VMs and cleans up the files from hypervisor(s) -* metroae upgrade portal - Upgrade Portal VM(s) on KVM hypervisor -* metroae upgrade portal preupgrade health - Performs prerequisite and health checks of a Portal VM or cluster before initiating an upgrade -* metroae upgrade portal shutdown - Performs database backup if necessary, Portal VM snapshot and stops all services -* metroae upgrade portal deploy - Performs an install of the new SD-WAN Portal version -* metroae upgrade portal postdeploy - Performs post-upgrade checks to verify Portal VM health, cluster status, and verify successful upgrade -* metroae rollback portal - In the event of an unsuccessful upgrade, Portal(s) can be rolled back to the previously installed software version. +* metroae-container install portal - Deploy Portal VM(s) on KVM hypervisor and install the application +* metroae-container install portal predeploy - Prepares the HV and deploys Portal VMs +* metroae-container install portal deploy - Installs Docker-CE, SD-WAN Portal on the already prepared VMs +* metroae-container install portal postdeploy - To be updated.
Includes a restart and license update task +* metroae-container install portal license - Copies the license file to the Portal VM(s) and restarts the Portal(s) +* metroae-container destroy portal - Destroys Portal VMs and cleans up the files from hypervisor(s) +* metroae-container upgrade portal - Upgrade Portal VM(s) on KVM hypervisor +* metroae-container upgrade portal preupgrade health - Performs prerequisite and health checks of a Portal VM or cluster before initiating an upgrade +* metroae-container upgrade portal shutdown - Performs database backup if necessary, Portal VM snapshot and stops all services +* metroae-container upgrade portal deploy - Performs an install of the new SD-WAN Portal version +* metroae-container upgrade portal postdeploy - Performs post-upgrade checks to verify Portal VM health, cluster status, and verify successful upgrade +* metroae-container rollback portal - In the event of an unsuccessful upgrade, Portal(s) can be rolled back to the previously installed software version. Example deployment files are available under examples/kvm_portal_install diff --git a/Documentation/SETUP.md b/Documentation/SETUP.md index de1b2f63dc..15cdc0db70 100644 --- a/Documentation/SETUP.md +++ b/Documentation/SETUP.md @@ -1,125 +1,17 @@ # Setting Up the Environment -You can set up the MetroAE host environment either [with a Docker container](#method-one-set-up-host-environment-using-docker-container) or [with a GitHub clone](#method-two-set-up-host-environment-using-github-clone). +You can set up the MetroAE host environment [with a GitHub clone](#set-up-host-environment-using-github-clone). +Note that docker is required on the host in order to run MetroAE. All file paths in configuration files must be relative to the git clone folder. ## Environment -### Method One: Set up Host Environment Using Docker Container - -Using a Docker container results in a similar setup as a GitHub clone, plus it delivers the following features: - -* All prerequisites are satisfied by the container. Your only requirement for your server is that it run Docker engine. -* Your data is located in the file system of the host where Docker is running. You don't need to get inside the container. -* A future release will include Day 0 Configuration capabilities. +### Set up Host Environment Using GitHub Clone #### System (and Other) Requirements * Operating System: Enterprise Linux 7 (EL7) CentOS 7.4 or greater or RHEL 7.4 or greater -* Locally available image files for VCS or VNS deployments -* Docker Engine 1.13.1 or greater installed and running -* Container operations may need to be performed with elevated privileges (*root*, *sudo*) - -#### Steps - -##### 1. Get a copy of the metroae script from the github repo - -``` -https://github.com/nuagenetworks/nuage-metroae/blob/master/metroae -``` -You can also copy the script out of a cloned workspace on your local machine. You can copy the metroae script to any directory of your choosing, e.g. `/usr/local/bin`. Note that you may need to set the script to be executeable via `chmod +x`. - -##### 2. Pull the latest Docker container using the following command: - -``` -metroae container pull -``` - -##### 3. Setup and start the Docker container using the following command: - -``` -metroae container setup [path to data directory] -``` - -You can optionally specify the data directory path. If you don't specify the data directory on the command line, you will be prompted to enter one during setup. This path is required for container operation. 
The data directory is the place where docs, examples, Nuage images, and your deployment files will be kept and edited. Note that setup will create a subdirectory beneath the data directory you specify, `metroae_data`. For example, if you specify `/tmp` for your data directory path during setup, setup will create `/tmp/metroae_data` for you. Setup will copy docs, logs, and deployment files to `/tmp/metroae_data`. Inside the container itself, setup will mount `/tmp/metroae_data` as `/metroae_data/`. Therefore, when you specify path names for metroae when using the container, you should always specify the container-relative path. For example, if you copy your tar.gz files to `/tmp/metroae_data/6.0.1` on the host, this will appear as `/metroae_data/6.0.1` inside the container. When you use the unzip-files action on the container, then, you would specify a source path as `/metroae_data/6.0.1`. When you complete the nuage_unzipped_files_dir variable in common.yml, you would also specify `/metroae_data/6.0.1`. - -Note that you can run setup multiple times and setup will not destroy or modify the data you have on disk. If you specify the same data and imafges directories that you had specified on earlier runs, metroae will pick up the existing data. Thus you can update the container as often as you like and your deployments will be preserved. - -Note that you can stop and start the MetroAE container at any time using these commands: - -``` -metroae container stop -metroae container start -``` - -##### 4. **For KVM Only**, copy the container ssh keys to your KVM target servers (aka hypervisors) using the following command: - -``` -metroae container ssh copyid [target_server_username]@[target_server] -``` - -This command copies the container's public key into the ssh authorized_keys file on the specified target server. This key is required for passwordless ssh access from the container to the target servers. The command must be run once for every KVM target server. This step should be skipped if your target-server type is anything but KVM, e.g. vCenter or OpenStack. - -##### 5. **For ESXi / vCenter Only**, install ovftool and copy to metroae_data directory - -When running the MetroAE Docker container, the container will need to have access to the ovftool command installed on the Docker host. The following steps are suggested: - -###### 5.1. Install ovftool - -Download and install the [ovftool](https://www.vmware.com/support/developer/ovf/) from VMware. - -###### 5.2. Copy ovftool installation to metroae_data directory - -The ovftool command and supporting files are usually installed in the /usr/lib/vmware-ovftool on the host. In order to the metroae container to be able to access these files, you must copy the entire folder to the metroae_data directory on the host. For example, if you have configured the container to use `/tmp/metroae_data` on your host, you would copy `/usr/lib/vmware-ovftool` to `/tmp/metroae_data/vmware-ovftool`. Note: Docker does not support following symlinks. You must copy the files as instructed. - -###### 5.3. Configure the ovftool path in your deployment - -The path to the ovftool is configured in your deployment in the common.yml file. Uncomment and set the variable 'vcenter_ovftool' to the container-relative path to where you copied the `/usr/lib/vmware-ovftool` folder. This is required because metroae will attempt to execute ovftool from within the container. From inside the container, metroae can only access paths that have been mounted from the host. 
In this case, this is the metroae_data directory which is mounted inside the container as `/metroae_data`. For our example, in common.yml you would set `vcenter_ovftool: /metroae_data/vmware-ovftool/ovftool`. - -##### 6. You can check the status of the container at any time using the following command: - -``` -metroae container status -``` - -That's it! Container configuration data and logs will now appear in the newly created `/opt/metroae` directory. Documentation, examples, deployments, and the ansible.log file will appear in the data directory configured during setup, `/tmp/metroae_data` in our examples, above. See [DOCKER.md](DOCKER.md) for specfic details of each command and container management command options. Now you're ready to [customize](CUSTOMIZE.md) for your topology. - -Note: You will continue to use the `metroae` script to run commands, but for best results using the container you should make sure your working directory is *not* the root of a nuage-metroaegit clone workspace. The `metroae` script can be used for both the container and the git clone workspace versions of MetroAE. At run time, the script checks its working directory. If it finds that the current working directory is the root of a git cloned workspace, it assumes that you are running locally. It will *not* execute commands inside the container. When using the container, you should run `metroae` from a working directory other than a git clone workspace. For example, if the git clone workspace is `/home/username/nuage/nuage-metroae`, you can `cd /home/username/nuage` and then invoke the command as `./nuage-metroae/metroae install vsds`. - -### Method Two: Set up Host Environment Using GitHub Clone - -If you prefer not to use a Docker container you can set up your environment with a GitHub clone instead. - -#### System (and Other) Requirements - -* Operating System: Enterprise Linux 7 (EL7) CentOS 7.4 or greater or RHEL 7.4 or greater -* Locally available image files for VCS or VNS deployments - -#### Optionally set up and activate python virtual environment -If you would like to run MetroAE within a python virtual environment, please follow the steps below. - -##### 1. Install pip -If pip isn't installed on the host, install it with the following command. -``` -(sudo) yum install -y python-pip -``` -##### 2. Install the virtualenv package -Install the virtualenv package using the following command. -``` -pip install virtualenv -``` -##### 3. Set up and activate python virtual environment -The python virtual environment can be created as follows. -``` -virtualenv [path_to_virtual_environment_or_name] -``` -After which the environment can be activated. -``` -source [path_to_virtual_environment_or_name]/bin/activate -``` -After activating the virtual environment, you can proceed with the rest of the document. The steps will then be performed within the virtual environment. Virtual environments can be exited/deactivated with this command. -``` -deactivate -``` +* Locally available image files for VCS or VNS deployments within the git clone folder +* Docker engine #### Steps @@ -130,58 +22,27 @@ If Git is not already installed on the host, install it with the following comma yum install -y git ``` -Clone the repo with the following command. +Clone the repo with the following command. NOTE: Please clone the repo in a location that can be +read by libvirt/qemu. 
``` git clone https://github.com/nuagenetworks/nuage-metroae.git ``` -Once the nuage-metroae repo is cloned, you can skip the rest of this procedure by running the MetroAE wizard, run_wizard.py. You can use the wizard to automatically handle the rest of the steps described in this document plus the steps described in [customize](CUSTOMIZE.md). +Once the nuage-metroae repo is cloned, you can skip the rest of this procedure by running the MetroAE wizard. You can use the wizard to automatically handle the rest of the steps described in this document plus the steps described in [customize](CUSTOMIZE.md). ``` -python run_wizard.py +metroae-container wizard ``` If you don't run the wizard, please continue with the rest of the steps in this document. -##### 2. Install Packages - -MetroAE code includes a setup script which installs required packages and modules. If any of the packages or modules are already present, the script does not upgrade or overwrite them. You can run the script multiple times without affecting the system. To install the required packages and modules, run the following command. -``` -sudo ./setup.sh -``` - -The script writes a detailed log into *setup.log* for your reference. A *Setup complete!* messages appears when the packages have been successfully installed. - -If you are using a python virtual environment, you will need to run the following command to install the required pip packages. -``` -pip install -r pip_requirements.txt -``` -Please ensure that the pip selinux package and the yum libselinux-python packages are successfully installed before running MetroAE within a python virtual environment. - -##### 3. **For ESXi / vCenter Only**, install additional packages - - Package | Command - -------- | ------- - pyvmomi | `pip install pyvmomi==6.7.3` - jmespath | `pip install jmespath` - -Note that we have specified version 6.7.3 for pyvmomi. We test with this version. Newer versions of pyvmomi may cause package conflicts. - - If you are installing VSP components in a VMware environment (ESXi/vCenter) download and install the [ovftool](https://www.vmware.com/support/developer/ovf/) from VMware. MetroAE uses ovftool for OVA operations. - -##### 4. **For OpenStack Only**, install additional packages - -``` -pip install --upgrade -r openstack_requirements.txt -``` - -##### 5. Copy ssh keys +##### 2. Copy ssh keys Communication between the MetroAE Host and the target servers (hypervisors) occurs via SSH. For every target server, run the following command to copy the current user's ssh key to the authorized_keys file on the target server: ``` ssh-copy-id [target_server_username]@[target_server] ``` -##### 6. Configure NTP sync +##### 3. Configure NTP sync For proper operation Nuage components require clock synchronization with NTP. Best practice is to synchronize time on the target servers that Nuage VMs are deployed on, preferrably to the same NTP server as used by the components themselves. @@ -215,7 +76,7 @@ Alternatively, you can create the directories under the [nuage_unzipped_files_di /vns/nuh/ /vns/util/ ``` - + Note: After completing setup you will customize for your deployment, and you'll need to add this unzipped files directory path to `common.yml`. ## Next Steps diff --git a/Documentation/TERRAFORM.md b/Documentation/TERRAFORM.md index a4c0c6f85b..064a0736ce 100644 --- a/Documentation/TERRAFORM.md +++ b/Documentation/TERRAFORM.md @@ -41,12 +41,12 @@ Note that [Terraform](https://www.terraform.io/) will need to perform all of the resource definitions... 
provisioner "local-exec" { - command = "./metroae install vsds deploy" + command = "./metroae-container install vsds deploy" working_dir = "/nuage-metroae" } provisioner "local-exec" { - command = "./metroae install vsds postdeploy" + command = "./metroae-container install vsds postdeploy" working_dir = "/nuage-metroae" } } diff --git a/Documentation/UPGRADE_HA.md b/Documentation/UPGRADE_HA.md index a936ff7faf..60bfd9f460 100644 --- a/Documentation/UPGRADE_HA.md +++ b/Documentation/UPGRADE_HA.md @@ -33,13 +33,13 @@ You can use MetroAE to upgrade Active/Standby VSD clusters, also known as geo-re If you want to perform a standby VSD cluster inplace upgrade only, You can use the following command. -`metroae upgrade vsds standby inplace` +`metroae-container upgrade vsds standby inplace` ### VSD Stats-out upgrade By default, Nuage VSD and VSTAT components are deployed in what is referred to as 'stats-in' mode. This refers to the fact that the stats collector process that feeds data to the ES cluster runs 'in' the VSDs. An alternative to this deployment, installation of which is also supported by MetroAE, is a 'stats-out' mode. In 'stats-out', three additional VSDs are deployed specifically to handle the stats collection. We refer to those extra VSD nodes as VSD stats-out nodes. In such a case, the stats collection work is not running 'in' the regular VSD cluster. Stats collection is done 'out' in the cluster of 3 VSD stats-out nodes. ES nodes are also deployed in a special way, with 3 ES nodes in a cluster and 3+ ES nodes configured as 'data' nodes. You can find out more detail about the deployments in the Nuage documentation. -You can use MetroAE to install or upgrade upgrade your stats-out configuration. Special workflows have been created to support the stats-out upgrade. These special workflows have been included automatically in the `metroae upgrade everything` command. Alternatively you can use the step-by-step upgrade procedure to perform your upgrade. The `metroae upgrade vsd stats` command will handle upgrading the separate VSD stats-out nodes. The `metroae upgrade vsd stats inplace` command will apply a patch (in-place) upgrade of the VSD stats-out nodes. +You can use MetroAE to install or upgrade upgrade your stats-out configuration. Special workflows have been created to support the stats-out upgrade. These special workflows have been included automatically in the `metroae-container upgrade everything` command. Alternatively you can use the step-by-step upgrade procedure to perform your upgrade. The `metroae-container upgrade vsd stats` command will handle upgrading the separate VSD stats-out nodes. The `metroae-container upgrade vsd stats inplace` command will apply a patch (in-place) upgrade of the VSD stats-out nodes. Note: Upgrade of the VSD stats-out nodes should take place only after the primary VSD cluster and all Elasticsearch nodes have been upgraded. @@ -60,37 +60,37 @@ If your upgrade plans do not include upgrading VRSs or other dataplane component ### Upgrade All Components including Active/Standby clusters (does not pause for external VRS/dataplane upgrade) - metroae upgrade everything + metroae-container upgrade everything Issuing this workflow will detect if components are clustered (HA) or not and will upgrade all components that are defined in the deployment. This option does not pause until completion to allow VRSs and other dataplane components to be upgraded. If dataplane components need to be upgraded, the following option should be performed instead. 
### Upgrade All Components including Active/Standby clusters (includes pause for VRS) - metroae upgrade beforevrs + metroae-container upgrade beforevrs ( Upgrade the VRSs and other dataplane components ) - metroae upgrade aftervrs + metroae-container upgrade aftervrs Issuing the above workflows will detect if components are clustered (HA) or not and will upgrade all components that are defined in the deployment. This option allows the VRSs and other dataplane components to be upgraded between other components. ### Upgrade Individual Components including Active Standby clusters - metroae upgrade preupgrade health + metroae-container upgrade preupgrade health - metroae upgrade vsds + metroae-container upgrade vsds - metroae upgrade vscs beforevrs + metroae-container upgrade vscs beforevrs ( Upgrade the VRS(s) ) - metroae upgrade vscs aftervrs + metroae-container upgrade vscs aftervrs - metroae upgrade vstats + metroae-container upgrade vstats - metroae upgrade postdeploy + metroae-container upgrade postdeploy - metroae upgrade postupgrade health + metroae-container upgrade postupgrade health Issuing the above workflows will detect if components are clustered (HA) or not and will upgrade all components that are defined in the deployment. This option allows the VRS(s) to be upgraded in-between other components. Performing individual workflows can allow specific components to be skipped or upgraded at different times. @@ -102,13 +102,13 @@ The following workflows will upgrade each component in individual steps. Perform 1. Run health checks on VSD, VSC and VSTAT. - `metroae upgrade preupgrade health` + `metroae-container upgrade preupgrade health` Check the health reports carefully for any reported errors before proceeding. You can run health checks at any time during the upgrade process. 2. Backup the database and decouple the VSD cluster. - `metroae upgrade ha vsd dbbackup` + `metroae-container upgrade ha vsd dbbackup` `vsd_node1` has been decoupled from the cluster and is running in standalone (SA) mode. @@ -116,14 +116,14 @@ The following workflows will upgrade each component in individual steps. Perform **Note** MetroAE provides a simple tool for optionally cleaning up the backup files that are generated during the upgrade process. The tool deletes the backup files for both VSD and VSC. There are two modes for clean-up: the first one deletes all the backups and the second one deletes only the latest backup. By default the tool deletes only the latest backup. If you'd like to clean up the backup files, you can simply run the commands below: - Clean up all the backup files: `metroae upgrade cleanup -e delete_all_backups=true` - Clean up the latest backup files: `metroae upgrade cleanup` + Clean up all the backup files: `metroae-container upgrade cleanup -e delete_all_backups=true` + Clean up the latest backup files: `metroae-container upgrade cleanup` ### Upgrade VSD 1. Power off VSD nodes two and three. - `metroae upgrade ha vsd shutdown23` + `metroae-container upgrade ha vsd shutdown23` `vsd_node2` and `vsd_node3` are shut down; they are not deleted. The new nodes are brought up with the upgrade vmnames you previously specified. You have the option of powering down VSDs manually instead. @@ -131,23 +131,23 @@ The following workflows will upgrade each component in individual steps. Perform 2. Predeploy new VSD nodes two and three.
- `metroae upgrade ha vsd predeploy23` + `metroae-container upgrade ha vsd predeploy23` The new `vsd_node2` and `vsd_node3` are now up and running; they are not yet configured. - **Troubleshooting**: If you experience a failure, delete the newly-created nodes by executing the command `metroae upgrade destroy ha vsd23`, then re-execute the predeploy command. Do NOT run `metroae destroy vsds` as this command destroys the "old" VM which is not what we want to do here. + **Troubleshooting**: If you experience a failure, delete the newly-created nodes by executing the command `metroae-container upgrade destroy ha vsd23`, then re-execute the predeploy command. Do NOT run `metroae-container destroy vsds` as this command destroys the "old" VM which is not what we want to do here. 3. Deploy new VSD nodes two and three. - `metroae upgrade ha vsd deploy23` + `metroae-container upgrade ha vsd deploy23` The VSD nodes have been upgraded. - **Troubleshooting**: If you experience a failure before the VSD install script runs, re-execute the command. If it fails a second time or if the failure occurs after the VSD install script runs, destroy the VMs (either manually or with the command `metroae upgrade destroy ha vsd23`) then re-execute the deploy command. Do NOT run `metroae destroy vsds` as this command destroys the "old" VM. + **Troubleshooting**: If you experience a failure before the VSD install script runs, re-execute the command. If it fails a second time or if the failure occurs after the VSD install script runs, destroy the VMs (either manually or with the command `metroae-container upgrade destroy ha vsd23`) then re-execute the deploy command. Do NOT run `metroae-container destroy vsds` as this command destroys the "old" VM. 4. Power off VSD node one. - `metroae upgrade ha vsd shutdown1` + `metroae-container upgrade ha vsd shutdown1` `vsd_node1` shuts down; it is not deleted. The new node is brought up with the `upgrade_vmname` you previously specified. You have the option of powering down VSD manually instead. @@ -155,23 +155,23 @@ The following workflows will upgrade each component in individual steps. Perform 5. Predeploy the new VSD node one. - `metroae upgrade ha vsd predeploy1` + `metroae-container upgrade ha vsd predeploy1` The new VSD node one is now up and running; it is not yet configured. - **Troubleshooting**: If you experience a failure, delete the newly-created node by executing the command `metroae upgrade destroy ha vsd1`, then re-execute the predeploy command. Do NOT run `vsd_destroy` as this command destroys the "old" VM. + **Troubleshooting**: If you experience a failure, delete the newly-created node by executing the command `metroae-container upgrade destroy ha vsd1`, then re-execute the predeploy command. Do NOT run `vsd_destroy` as this command destroys the "old" VM. 6. Deploy the new VSD node one. - `metroae upgrade ha vsd deploy1` + `metroae-container upgrade ha vsd deploy1` All three VSD nodes are upgraded. - **Troubleshooting**: If you experience a failure before the VSD install script runs, re-execute the command. If it fails a second time or if the failure occurs after the VSD install script runs, destroy the VMs (either manually or with the command `metroae upgrade destroy ha vsd1`) then re-execute the deploy command. Do NOT run `metroae destroy vsds` as this command destroys the "old" VM. + **Troubleshooting**: If you experience a failure before the VSD install script runs, re-execute the command. 
If it fails a second time or if the failure occurs after the VSD install script runs, destroy the VMs (either manually or with the command `metroae-container upgrade destroy ha vsd1`) then re-execute the deploy command. Do NOT run `metroae-container destroy vsds` as this command destroys the "old" VM. 7. Set the VSD upgrade complete flag. - `metroae upgrade ha vsd complete` + `metroae-container upgrade ha vsd complete` The upgrade flag is set to complete. @@ -179,7 +179,7 @@ The following workflows will upgrade each component in individual steps. Perform 8. Apply VSD license (if needed) - `metroae vsd license` + `metroae-container vsd license` The VSD license will be applied. @@ -189,19 +189,19 @@ The following workflows will upgrade each component in individual steps. Perform 1. Run VSC health check (optional). - `metroae upgrade ha vsc health -e report_filename=vsc_preupgrade_health.txt` + `metroae-container upgrade ha vsc health -e report_filename=vsc_preupgrade_health.txt` You performed health checks during preupgrade preparations, but it is good practice to run the check here as well to make sure the VSD upgrade has not caused any problems. 2. Backup and prepare VSC node one. - `metroae upgrade ha vsc backup1` + `metroae-container upgrade ha vsc backup1` **Troubleshooting**: If you experience a failure, you can re-execute the command. 3. Deploy VSC node one. - `metroae upgrade ha vsc deploy1` + `metroae-container upgrade ha vsc deploy1` VSC node one has been upgraded. @@ -209,7 +209,7 @@ The following workflows will upgrade each component in individual steps. Perform 4. Run postdeploy for VSC node one. - `metroae upgrade ha vsc postdeploy1` + `metroae-container upgrade ha vsc postdeploy1` One VSC is now running the **old** version, and one is running the **new** version. @@ -223,13 +223,13 @@ Upgrade your VRS(s) and then continue with this procedure. Do not proceed withou 1. Backup and prepare VSC node two. - `metroae upgrade ha vsc backup2` + `metroae-container upgrade ha vsc backup2` **Troubleshooting**: If you experience a failure, you can re-execute the command. 2. Deploy VSC node two. - `metroae upgrade ha vsc deploy2` + `metroae-container upgrade ha vsc deploy2` VSC node two has been upgraded. @@ -237,7 +237,7 @@ Upgrade your VRS(s) and then continue with this procedure. Do not proceed withou 3. Run postdeploy for VSC node two. - `metroae upgrade ha vsc postdeploy2` + `metroae-container upgrade ha vsc postdeploy2` Both VSCs are now running the **new** version. @@ -249,25 +249,25 @@ Our example includes a VSTAT node. If your topology does not include one, procee 1. Run VSTAT health check (optional). - `metroae upgrade ha vstat health -e report_filename=vstat_preupgrade_health.txt` + `metroae-container upgrade ha vstat health -e report_filename=vstat_preupgrade_health.txt` You performed health checks during preupgrade preparations, but it is good practice to run the check here as well to make sure the VSD upgrade has not caused any problems. 2. Prepare the VSTAT nodes for upgrade. - `metroae upgrade ha vstat prep` + `metroae-container upgrade ha vstat prep` Sets up SSH and disables stats collection. 3. Upgrade the VSTAT nodes. - `metroae upgrade ha vstat inplace` + `metroae-container upgrade ha vstat inplace` Performs an in-place upgrade of the VSTAT nodes. 4. Complete VSTAT upgrade and perform post-upgrade checks. 
- `metroae upgrade ha vstat wrapup` + `metroae-container upgrade ha vstat wrapup` Completes the VSTAT upgrade process, re-enables stats, and performs a series of checks to ensure the VSTAT nodes are healthy. @@ -277,22 +277,22 @@ Our example includes a VSTAT node. If your topology does not include one, procee If the upgrade is a major upgrade, e.g., 6.0.* -> 20.10.*, use the following command to upgrade the VSD stats-out nodes: - `metroae upgrade vsd stats` + `metroae-container upgrade vsd stats` If the upgrade is a patch (in-place), e.g., 20.10.R1 -> 20.10.R4, first make sure that the main VSD cluster has been upgraded/patched. If the upgrade of the main VSD cluster hasn't been done, you can use the following command to patch the main VSD cluster: - `metroae upgrade vsds inplace` + `metroae-container upgrade vsds inplace` When you are certain that the main VSD cluster has been patched, you can use the following command to apply the patch to the VSD stats-out nodes: - `metroae upgrade vsd stats inplace` + `metroae-container upgrade vsd stats inplace` ### Finalize the Upgrade 1. Finalize the settings - `metroae upgrade postdeploy` + `metroae-container upgrade postdeploy` The final steps for the upgrade are executed. @@ -300,7 +300,7 @@ Our example includes a VSTAT node. If your topology does not include one, procee 2. Run a health check. - `metroae upgrade postupgrade health` + `metroae-container upgrade postupgrade health` Health reports are created that can be compared with the ones produced during preupgrade preparations. Investigate carefully any errors or discrepancies. diff --git a/Documentation/UPGRADE_SA.md b/Documentation/UPGRADE_SA.md index 0a18078c4d..2a9b28ea5d 100644 --- a/Documentation/UPGRADE_SA.md +++ b/Documentation/UPGRADE_SA.md @@ -42,37 +42,37 @@ If your topology does not include VRS you can upgrade everything with one comman ### Upgrade All Components (without VRS) - metroae upgrade everything + metroae-container upgrade everything Issuing this workflow will detect if components are clustered (HA) or not and will upgrade all components that are defined in the deployment. This option does not pause until completion to allow VRS(s) to be upgraded. If VRS(s) need to be upgraded, the following option should be performed instead. ### Upgrade All Components (with VRS) - metroae upgrade beforevrs + metroae-container upgrade beforevrs ( Upgrade the VRS(s) ) - metroae upgrade aftervrs + metroae-container upgrade aftervrs Issuing the above workflows will detect if components are clustered (HA) or not and will upgrade all components that are defined in the deployment. This option allows the VRS(s) to be upgraded in-between other components. ### Upgrade Individual Components - metroae upgrade preupgrade health + metroae-container upgrade preupgrade health - metroae upgrade vsds + metroae-container upgrade vsds - metroae upgrade vscs beforevrs + metroae-container upgrade vscs beforevrs ( Upgrade the VRS(s) ) - metroae upgrade vscs aftervrs + metroae-container upgrade vscs aftervrs - metroae upgrade vstats + metroae-container upgrade vstats - metroae upgrade postdeploy + metroae-container upgrade postdeploy - metroae upgrade postupgrade health + metroae-container upgrade postupgrade health Issuing the above workflows will detect if components are clustered (HA) or not and will upgrade all components that are defined in the deployment. This option allows the VRS(s) to be upgraded in-between other components.
Performing individual workflows can allow specific components to be skipped or upgraded at different times. @@ -84,13 +84,13 @@ The following workflows will upgrade each component in individual steps. The st 1. Run health checks on VSD, VSC and VSTAT. - `metroae upgrade preupgrade health` + `metroae-container upgrade preupgrade health` Check the health reports carefully for any reported errors before proceeding. You can run health checks at any time during the upgrade process. 2. Backup the VSD node database. - `metroae upgrade sa vsd dbbackup` + `metroae-container upgrade sa vsd dbbackup` The VSD node database is backed up. @@ -98,14 +98,14 @@ The following workflows will upgrade each component in individual steps. The st **Note** MetroAE provides a simple tool for optionally cleaning up the backup files that are generated during the upgrade process. The tool deletes the backup files for both VSD and VSC. There are two modes for clean-up: the first one deletes all the backups and the second one deletes only the latest backup. By default the tool deletes only the latest backup. If you'd like to clean up the backup files, you can simply run the commands below: - Clean up all the backup files: `metroae vsp_upgrade_cleanup -e delete_all_backups=true` - Clean up the latest backup files: `metroae vsp_upgrade_cleanup` + Clean up all the backup files: `metroae-container vsp_upgrade_cleanup -e delete_all_backups=true` + Clean up the latest backup files: `metroae-container vsp_upgrade_cleanup` ### Upgrade VSD 1. Power off the VSD node. - `metroae upgrade sa vsd shutdown` + `metroae-container upgrade sa vsd shutdown` VSD is shut down; it is not deleted. (The new node is brought up with the `upgrade_vmname` you previously specified.) You have the option of powering down VSD manually instead. @@ -113,30 +113,30 @@ The following workflows will upgrade each component in individual steps. The st 2. Predeploy the new VSD node. - `metroae install vsds predeploy` + `metroae-container install vsds predeploy` The new VSD node is now up and running; it is not yet configured. - **Troubleshooting**: If you experience a failure, delete the new node by executing the command `metroae upgrade destroy sa vsd`, then re-execute the predeploy command. Do NOT run `metroae destroy vsds` as this command destroys the "old" VM which is not what we want to do here. + **Troubleshooting**: If you experience a failure, delete the new node by executing the command `metroae-container upgrade destroy sa vsd`, then re-execute the predeploy command. Do NOT run `metroae-container destroy vsds` as this command destroys the "old" VM which is not what we want to do here. 3. Deploy the new VSD node. - `metroae upgrade sa vsd deploy` + `metroae-container upgrade sa vsd deploy` The VSD node is upgraded. - **Troubleshooting**: If you experience a failure before the VSD install script runs, re-execute the command. If it fails a second time or if the failure occurs after the VSD install script runs, destroy the VMs (either manually or with the command `metroae upgrade destroy sa vsd`) then re-execute the deploy command. Do NOT run `metroae destroy vsds` for this step. + **Troubleshooting**: If you experience a failure before the VSD install script runs, re-execute the command. If it fails a second time or if the failure occurs after the VSD install script runs, destroy the VMs (either manually or with the command `metroae-container upgrade destroy sa vsd`) then re-execute the deploy command.
Do NOT run `metroae-container destroy vsds` for this step. 4. Set the VSD upgrade complete flag. - `metroae upgrade sa vsd complete` + `metroae-container upgrade sa vsd complete` The upgrade flag is set to complete. **Troubleshooting**: If you experience a failure, you can re-execute the command. 5. Apply VSD license (if needed) - `metroae vsd license` + `metroae-container vsd license` The VSD license will be applied. @@ -148,19 +148,19 @@ This example is for one VSC node. If your topology has more than one VSC node, p 1. Run VSC health check (optional). - `metroae upgrade sa vsc health -e report_filename=vsc_preupgrade_health.txt` + `metroae-container upgrade sa vsc health -e report_filename=vsc_preupgrade_health.txt` You performed health checks during preupgrade preparations, but it is good practice to run the check here as well to make sure the VSD upgrade has not caused any problems. 2. Backup and prepare the VSC node. - `metroae upgrade sa vsc backup` + `metroae-container upgrade sa vsc backup` **Troubleshooting**: If you experience failure, you can re-execute the command. 3. Deploy VSC. - `metroae upgrade sa vsc deploy` + `metroae-container upgrade sa vsc deploy` The VSC is upgraded. @@ -168,7 +168,7 @@ This example is for one VSC node. If your topology has more than one VSC node, p 4. Run VSC postdeploy. - `metroae upgrade sa vsc postdeploy` + `metroae-container upgrade sa vsc postdeploy` VSC upgrade is complete. @@ -184,25 +184,25 @@ Our example includes a VSTAT node. If your topology does not include one, procee 1. Run VSTAT health check (optional). - `metroae upgrade sa vstat health -e report_filename=vstat_preupgrade_health.txt` + `metroae-container upgrade sa vstat health -e report_filename=vstat_preupgrade_health.txt` You performed health checks during preupgrade preparations, but it is good practice to run the check here as well to make sure the VSD upgrade has not caused any problems. 2. Prepare the VSTAT node for upgrade. - `metroae upgrade sa vstat prep` + `metroae-container upgrade sa vstat prep` Sets up SSH and disables stats collection. 3. Upgrade the VSTAT node. - `metroae upgrade sa vstat inplace` + `metroae-container upgrade sa vstat inplace` Performs an in-place upgrade of the VSTAT. 4. Complete VSTAT upgrade and perform post-upgrade checks. - `metroae upgrade sa vstat wrapup` + `metroae-container upgrade sa vstat wrapup` Completes the upgrade process, re-enables stats and performs a series of checks to ensure the VSTAT is healthy. @@ -210,7 +210,7 @@ Our example includes a VSTAT node. If your topology does not include one, procee 1. Finalize the settings. - `metroae upgrade postdeploy` + `metroae-container upgrade postdeploy` The final steps for the upgrade are executed. @@ -218,7 +218,7 @@ Our example includes a VSTAT node. If your topology does not include one, procee 2. Run a health check. - `metroae upgrade postupgrade health` + `metroae-container upgrade postupgrade health` Health reports are created that can be compared with the ones produced during preupgrade preparations. Investigate carefully any errors or discrepancies. diff --git a/Documentation/VAULT_ENCRYPT.md b/Documentation/VAULT_ENCRYPT.md index 24b42dec35..c2ed90f118 100644 --- a/Documentation/VAULT_ENCRYPT.md +++ b/Documentation/VAULT_ENCRYPT.md @@ -1,21 +1,29 @@ # Encrypting Sensitive Data in MetroAE -You can safeguard sensitive data in MetroAE by encrypting files with MetroAE's encryption tool. See the steps below for instructions on how to encrypt `credentials.yml`.
It uses Ansible's vault encoding in the background. More details about the vault feature can be found in [documentation](https://docs.ansible.com/ansible/2.4/vault.html) provided by Ansible. +You can safeguard sensitive data in MetroAE by encrypting files with MetroAE's encryption tool. Your credentials can be encrypted when provided through your `credentials.yml` deployment file or in a `credentials` sheet in your Excel deployment spreadsheet. We use Ansible's vault encoding in the background. More details about the vault feature can be found in [documentation](https://docs.ansible.com/ansible/2.4/vault.html) provided by Ansible. -### 1. Create the credentials file to be encrypted +The steps for encrypting your `credentials.yml` deployment file or your credentials in an Excel spreadsheet are as follows: -In your MetroAE deployment folder, create or edit the `credentials.yml` to store credentials required for various Nuage components. This file will be encrypted. +### 1a. Create the credentials file to be encrypted -### 2. Encrypt `credentials.yml` +In your MetroAE deployment folder, create or edit the `credentials.yml` to store credentials required for various Nuage components. This file will be encrypted. -To encrypt `credentials.yml`, run the following command: +### 1b. Create your Excel deployment + +We provide examples of Excel spreadsheets in `examples/excel/`. You can use these as guides to create your deployment. Once you fill out the `credentials` sheet in your spreadsheet, you can proceed to the next step which will encrypt the `credentials` sheet (the rest of your deployment will be unchanged). + +### 2. Encrypt your credentials + +To encrypt your credentials, run the following command: ``` -metroae tools encrypt credentials [deployment_name] +metroae tools encrypt credentials [deployment_name/path_to_excel_spreadsheet] ``` The default deployment name is `default` if not specified. This command will prompt for a master passcode to encrypt the file and will also prompt you to confirm the passcode. Note: All user comments and unsupported fields in the credentials file will be lost. +You do not need to provide a deployment name if you're using an Excel spreadsheet. + ### 3. Running MetroAE with encrypted credentials While running MetroAE commands you can supply the MetroAE passcode via prompt or by setting an environment variable @@ -26,6 +34,15 @@ metroae [action] [deployment_name] This command prompts you to enter the master passcode that you used to encrypt the credentials file. Alternatively, if you have the environment variable METROAE_PASSWORD set to the right passcode, MetroAE does not prompt for the passcode. +If you are using an Excel spreadsheet, you can convert your Excel spreadsheet into a deployment using the converter script: +``` +convert_excel_to_deployment.py [path_to_excel_spreadsheet] [your_deployment_name] +``` +or, you can run any `metroae` playbook that invokes the `build` step (all playbooks aside from `nuage_unip` and `reset_build`) to convert your Excel spreadsheet into a deployment. The example below calls `build` directly, but you can use a different playbook: +``` +metroae build [path_to_excel_spreadsheet] +``` + ## Questions, Feedback, and Contributing Get support via the [forums](https://devops.nuagenetworks.net/forums/) on the [MetroAE site](https://devops.nuagenetworks.net/).
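As a quick end-to-end sketch of the encryption flow described in VAULT_ENCRYPT.md above — `mydeployment` is a placeholder deployment name, the passcode value is purely illustrative, and the command prefix should match whichever wrapper your setup uses (`metroae` as shown in that section, or `metroae-container`):

```
# Encrypt credentials.yml for the "mydeployment" deployment (placeholder name);
# you will be prompted to set and confirm a master passcode
metroae tools encrypt credentials mydeployment

# Export the passcode so subsequent runs do not prompt for it interactively
export METROAE_PASSWORD='your-master-passcode'

# Run any workflow against the deployment; the encrypted credentials are decrypted automatically
metroae install everything mydeployment
```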
diff --git a/Documentation/VSD_ROLLBACK_RESTORE.md b/Documentation/VSD_ROLLBACK_RESTORE.md index e6eb1da7f7..eac930c54a 100644 --- a/Documentation/VSD_ROLLBACK_RESTORE.md +++ b/Documentation/VSD_ROLLBACK_RESTORE.md @@ -41,17 +41,17 @@ If all of these prerequisites and assumptions are true, perform the following st #### 3. Run the following command to bring up new copies of the original VMs (e.g. 5.4.1). -`metroae install vsds predeploy ` +`metroae-container install vsds predeploy ` #### 4. Manually copy (via scp) the pre-upgrade backup to /opt/vsd/data on VSD 1. #### 5. Run the following command to start the installation of the VSD software on the VSD VM. This will also restore the backup that you copied to the first VSD. -`metroae install vsds deploy ` +`metroae-container install vsds deploy ` #### 6. Run the following command to run sanity and connectivity checks on the VSDs: -`metroae install vsds postdeploy ` +`metroae-container install vsds postdeploy ` At this point, your original VSD configuration should be restored and up and running. diff --git a/Documentation/VSD_SERVICES.md b/Documentation/VSD_SERVICES.md index 2f8d079bb8..939f4e67dd 100644 --- a/Documentation/VSD_SERVICES.md +++ b/Documentation/VSD_SERVICES.md @@ -10,12 +10,12 @@ Before attempting to control the VSD services using MetroAE, you must configure MetroAE can perform any of the following VSD service workflows using the command-line tool as follows: - metroae vsd services stop [deployment] [options] - metroae vsd services start [deployment] [options] - metroae vsd services restart [deployment] [options] + metroae-container vsd services stop [deployment] [options] + metroae-container vsd services start [deployment] [options] + metroae-container vsd services restart [deployment] [options] * `deployment`: Name of the deployment directory containing configuration files. See [CUSTOMIZE.md](CUSTOMIZE.md) -* `options`: Other options for the tool. These can be shown using --help. Also, any options not directed to the metroae tool are passed to Ansible. +* `options`: Other options for the tool. These can be shown using --help. Also, any options not directed to the metroae-container tool are passed to Ansible. Note: The VSD services workflows can be used even if you didn't use MetroAE to install or upgrade your VSD deployment. diff --git a/README.md b/README.md index 149483a518..536c3e2f9d 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,6 @@ # Nuage Networks Metro Automation Engine (MetroAE) -Version: 4.6.1 +Version: 5.0.0 MetroAE is an automation engine that can be used to @@ -18,11 +18,12 @@ Nuage Networks components. You specify the individual details of your target pla The [Documentation](Documentation/) directory contains the following guides to assist you in successfully working with MetroAE. The current documentation covers using MetroAE CLI only. +Note: For users migrating from v4.X to v5, there are some major changes worth looking at. These changes have been summarized [here](Documentation/CHANGES_IN_V5.md). If this is your first time using metroae, you can ignore the document. + File name | Description --------- | -------- [RELEASE_NOTES](Documentation/RELEASE_NOTES.md) | New features, resolved issues and known limitations and issues [GETTING_STARTED](Documentation/GETTING_STARTED.md) | MetroAE Quick Start Guide -[DOCKER](Documentation/DOCKER.md) | Installing and using MetroAE Docker container [SETUP](Documentation/SETUP.md) | Set up your environment by cloning the repo, installing packages and configuring access. 
[CUSTOMIZE](Documentation/CUSTOMIZE.md) | Populate variable files for a deployment and unzip Nuage software. [VAULT_ENCRYPT](Documentation/VAULT_ENCRYPT.md) | Safeguard sensitive data @@ -47,11 +48,18 @@ File name | Description ## Important Notes -You can now run `python run_wizard.py` to let MetroAE help you setup your server and create or edit a deployment. The wizard will ask you questions about what you'd like to do and then create the proper files on disk. run_wizard.py is most useful when you are working with a clone of the nuage-metroae repo. It can be used to generate deployments that can be used with the Docker container version of MetroAE, but those deployments would need to be copied manually from the nuage-metroae repo clone to the container's metroae_data directory. +You can now run `python run_wizard.py` to let MetroAE help you set up your server and create or edit a deployment. The wizard will ask you questions about what you'd like to do and then create the proper files on disk. run_wizard.py is most useful when you are working with a clone of the nuage-metroae repo. + +MetroAE uses Docker containers to hold the required environment. Previous versions of MetroAE used the command `metroae`, but the script `metroae-container` is now used. If an older version of the MetroAE container exists, it should be removed from the host by following this procedure: + +- Verify whether the container is running: `docker ps`. Check if metroae is listed in the output. +- `metroae container destroy -y` Please see [RELEASE_NOTES.md](Documentation/RELEASE_NOTES.md) for all the details about what has changed or been added in this release. -All MetroAE operations, including Docker container management, use a command `metroae` for consistent usage and syntax. Please see [DOCKER.md](Documentation/DOCKER.md) for details on configuration and use of the container version of MetroAE. +All MetroAE operations use the `metroae-container` command for consistent usage and syntax. + +The MetroAE git clone version now requires Ansible version 3.4.0 or higher. Using `metroae-container` automatically uses Ansible 3.4.0. ## Supported Components for Deployment and Upgrade @@ -81,7 +89,7 @@ MetroAE supports the deployment and upgrade of Nuage VSP components on the follo ## Main Steps for Using MetroAE -1. [Setup](Documentation/SETUP.md) the MetroAE host. Setup prepares the host for running MetroAE, including retrieving the repository, installing prerequisite packages and setting up SSH access. You also have the option of installing MetroAE in a container, and then working with it via CLI or the GUI. +1. [Setup](Documentation/SETUP.md) the MetroAE host. Setup prepares the host for running MetroAE, including retrieving the repository, installing prerequisite packages and setting up SSH access. 2. Obtain the proper Nuage binary files for your deployment. These can be downloaded from Nuage/Nokia online customer support. @@ -95,7 +103,7 @@ MetroAE supports the deployment and upgrade of Nuage VSP components on the follo MetroAE workflows are the operations that can be performed against a specified deployment.
All supported workflows can be listed via: - metroae --list + metroae-container --list Workflows fall into the following categories: diff --git a/ansible.cfg b/ansible.cfg index df5c6ed284..5a82423b12 100644 --- a/ansible.cfg +++ b/ansible.cfg @@ -12,6 +12,7 @@ inventory = ./src/inventory/hosts display_skipped_hosts = False any_errors_fatal = true stdout_callback = metroae_stdout +remote_tmp = /tmp [persistent_connection] connect_timeout = 5 diff --git a/apt_requirements.txt b/apt_requirements.txt index fe39dba917..a2de0a5637 100755 --- a/apt_requirements.txt +++ b/apt_requirements.txt @@ -1,5 +1,6 @@ -python-pip -python-dev +python3-pip +python3-dev +python3-venv build-essential checkinstall zlib1g-dev diff --git a/deployment_spreadsheet_template.csv b/deployment_spreadsheet_template.csv index a521d3fc31..4881c86adc 100644 --- a/deployment_spreadsheet_template.csv +++ b/deployment_spreadsheet_template.csv @@ -126,6 +126,13 @@ Configuration for VNS setups, ,Management IP Address,,,Management IP of Webfilter VM, ,Management Network Prefix Length,,,Management network prefix length for Webfilter VM, ,Management Network Gateway,,,Management network gateway for Webfilter VM +,, +,NFS Server VM, +,"Configure NFS Server VM using MetroAE. Note: MetroAE will not bring up the NFS server; it will configure it for Elasticsearch mounting.", +,, +,NFSs,NFS Server 1,NFS Server 2,,Descriptions, +,Hostname,,,,Hostname of NFS Server, +,NFS server IP address,,,,IP address of the NFS server. , When using MetroAE for zero-factor bootstrap of NSGvs, ,, diff --git a/deployments/default/common.yml b/deployments/default/common.yml index 5d46a35e38..3ce1e1c0b1 100644 --- a/deployments/default/common.yml +++ b/deployments/default/common.yml @@ -127,7 +127,7 @@ dns_server_list: [ ] # vsd_run_cluster_rtt_test: False # < Ignore RTT Test Errors > -# When true, do not validate the RTT between VSDs in a cluster is less than max RTT +# When true, do not validate that the RTT between VSDs in a cluster is less than the max RTT and continue MetroAE execution upon error; when false, stop MetroAE execution upon error # # vsd_ignore_errors_rtt_test: False @@ -162,7 +162,7 @@ dns_server_list: [ ] # vsd_disk_performance_test_max_time: 300 # < Ignore Disk Performance Test Errors > -# When true, ignore the results of the VSD disk performance test +# When true, ignore the results of the VSD disk performance test and continue MetroAE execution upon error; when false, stop MetroAE execution upon error # # vsd_ignore_disk_performance_test_errors: False diff --git a/deployments/default/credentials.yml b/deployments/default/credentials.yml index 60f9e54175..2d2e758933 100644 --- a/deployments/default/credentials.yml +++ b/deployments/default/credentials.yml @@ -35,22 +35,22 @@ ##### VSD/VSC credentials # < VSD System Username > - # Username to be used while logging into VSD command line. + # VSD username that will be used for logging into the VSD command line. Used for both Install and Upgrade procedures. # # vsd_custom_username: root # < VSD System Password > - # Password for the VSD user used to login to the command line + # VSD password that will be used for logging into the command line. Used for both Install and Upgrade procedures. # # vsd_custom_password: Alcateldc # < VSC System Username > - # VSC username to login into command line. Should have admin privileges. + # VSC username that will be used for logging into the command line (should have admin privileges).
Used for upgrade procedure only. # # vsc_custom_username: "" # < VSC System Password > - # VSC password to login into command line + # VSC password that will be used for logging into the command line. Used for upgrade procedure only. # # vsc_custom_password: "" @@ -59,17 +59,17 @@ ##### VSTAT Credentials # < ElasticSearch (Stats) System Username > - # ElasticSearch (Stats) username to login into command line + # ElasticSearch (Stats) username that will be used for logging into the command line. Used for both Install and Upgrade procedures. # # vstat_custom_username: "" # < ElasticSearch (Stats) System Password > - # ElasticSearch (Stats) password to login into command line + # ElasticSearch (Stats) password that will be used for logging into the command line. Used for both Install and Upgrade procedures. # # vstat_custom_password: "" # < ElasticSearch (Stats) Custom System Password For Root User > - # ElasticSearch (Stats) root password required for Stats Upgrade + # ElasticSearch (Stats) root password required for VSTAT Upgrade only # # vstat_custom_root_password: "" @@ -78,17 +78,17 @@ ##### VSD core services # < VSD API/Architect username > - # Username for API authentication. Must have csproot privileges. Also known as csproot user + # This VSD username (also known as the csproot user) is used for API authentication. Used for both Install and Upgrade procedures. Must have csproot privileges. # # vsd_auth_username: csproot # < VSD API/Architect Password > - # Password for API authentication. Must have csproot privileges. Also known as csproot password + # This VSD password (also known as the csproot password) will be used for API authentication. Used for both Install and Upgrade procedures. Must have csproot privileges. # # vsd_auth_password: csproot # < mysql Password > - # Mysql password for vsd + # This is the VSD MySQL password. Used for both Install and Upgrade procedures. # # vsd_mysql_password: "" @@ -165,7 +165,7 @@ ##### OpenStack Credentials # < OpenStack Username > - # Username for OpenStack + # Username for OpenStack. # # openstack_username: "" @@ -179,7 +179,7 @@ ##### vcenter # < vCenter Username > - # vCenter Username + # vCenter Username. # # vcenter_username: "" @@ -193,12 +193,12 @@ ##### Compute and Proxy # < Compute Username > - # Username for Compute node to install VRS + # Username for Compute node to install VRS. # # compute_username: root # < Compute Password > - # Password for Compute node + # Password for Compute node, and will be used for installation of VRS. # # compute_password: "" @@ -233,7 +233,7 @@ ############ - ##### SMTP + ##### SMTP and NFS # < SMTP username > # Username for SMTP authentication @@ -245,27 +245,32 @@ # # smtp_auth_password: "" - ########## + # < NFS System Username > + # NFS username for logging into the command line, and will be used for NFS configuration. Default user is root. + # + # nfs_custom_username: root + + ################## ##### NETCONF Manager # < NETCONF Manager VM username > - # Username for NETCONF Manager VM. Default user is root + # Username for NETCONF Manager VM, and will be used for the installation of NETCONF Manager. Default user is root. # # netconf_vm_username: root - # < NETCONF Manager VM password for running sudo commands > + # < NETCONF Manager VM password for running sudo commands (used for the installation of NETCONF Manager) > # Password for NETCONF manager VM # # netconf_vm_password: "" # < NETCONF Manager username > - # Username for NETCONF Manager user + # Username for NETCONF Manager user, and will be used for the installation of NETCONF Manager.
# # netconf_username: netconfmgr # < NETCONF Manager password > - # Password for NETCONF manager user + # Password for NETCONF manager user, and will be used for the installation of NETCONF Manager. # # netconf_password: password @@ -274,12 +279,12 @@ ##### Health Report Email Server # < Health Report SMTP Server Username > - # Username for SMTP Server + # Username for SMTP Server, and will be used for Email health report. # # health_report_email_server_username: "" # < Health Report SMTP Server Password > - # Password for SMTP Server + # Password for SMTP Server, and will be used for Email health report. # # health_report_email_server_password: "" @@ -288,7 +293,7 @@ ##### Monit Alerts Email Server # < VSD Monit Mail Server Username > - # Username for the monit mail server + # Username for the monit mail server. # # vsd_monit_mail_server_username: "" @@ -302,22 +307,22 @@ ##### NUH notification application # < NUH notification application 1 username > - # Username for NUH notification application + # Username for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_1_username: "" # < NUH notification application 1 password > - # Password for NUH notification application + # Password for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_1_password: "" # < NUH notification application 2 username > - # Username for NUH notification application + # Username for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_2_username: "" # < NUH notification application 2 password > - # Password for NUH notification application + # Password for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_2_password: "" diff --git a/deployments/default/upgrade.yml b/deployments/default/upgrade.yml index 7126d2d8cb..2d5b6a0b94 100644 --- a/deployments/default/upgrade.yml +++ b/deployments/default/upgrade.yml @@ -36,5 +36,10 @@ # # upgrade_portal: False +# < VSD Pre-upgrade Database Check Script Path > +# Path on the MetroAE host to the VSD pre-upgrade database check script +# +# vsd_preupgrade_db_check_script_path: "" + ############# diff --git a/deployments/default/vsds.yml b/deployments/default/vsds.yml index 157ff49c7e..49d4f10b77 100644 --- a/deployments/default/vsds.yml +++ b/deployments/default/vsds.yml @@ -245,3 +245,22 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + diff --git a/deployments/default/vstats.yml b/deployments/default/vstats.yml index 2b73c28484..e464eaee4f 100644 --- a/deployments/default/vstats.yml +++ b/deployments/default/vstats.yml @@ -258,3 +258,22 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. 
Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + diff --git a/docker/Dockerfile b/docker/Dockerfile new file mode 100644 index 0000000000..69cc333091 --- /dev/null +++ b/docker/Dockerfile @@ -0,0 +1,10 @@ +FROM centos:7 +COPY yum_requirements.txt /root +COPY pip_requirements.txt /root +COPY openstack_requirements.txt /root +RUN cat /root/yum_requirements.txt | xargs -d '\n' yum install -y +RUN rm -f /usr/bin/python && ln -s /usr/bin/python3 /usr/bin/python +ENV LC_ALL=en_US.utf8 +RUN pip3 install -r /root/pip_requirements.txt +RUN pip3 install -r /root/openstack_requirements.txt +WORKDIR /metroae diff --git a/docker/build-container b/docker/build-container new file mode 100755 index 0000000000..932a960c1c --- /dev/null +++ b/docker/build-container @@ -0,0 +1,23 @@ +#!/bin/sh + +cd docker +TAG=$(git rev-parse --short HEAD) +if [[ -n "$HTTP_PROXY" ]] +then + export buildproxy="--build-arg http_proxy=$HTTP_PROXY" +fi +if [[ -n "$HTTPS_PROXY" ]] +then + export buildproxy="$buildproxy --build-arg https_proxy=$HTTPS_PROXY" +fi + +cp -f ../yum_requirements.txt ./ +cp -f ../pip_requirements.txt ./ +cp -f ../openstack_requirements.txt ./ +docker build $buildproxy -t metroaecontainer:$TAG . + +if [[ "$1" != "raw" ]] +then + docker save -o metroae-docker-container.tar metroaecontainer:$TAG + echo "File metroae-docker-container.tar was created" +fi diff --git a/documentation.html b/documentation.html index d178098f3a..d9c5630a95 100644 --- a/documentation.html +++ b/documentation.html @@ -365,9 +365,9 @@
Top +CHANGES_IN_V5 RELEASE_NOTES GETTING_STARTED -DOCKER SETUP CUSTOMIZE VAULT_ENCRYPT @@ -394,7 +394,7 @@ MetroAE

Nuage Networks Metro Automation Engine (MetroAE)

-

Version: 4.6.1

+

Version: 5.0.0

MetroAE is an automation engine that can be used to

  • Install
  • @@ -408,6 +408,7 @@

    Nuage Networks Metro Automation Engine (MetroAE)

    Nuage Networks components. You specify the individual details of your target platform, then let MetroAE do the rest.

    Documentation

    The Documentation directory contains the following guides to assist you in successfully working with MetroAE. The current documentation covers using MetroAE CLI only.

    +

Note: For users migrating from v4.X to v5, there are some major changes worth reviewing. These changes have been summarized here. If this is your first time using MetroAE, you can ignore that document.

    @@ -415,7 +416,6 @@

    Documentation

    - @@ -440,9 +440,15 @@

    Documentation

File name | Description
RELEASE_NOTES | New features, resolved issues and known limitations and issues
GETTING_STARTED | MetroAE Quick Start Guide
DOCKER | Installing and using MetroAE Docker container
SETUP | Set up your environment by cloning the repo, installing packages and configuring access.
CUSTOMIZE | Populate variable files for a deployment and unzip Nuage software.
VAULT_ENCRYPT | Safeguard sensitive data

    Important Notes

    -

    You can now run python run_wizard.py to let MetroAE help you setup your server and create or edit a deployment. The wizard will ask you questions about what you’d like to do and then create the proper files on disk. run_wizard.py is most useful when you are working with a clone of the nuage-metroae repo. It can be used to generate deployments that can be used with the Docker container version of MetroAE, but those deployments would need to be copied manually from the nuage-metroae repo clone to the container’s metroae_data directory.

    +

You can now run python run_wizard.py to let MetroAE help you set up your server and create or edit a deployment. The wizard will ask you questions about what you’d like to do and then create the proper files on disk. run_wizard.py is most useful when you are working with a clone of the nuage-metroae repo.

    +

MetroAE uses a Docker container to hold the required environment. Previous versions of MetroAE used the metroae command; the metroae-container script is now used instead. If an older version of the MetroAE container exists, it should be removed from the host by following this procedure (a short combined example follows the list):

    +
      +
• Verify whether the container is running with docker ps and check whether metroae appears in the output.
    • +
    • metroae container destroy -y
    • +
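As a minimal sketch (assuming the old container is named metroae and the old metroae script is still on your PATH), the removal looks like this:

    docker ps | grep metroae          # check whether the old MetroAE container is still running
    metroae container destroy -y      # stop and remove the old container and its image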

    Please see RELEASE_NOTES.md for all the details about what has changed or been added in this release.

    -

    All MetroAE operations, including Docker container management, use a command metroae for consistent usage and syntax. Please see DOCKER.md for details on configuration and use of the container version of MetroAE.

    +

All MetroAE operations use a single command, metroae-container, for consistent usage and syntax.

    +

The MetroAE git clone now requires Ansible version 3.4.0 or higher. Using metroae-container automatically provides Ansible 3.4.0.

    Supported Components for Deployment and Upgrade

    MetroAE supports deployment and upgrade of the following components as VMs on the target server. These are the same target server types that are supported on the VSP platform.

    supported_components

    @@ -466,7 +472,7 @@

    Typical Nuage Topology

    topology_rev

    Main Steps for Using MetroAE

      -
    1. Setup the MetroAE host. Setup prepares the host for running MetroAE, including retrieving the repository, installing prerequisite packages and setting up SSH access. You also have the option of installing MetroAE in a container, and then working with it via CLI or the GUI.

    2. +
    3. Setup the MetroAE host. Setup prepares the host for running MetroAE, including retrieving the repository, installing prerequisite packages and setting up SSH access.

    4. Obtain the proper Nuage binary files for your deployment. These can be downloaded from Nuage/Nokia online customer support.

    5. Customize your deployment to match your network topology, and describe your Nuage Networks specifics.

    6. Deploy new components, upgrade a standalone or clustered deployment, or run a health check on your system.

    7. @@ -474,7 +480,7 @@

      Main Steps for Using MetroAE

    MetroAE Workflows

    MetroAE workflows are the operations that can be performed against a specified deployment. All supported workflows can be listed via:

    -
    metroae --list
    +
    metroae-container --list
     

    Workflows fall into the following categories:

    Standard Workflows

    @@ -550,24 +556,97 @@

    License

    LICENSE


+
+

Changes in V5

+

We have made the following changes to MetroAE in v5 compared to previous versions. All changes are summarized in this document.

+

MetroAE operation changes

+

In v4 and earlier, MetroAE shipped with two modes of operation: the container and the git clone. With the new release, we have made things easier for users. There is now a single mode of operation, a new MetroAE container, which replaces the old MetroAE container; the old container is no longer supported.

+

This new container is built dynamically on the user's machine. Users need to get the latest MetroAE repository on their machine. The only other prerequisite is to have Docker installed on the MetroAE host machine. The changes for moving from v4 to v5 are described below, depending on the type of installation; a short consolidated example follows each migration list.

+

Moving from MetroAE git clone/download to MetroAE v5

+
    +
  1. Download or git pull the latest MetroAE code
  2. +
3. Make sure Docker is installed and running. The docker ps command can verify this.
  4. +
  5. Use ./metroae-container instead of ./metroae. All other arguments remain the same. +e.g. Instead of this initial command +./metroae install vsds -vvv use ./metroae-container install vsds -vvv +Another example for deployments other than defaults +./metroae install vsds specialdeployment -vvv use ./metroae-container install vsds specialdeployment -vvv
  6. +
  7. For all image paths, make sure they start with /metroae. Here /metroae refers to the present working directory for the user. +e.g. +nuage_unzipped_files_dir: ./images/20.10.R4 changes to nuage_unzipped_files_dir: /metroae/images/20.10.R4
  8. +
  9. All the specified paths for licenses, unzipped files, backup directories should be inside the MetroAE repository. e.g. you cannot specify /opt or /tmp for the MetroAE host. If your mount directory for images is outside the MetroAE folder, you can use a mount bind to put them inside the MetroAE directory. +e.g. +sudo mount --bind -o ro /mnt/nfs-data /<your-repo-location>/images
  10. +
11. Users do not need to run setup at all; all dependencies are handled automatically by the new container in the background.
  12. +
13. For vCenter users only, MetroAE should be cloned or downloaded in a directory where ovftool is present. The entire vmware-ovftool folder must be present. In short, the path to ovftool should be somewhere inside the MetroAE top-level folder. You can bind mount the ovftool into the MetroAE repo location. +sudo mount --bind -o ro /usr/lib/vmware-ovftool /<your-repo-location>/ovftool
  14. +
+
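A minimal first-run sketch after migrating, assuming the repository was cloned to ~/nuage-metroae (a hypothetical location) and Docker is already installed; the container is built dynamically, so the first invocation may take a while:

    cd ~/nuage-metroae
    docker ps                                  # confirm the Docker engine is running
    ./metroae-container --list                 # builds the container on first use, then lists workflows
    grep nuage_unzipped_files_dir deployments/default/common.yml   # paths should now start with /metroae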

Moving from MetroAE v4 container/download to MetroAE v5

+
    +
  1. Download or git pull the latest MetroAE code
  2. +
3. Destroy the old container using the ./metroae-container destroy command
  4. +
5. Use ./metroae-container instead of metroae. All other arguments remain the same. +e.g. Instead of this initial command +metroae install vsds -vvv use ./metroae-container install vsds -vvv +Another example for deployments other than defaults +metroae install vsds specialdeployment -vvv use ./metroae-container install vsds specialdeployment -vvv
  6. +
  7. For all image paths, make sure they start with /metroae instead of /metroae_data. Here /metroae refers to the present working directory for the user. +e.g. +nuage_unzipped_files_dir: /metroae_data/images/20.10.R4 changes to nuage_unzipped_files_dir: /metroae/images/20.10.R4
  8. +
9. You can create an images folder in nuage-metroae and move the /metroae_data mount folder under /<your-repo-location>/images; that way you can replace /metroae_data with /metroae/images in the deployment files.
  10. +
+
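For example, a hedged sketch assuming the old data directory was /tmp/metroae_data and the new clone is at ~/nuage-metroae (both hypothetical paths):

    mkdir -p ~/nuage-metroae/images
    sudo mount --bind -o ro /tmp/metroae_data ~/nuage-metroae/images   # old image files now appear as /metroae/images inside the container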

Ansible and Python Changes

+

MetroAE is now supported with Ansible version 3.4.0 and higher, and Python 3 is now required. Do not worry: the dynamically created container takes care of Python, Ansible, and any other dependencies that are needed. This will not affect the user environment, as all dependencies are installed in the MetroAE container.

+

MetroAE Config

+

MetroAE Config is no longer bundled with MetroAE. Please refer to https://github.com/nuagenetworks/nuage-metroae-config for information on how to use MetroAE Config.

+
+

Metro Automation Engine Release Notes

Release info

    -
  • MetroAE Version 4.6.1
  • +
  • MetroAE Version 5.0.0
  • Nuage Release Alignment 20.10
  • -
  • Date of Release 29-September-2021
  • +
  • Date of Release 4-February-2022

Release Contents

Feature Enhancements

    -
  • None
  • +
• Predeploy NSGV without a VSD license file (METROAE-497)
  • +
  • Added support for hardening Elasticsearch nodes (METROAE-486)
  • +
• Allow custom configuration of RAM, CPU and disk for VSD and VSTAT (METROAE-477)
  • +
  • Run VSD Database pre-upgrade checks (METROAE-428)
  • +
• Support NFS server config using MetroAE (METROAE-557)
  • +
  • Webfilter should optionally use HTTP Proxy (METROAE-493)
  • +
  • Add support for encrypting credentials in Excel spreadsheet (METROAE-552)
  • +
  • Add Ansible 3.4.0 support (METROAE-344)
  • +
• Allow installing/renewing VSTAT (ES) licenses during install, upgrade, and as a standalone workflow (METROAE-591)

Resolved Issues

    -
  • Install NUH without VSD on VCenter (METROAE-523)
  • -
  • Upgrade VSTATS if VSD GUI password has changed (METROAE-533)
  • -
  • Apply VSD license during upgrade (MetroAE-528)
  • +
  • Check for ejabberd license expiry (METROAE-505)
  • +
• Added support for installing the SD-WAN portal without an SMTP address (METROAE-492)
  • +
  • Fixed yum lock timeout issue when installing packages in KVM (METROAE-507)
  • +
• Replaced mgmt_ip with hostname in the known_hosts module (METROAE-481)
  • +
  • Remove unnecessary debug lines from vsc-health (MetroAE-541)
  • +
  • Fixing vsd-destroy to destroy old and new VMs (METROAE-504)
  • +
  • Fix MetroAE errors while deploying using SSH Proxy (MetroAE-574)
  • +
• Fixed check for passwordless SSH from the MetroAE host to hypervisors and components (METROAE-520)
  • +
• Added ES servers to NUH GUI (METROAE-491)
  • +
  • Fix NUH install on 20.10.R5 (METROAE-490)
  • +
• Fixing message issue for docker pull (METROAE-527)
  • +
  • Install NUH optionally without DNS entry (METROAE-375)
  • +
• Add procedure to copy NUH certificates if NUH is installed before VSD (METROAE-559)
  • +
  • Create NUH users and certs for NSG bootstrapping (METROAE-487)
  • +
  • Enhance check to accept both access_port_name and access_ports variables being undefined (METROAE-585)
  • +
  • VSTAT VSS UI should be set for all VSTATS (METROAE-580)
  • +
  • Remove Old MetroAE container support (METROAE-564)
  • +
  • Fix MetroAE VSD in-place upgrades for custom credentials (METROAE-586)
  • +
• Update documentation to support the Ansible version upgrade and the new container (METROAE-588)
  • +
• Document where credentials are used (METROAE-532)
  • +
• Fix MetroAE in-place upgrade from 20.10.R6.1 to 20.10.R6.3 (METROAE-590)
  • +
• When applying branding to the VSD, jboss restarts should happen serially (METROAE-597)
  • +
  • Clean up temporary ISO file on VSD after mounting (METROAE-598)

Test Matrix

This release was tested according to the following test matrix. Other combinations and versions have been tested in previous releases of MetroAE and are likely to work. We encourage you to test MetroAE in your lab before you apply it in production.

@@ -668,12 +747,9 @@

What’s a MetroAE Host?

  • A read/write NFS mount for accessing the Nuage software images
  • -

    2.1 Clone the master branch of the repo onto the MetroAE Host. Read Setup for details.

    +

    2.1 Clone the master branch of the repo onto the MetroAE Host. Read Setup for details. NOTE: Please clone the repo in a location that can be read by libvirt/qemu.

    git clone https://github.com/nuagenetworks/nuage-metroae.git
     
    -

    2.2 Install the required packages. Run as root or sudo. Read Setup for details.

    -
    $ sudo ./setup.sh  
    -

    3. Enable SSH Access

    Passwordless SSH must be configured between the MetroAE host and all target servers, a.k.a. hypervisors. This is accomplished by generating SSH keys for the MetroAE user, then copying those keys to the authorized_keys files for the target_server_username on every target_server. The following steps should be executed on the MetroAE server as the MetroAE user.

Please note that this is only a requirement for target servers that are KVMs. Passwordless SSH is not required for the other target-server types: AWS, OpenStack, and vCenter.
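A minimal sketch of the key exchange for one KVM hypervisor (the username root and the host kvm1.example.com are hypothetical):

    ssh-keygen -t rsa                      # generate a key pair for the MetroAE user if one does not exist
    ssh-copy-id root@kvm1.example.com      # append the public key to authorized_keys on the target server
    ssh root@kvm1.example.com hostname     # verify that passwordless login now works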

    @@ -687,10 +763,9 @@

    3.2.2 When you are going to run as target_server_username on ea

    See Setup for more details about enabling SSH Access.

    4. Install ovftool (for VMware only)

Download and install the ovftool from VMware. MetroAE uses ovftool for OVA operations. Note that MetroAE is tested with ovftool version 4.3, which is required for proper operation.
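Once installed, you can confirm the version (a quick check, assuming ovftool is on your PATH):

    ovftool --version     # should report a 4.3.x version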

    -

    Note that running the metroae Docker container for VMware installations and upgrades requires special handling of the location of the ovftool command. Please see SETUP.md for details.

    5. Prepare your environment

    5.1 Unzip Nuage files

    -

    Execute: metroae tools unzip images <zipped_directory> <unzip_directory>

    +

    Execute: metroae-container tools unzip images <zipped_directory> <unzip_directory>

    See SETUP.md for details.
          Be sure that Nuage packages (tar.gz) are available on localhost (MetroAE host),
          either in a native directory or NFS-mounted.
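For example, a hedged sketch assuming the tar.gz files were copied to an images/20.10.R4-packed directory inside the git clone (a hypothetical name) and should be unzipped to images/20.10.R4:

    ./metroae-container tools unzip images /metroae/images/20.10.R4-packed /metroae/images/20.10.R4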

    @@ -719,192 +794,36 @@

    Questions, Feedback, and Contributing

    You may also contribute to MetroAE by submitting your own code to the project.


    -
    -

    Deploying Components with the MetroAE Docker Container

    -

    This file describes many of the details of the commands used for managing MetroAE distributed via container. For information on how to setup Docker and the MetroAE Docker container, see SETUP.md.

    -

    The metroae Command

    -

    The metroae command is at the heart of interacting with the MetroAE container. It is used both to manage the container and to execute MetroAE inside the container. You can access all of the command options via metroae <workflow> <component> [action] [deployment] [options].

    -

    metroae Container Management Command Options

    -

    The following command options are supported by the metroae command:

    -

    help -displays the help text for the command

    -

    pull - pulls the MetroAE container from the registry. By default, the latest container is pulled. You can also specify a valid container tag to pull another version, e.g. metroae container pull 1.0.0.

    -

    download - pulls the MetroAE container in tar format. This allows you to copy the tar file to systems behind firewalls and convert to Docker images by using docker load command.

    -

    setup - setup completes the steps necessary to get the MetroAE container up and running. It prompts you for the full paths to a data directory that the container uses on your local disk. The setup action will create the subdirectory metroae_data on disk, then mount it as /metroae_data/ inside the container. When using the MetroAE container, you must provide paths relative to this path as seen from inside the container. For example, if you tell setup to use /tmp for the data directory, setup will create /tmp/metroae_data on your host. The setup will also mount this directory inside the container as /metroae_data/. If you copy your tar.gz files to /tmp/metroae_data/images/6.0.1/ on the host, the container sees this as /data/images/6.0.1/. So, when using unzip or setting nuage_unzipped_files_dir in common.yml, you would specify /metroae_data/images/6.0.1/ as your path.

    -

    Running setup multiple times replaces the existing container, but it does not remove the data on your local disk.

    -

    metroae container start - starts the container using the settings from setup

    -

    metroae container stop - stops the running container

    -

    metroae container status - displays the container status along with container settings

    -

    metroae container destroy - stops and removes the metroae container along with the container image. Use this command when you want to replace an existing container with a new one, no need to run setup again afterwards.

    -

    metroae container update - upgrades the container to the latest available release image. You don’t need to run setup again after an upgrade.

    -

    metroae tools unzip images - unzips Nuage Networks tar.gz files into the images directory specified during the setup operation. Use of this command requires that the tar.gz files be placed in either the data or images directory that you specified during setup. -See the current values of the data and images directories by executing the status command.

    -

    metroae tools generate example - generates an example for the specified schema and puts it in the examples directory under the data directory that you specified during setup. You can get the current values of the data and images directories by executing the status command.

    -

    metroae tools encrypt credentials - sets the encryption credentials for encrypting secure user data, e.g. passwords

    -

    metroae container ssh copyid - copies the container’s public key into the ssh authorized_keys file on the specified server. This key is required for passwordless ssh access to all target servers. Usage: metroae container ssh copyid user@host_or_ip

    -

    metroae --list - lists the workflows that are supported by MetroAE

    -

    metroae --ansible-help - shows the help options for the underlying Ansible engine

    -

    metroae Workflow Command Options

    -

    The MetroAE container is designed so that you run MetroAE workflows, e.g. install, from the command line using the metroae command. The format of the command line is:

    -
    `metroae <workflow> <component> [operation] [deployment] [options]`
    -
    -

    Troubleshooting

    -

    SSH connection problems

    -

    If MetroAE is unable to authenticate with your target server, chances are that passwordless ssh has not been configured properly. The public key of the container must be copied to the authorized_keys file on the target server. Use the copy-ssh-id command option, e.g. metroae container copy-ssh-id user@host_name_or_ip.

    -

    Where is my data directory?

    -

    You can find out about the current state of your container, including the path to the metroae_data directory, by executing the container status command, metroae container status.

    -

    General errors

    -

    metroae.log and ansible.log are located in the data directory you specified during setup.

    -

    Manually use the container without the script (Nokia internal support only)

    -

    Pull the container

    -
    docker pull registry.mv.nuagenetworks.net:5001/metroae:1.0
    -
    -

    Run the container

    -

    docker run -e USER_NAME=’user name for the container’ -e GROUP_NAME=’group name for the container’ -d $networkArgs -v 'path to the data mount point’:/data:Z -v 'path to images mount point’:/images:Z --name metroae registry.mv.nuagenetworks.net:5001/metroae:1.0

    -

    For Linux host

    -
    networkArgs is '--network host'
    -
    -

    For Mac host

    -
    networkArgs is '-p "UI Port":5001'
    -
    -

    Execute MetroAE Commands

    -
    docker exec 'running container id' /source/nuage-metroae/metroae playbook deployment
    -
    -

    Stop the container

    -
    docker stop 'running container id'
    -
    -

    Remove the container

    -
    docker rm 'container id'
    -
    -

    Remove MetroAE image

    -
    docker rmi 'image id'
    -
    -

    Questions, Feedback, and Contributing

    -

    Get support via the forums on the MetroAE site. -Ask questions and contact us directly at devops@nuagenetworks.net.

    -

    Report bugs you find and suggest new features and enhancements via the GitHub Issues feature.

    -

    You may also contribute to MetroAE by submitting your own code to the project.

    -
    -

    Setting Up the Environment

    -

    You can set up the MetroAE host environment either with a Docker container or with a GitHub clone.

    +

You can set up the MetroAE host environment with a GitHub clone. +Note that Docker is required on the host in order to run MetroAE. All file paths in configuration files must be relative to the git clone folder.

    Environment

    -

    Method One: Set up Host Environment Using Docker Container

    -

    Using a Docker container results in a similar setup as a GitHub clone, plus it delivers the following features:

    -
      -
    • All prerequisites are satisfied by the container. Your only requirement for your server is that it run Docker engine.
    • -
    • Your data is located in the file system of the host where Docker is running. You don’t need to get inside the container.
    • -
    • A future release will include Day 0 Configuration capabilities.
    • -
    +

    Set up Host Environment Using GitHub Clone

    System (and Other) Requirements

    • Operating System: Enterprise Linux 7 (EL7) CentOS 7.4 or greater or RHEL 7.4 or greater
    • -
    • Locally available image files for VCS or VNS deployments
    • -
    • Docker Engine 1.13.1 or greater installed and running
    • -
    • Container operations may need to be performed with elevated privileges (root, sudo)
    • +
    • Locally available image files for VCS or VNS deployments within the git clone folder
    • +
    • Docker engine

    Steps

    -
    1. Get a copy of the metroae script from the github repo
    -
    https://github.com/nuagenetworks/nuage-metroae/blob/master/metroae
    -
    -

    You can also copy the script out of a cloned workspace on your local machine. You can copy the metroae script to any directory of your choosing, e.g. /usr/local/bin. Note that you may need to set the script to be executeable via chmod +x.

    -
    2. Pull the latest Docker container using the following command:
    -
    metroae container pull
    -
    -
    3. Setup and start the Docker container using the following command:
    -
    metroae container setup [path to data directory]
    -
    -

    You can optionally specify the data directory path. If you don’t specify the data directory on the command line, you will be prompted to enter one during setup. This path is required for container operation. The data directory is the place where docs, examples, Nuage images, and your deployment files will be kept and edited. Note that setup will create a subdirectory beneath the data directory you specify, metroae_data. For example, if you specify /tmp for your data directory path during setup, setup will create /tmp/metroae_data for you. Setup will copy docs, logs, and deployment files to /tmp/metroae_data. Inside the container itself, setup will mount /tmp/metroae_data as /metroae_data/. Therefore, when you specify path names for metroae when using the container, you should always specify the container-relative path. For example, if you copy your tar.gz files to /tmp/metroae_data/6.0.1 on the host, this will appear as /metroae_data/6.0.1 inside the container. When you use the unzip-files action on the container, then, you would specify a source path as /metroae_data/6.0.1. When you complete the nuage_unzipped_files_dir variable in common.yml, you would also specify /metroae_data/6.0.1.

    -

    Note that you can run setup multiple times and setup will not destroy or modify the data you have on disk. If you specify the same data and imafges directories that you had specified on earlier runs, metroae will pick up the existing data. Thus you can update the container as often as you like and your deployments will be preserved.

    -

    Note that you can stop and start the MetroAE container at any time using these commands:

    -
    metroae container stop
    -metroae container start
    -
    -
    4. For KVM Only, copy the container ssh keys to your KVM target servers (aka hypervisors) using the following command:
    -
    metroae container ssh copyid [target_server_username]@[target_server]
    -
    -

    This command copies the container’s public key into the ssh authorized_keys file on the specified target server. This key is required for passwordless ssh access from the container to the target servers. The command must be run once for every KVM target server. This step should be skipped if your target-server type is anything but KVM, e.g. vCenter or OpenStack.

    -
    5. For ESXi / vCenter Only, install ovftool and copy to metroae_data directory
    -

    When running the MetroAE Docker container, the container will need to have access to the ovftool command installed on the Docker host. The following steps are suggested:

    -
    5.1. Install ovftool
    -

    Download and install the ovftool from VMware.

    -
    5.2. Copy ovftool installation to metroae_data directory
    -

    The ovftool command and supporting files are usually installed in the /usr/lib/vmware-ovftool on the host. In order to the metroae container to be able to access these files, you must copy the entire folder to the metroae_data directory on the host. For example, if you have configured the container to use /tmp/metroae_data on your host, you would copy /usr/lib/vmware-ovftool to /tmp/metroae_data/vmware-ovftool. Note: Docker does not support following symlinks. You must copy the files as instructed.

    -
    5.3. Configure the ovftool path in your deployment
    -

    The path to the ovftool is configured in your deployment in the common.yml file. Uncomment and set the variable ‘vcenter_ovftool’ to the container-relative path to where you copied the /usr/lib/vmware-ovftool folder. This is required because metroae will attempt to execute ovftool from within the container. From inside the container, metroae can only access paths that have been mounted from the host. In this case, this is the metroae_data directory which is mounted inside the container as /metroae_data. For our example, in common.yml you would set vcenter_ovftool: /metroae_data/vmware-ovftool/ovftool.

    -
    6. You can check the status of the container at any time using the following command:
    -
    metroae container status
    -
    -

    That’s it! Container configuration data and logs will now appear in the newly created /opt/metroae directory. Documentation, examples, deployments, and the ansible.log file will appear in the data directory configured during setup, /tmp/metroae_data in our examples, above. See DOCKER.md for specfic details of each command and container management command options. Now you’re ready to customize for your topology.

    -

    Note: You will continue to use the metroae script to run commands, but for best results using the container you should make sure your working directory is not the root of a nuage-metroaegit clone workspace. The metroae script can be used for both the container and the git clone workspace versions of MetroAE. At run time, the script checks its working directory. If it finds that the current working directory is the root of a git cloned workspace, it assumes that you are running locally. It will not execute commands inside the container. When using the container, you should run metroae from a working directory other than a git clone workspace. For example, if the git clone workspace is /home/username/nuage/nuage-metroae, you can cd /home/username/nuage and then invoke the command as ./nuage-metroae/metroae install vsds.

    -

    Method Two: Set up Host Environment Using GitHub Clone

    -

    If you prefer not to use a Docker container you can set up your environment with a GitHub clone instead.

    -

    System (and Other) Requirements

    -
      -
    • Operating System: Enterprise Linux 7 (EL7) CentOS 7.4 or greater or RHEL 7.4 or greater
    • -
    • Locally available image files for VCS or VNS deployments
    • -
    -

    Optionally set up and activate python virtual environment

    -

    If you would like to run MetroAE within a python virtual environment, please follow the steps below.

    -
    1. Install pip
    -

    If pip isn’t installed on the host, install it with the following command.

    -
    (sudo) yum install -y python-pip
    -
    -
    2. Install the virtualenv package
    -

    Install the virtualenv package using the following command.

    -
    pip install virtualenv
    -
    -
    3. Set up and activate python virtual environment
    -

    The python virtual environment can be created as follows.

    -
    virtualenv [path_to_virtual_environment_or_name]
    -
    -

    After which the environment can be activated.

    -
    source [path_to_virtual_environment_or_name]/bin/activate
    -
    -

    After activating the virtual environment, you can proceed with the rest of the document. The steps will then be performed within the virtual environment. Virtual environments can be exited/deactivated with this command.

    -
    deactivate
    -
    -

    Steps

    1. Clone Repository

    If Git is not already installed on the host, install it with the following command.

    yum install -y git
     
    -

    Clone the repo with the following command.

    +

    Clone the repo with the following command. NOTE: Please clone the repo in a location that can be +read by libvirt/qemu.

    git clone https://github.com/nuagenetworks/nuage-metroae.git
     
    -

    Once the nuage-metroae repo is cloned, you can skip the rest of this procedure by running the MetroAE wizard, run_wizard.py. You can use the wizard to automatically handle the rest of the steps described in this document plus the steps described in customize.

    -
    python run_wizard.py
    +

    Once the nuage-metroae repo is cloned, you can skip the rest of this procedure by running the MetroAE wizard. You can use the wizard to automatically handle the rest of the steps described in this document plus the steps described in customize.

    +
    metroae-container wizard
     

    If you don’t run the wizard, please continue with the rest of the steps in this document.

    -
    2. Install Packages
    -

    MetroAE code includes a setup script which installs required packages and modules. If any of the packages or modules are already present, the script does not upgrade or overwrite them. You can run the script multiple times without affecting the system. To install the required packages and modules, run the following command.

    -
    sudo ./setup.sh
    -
    -

    The script writes a detailed log into setup.log for your reference. A Setup complete! messages appears when the packages have been successfully installed.

    -

    If you are using a python virtual environment, you will need to run the following command to install the required pip packages.

    -
    pip install -r pip_requirements.txt
    -
    -

    Please ensure that the pip selinux package and the yum libselinux-python packages are successfully installed before running MetroAE within a python virtual environment.

    -
    3. For ESXi / vCenter Only, install additional packages
    - - - - - - - - -
    PackageCommand
    pyvmomipip install pyvmomi==6.7.3
    jmespathpip install jmespath
    -

    Note that we have specified version 6.7.3 for pyvmomi. We test with this version. Newer versions of pyvmomi may cause package conflicts.

    -

    If you are installing VSP components in a VMware environment (ESXi/vCenter) download and install the ovftool from VMware. MetroAE uses ovftool for OVA operations.

    -
    4. For OpenStack Only, install additional packages
    -
    pip install --upgrade -r openstack_requirements.txt
    -
    -
    5. Copy ssh keys
    +
    2. Copy ssh keys

    Communication between the MetroAE Host and the target servers (hypervisors) occurs via SSH. For every target server, run the following command to copy the current user’s ssh key to the authorized_keys file on the target server:

    ssh-copy-id [target_server_username]@[target_server]
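If you have several target servers, a short sketch (the usernames and host names are hypothetical):

    for host in kvm1.example.com kvm2.example.com; do
        ssh-copy-id root@"$host"
    done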
     
    -
    6. Configure NTP sync
    +
    3. Configure NTP sync

For proper operation, Nuage components require clock synchronization with NTP. Best practice is to synchronize time on the target servers that Nuage VMs are deployed on, preferably to the same NTP server as used by the components themselves.
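A quick way to check synchronization on an EL7 target server (a sketch assuming chrony is the NTP client in use):

    timedatectl | grep -i ntp      # shows whether the clock is NTP-synchronized
    chronyc sources -v             # lists the NTP servers in use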

    Unzip Nuage Networks tar.gz files

    Ensure that the required unzipped Nuage software files (QCOW2, OVA, and Linux Package files) are available for the components to be installed. Use one of the two methods below.

    @@ -945,17 +864,19 @@

    Customizing Components for a Deployment

    Prerequisites / Requirements

    If you have not already set up your MetroAE Host environment, see SETUP.md before proceeding.

    What is a Deployment?

    -

    Deployments are component configuration sets. You can have one or more deployments in your setup. When you are working with the MetroAE container, you can find the deployment files under the data directory you specified during metroae container setup. For example, if you specified /tmp as your data directory path, metroae container setup created /tmp/metroae_data and copied the default deployment to /tmp/metroae_data/deployments. When you are working with MetroAE from a workspace you created using git clone, deployments are stored in the workspace directory nuage-metroae/deployments/<deployment_name>/. In both cases, the files within each deployment directory describe all of the components you want to install or upgrade.

    +

    Deployments are component configuration sets. You can have one or more deployments in your setup. +The files within each deployment directory describe all of the components you want to install or upgrade.

    If you issue:

    -
    ./metroae install everything
    +
    ./metroae-container install everything
     

    The files under nuage-metroae/deployments/default will be used to do an install.

    If you issue:

    -
    ./metroae install everything mydeployment
    +
    ./metroae-container install everything mydeployment
     

    The files under nuage-metroae/deployments/mydeployment will be used to do an install. This allows for different sets of component definitions for various projects.

    +

The deployment files and the image files must be located within the git clone folder. The Docker container mounts the git clone folder inside the container and does not have access to files outside of that location. All file paths must be defined relative to the git clone folder and never as absolute paths (see the sketch below).
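As an illustration (a sketch; the clone location and deployment name are hypothetical), a layout that works with the container looks like this:

    # inside the git clone at ~/nuage-metroae:
    #   deployments/mydeployment/   referenced inside the container as /metroae/deployments/mydeployment
    #   images/20.10.R4/            referenced in common.yml as nuage_unzipped_files_dir: /metroae/images/20.10.R4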

    You can also do:

    -
    ./metroae install everything deployment_spreadsheet_name.xlsx
    +
    ./metroae-container install everything deployment_spreadsheet_name.xlsx
     

    to run the install everything playbook using the deployment information present in the specified Excel spreadsheet. More details about Excel deployments can be found in the Customize Your Own Deployment section below.

Each time you issue MetroAE, the inventory will be completely rebuilt from the deployment name specified. This will overwrite any previous inventory, so it will reflect exactly what is configured in the deployment that was specified.

    @@ -978,15 +899,15 @@

    Customize Your Own Deployment

    You can also use the MetroAE spreadsheet to create your deployment. You can find the MetroAE CSV template in deployment_spreadsheet_template.csv. When you finish customizing the spreadsheet, save it to a CSV file of your own naming. Then you can either convert the CSV directly to a deployment using this syntax:

    convert_csv_to_deployment.py deployment_spreadsheet_name.csv your_deployment_name
     
    -

    or you can let metroae handle the conversion for you by specifying the name of the CSV file instead of the name of your deployment::

    -
    metroae deployment_spreadsheet_name.csv
    +

or you can let metroae-container handle the conversion for you by specifying the name of the CSV file instead of the name of your deployment:

    +
    metroae-container deployment_spreadsheet_name.csv
     

    This will create or update a deployment with the same name as the CSV file - without the extension.

    MetroAE also supports deployment files filled out in an Excel (.xlsx) spreadsheet. You can find examples under the examples/excel directory. Similar to a csv-based deployment, you have multiple options for creating a deployment from an Excel spreadsheet. You can run the converter script directly:

    convert_excel_to_deployment.py deployment_spreadsheet_name.xlsx your_deployment_name
     
    -

    or you can use metroae do the conversion for you by running build, like this:

    -
    metroae build deployment_spreadsheet_name.xlsx
    +

or you can have metroae-container do the conversion for you by running build, like this:

    +
    metroae-container build deployment_spreadsheet_name.xlsx
     

    For Excel deployments, all playbooks (aside from nuage_unzip and reset_build) invoke the build step and can replace build in the command above.

The deployment files that can be configured using the wizard or a spreadsheet (csv or xlsx), or edited manually, are listed below.

    @@ -1023,9 +944,26 @@

    Notes on vsds.yml for active-standby, geo-redundant deployment

    When installing or upgrading an active-standby, geo-redundant cluster, all 6 VSDs must be defined in the vsds.yml file in your deployment. The first 3 VSDs are assumed to be active and the second 3 VSDs are assumed to be standby.

    vstats.yml

    vstats.yml contains the definition of the VSTATs (VSD Statistics) to be operated on in this deployment. This file should be present in your deployment only if you are specifying VSTATs. If not provided, no VSTATs will be operated on. This file is of yaml list type. If it contains exactly 3 VSTAT definitions, a cluster installation or upgrade will be executed. Any other number of VSTAT definitions will result in 1 or more stand-alone VSTATs being installed or upgraded.

    +

    VSD RTT Performance Testing

    +

    You can use MetroAE to verify that your VSD setup has sufficient RTT performance. By default, the RTT performance test will run at the beginning of the VSD deploy step, prior to installing the VSD software. The parameters that you can use to control the operation of the test are available in 'common.yml’:

    +
      +
    • vsd_run_cluster_rtt_test When true, run RTT tests between VSDs in a cluster or standby/active cluster, else skip the test
    • +
    • vsd_ignore_errors_rtt_test When true, continue MetroAE execution upon error and do not validate the RTT between VSDs in a cluster is less than max RTT, else stop MetroAE execution upon error
    • +
    • vsd_max_cluster_rtt_msec Maximum RTT in milliseconds between VSDs in a cluster
    • +
    • vsd_max_active_standby_rtt_msec Maximum RTT in milliseconds between Active and Standby VSDs
    • +
    +

In addition to the automatic execution that takes place in the VSD deploy step, you can run the VSD RTT test at any time using metroae-container vsd test rtt.

    VSD Disk Performance Testing

    -

    You can use MetroAE to verify that your VSD setup has sufficient disk performance (IOPS). By default, the disk performance test will run at the beginning of the VSD deploy step, prior to installing the VSD software. The parameters that you can use to control the operation of the test are available in 'common.yml’. You can skip the test, specify the total size of all the files used in the test, and modify the minimum threshold requirement in IOPS for the test. Note that to minimize the effects of file system caching, the total file size must exceed the total RAM on the VSD. If MetroAE finds that the test is enabled and the disk performance is below the threshold, an error will occur and installation will stop. The default values that are provided for the test are recommended for best VSD performance in most cases. Your specific situation may require different values or to skip the test entirely.

    -

    In addition to the automatic execution that takes place in the VSD deploy step, you can run the VSD disk performance test at any time using metroae vsd test disk.

    +

    You can use MetroAE to verify that your VSD setup has sufficient disk performance (IOPS). By default, the disk performance test will run at the beginning of the VSD deploy step, prior to installing the VSD software. The parameters that you can use to control the operation of the test are available in 'common.yml’:

    +
      +
    • vsd_run_disk_performance_test Run the VSD disk performance test when true, else skip the test
    • +
    • vsd_disk_performance_test_total_file_size Sets the total size of created files for VSD disk performance test. For a valid measurement, the total file size must be larger than VSD RAM to minimize the effects of caching.
    • +
    • vsd_disk_performance_test_minimum_threshold Sets the minimum value for VSD disk performance test in IOPS
    • +
    • vsd_disk_performance_test_max_time Sets the duration of the VSD disk performance test in seconds
    • +
    • vsd_ignore_disk_performance_test_errors When true, continue MetroAE execution upon error and ignore the results of the VSD disk performance test, else stop MetroAE execution upon error
    • +
    +

    You can skip the test, specify the total size of all the files used in the test, and modify the minimum threshold requirement in IOPS for the test. Note that to minimize the effects of file system caching, the total file size must exceed the total RAM on the VSD. If MetroAE finds that the test is enabled and the disk performance is below the threshold, an error will occur and installation will stop. The default values that are provided for the test are recommended for best VSD performance in most cases. Your specific situation may require different values or to skip the test entirely.

    +

    In addition to the automatic execution that takes place in the VSD deploy step, you can run the VSD disk performance test at any time using metroae-container vsd test disk.
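For example (a sketch; mydeployment is a hypothetical deployment name, and the trailing deployment argument follows the general command syntax used throughout this guide):

    ./metroae-container vsd test disk mydeployment    # standalone disk performance (IOPS) test
    ./metroae-container vsd test rtt mydeployment     # standalone RTT test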

    Enabling post-installation security features

    You can use MetroAE to enable optional post-installation security features to ‘harden’ the installation. Your deployment contains a number of optional variables that can be used to accomplish this. These variables are described, below. For more detail, please see the Nuage VSP Install Guide.

    vsds.yml

    @@ -1072,11 +1010,11 @@

    Hosting your deployment files outside of the repo

    When you are contributing code, or pulling new versions of MetroAE quite often, it may make sense to host your variable files in a separate directory outside of nuage-metroae/deployments/. A deployment directory in any location can be specified instead of a deployment name when issuing the metroae command.

    Generating example deployment configuration files

    A sample of the deployment configuration files are provided in the deployments/default/ directory and also in examples/. If these are overwritten or deleted or if a “no frills” version of the files with only the minimum required parameters are desired, they can be generated with the following command:

    -
    metroae tools generate example --schema <schema_filename> [--no-comments]
    +
    metroae-container tools generate example --schema <schema_filename> [--no-comments]
     

    This will print an example of the deployment file specified by <schema_filename> under the schemas/ directory to the screen. The optional --no-comments will print the minimum required parameters (with no documentation).

    Example:

    -
    metroae tools generate example --schema vsds > deployments/new/vsds.yml
    +
    metroae-container tools generate example --schema vsds > deployments/new/vsds.yml
     

    Creates an example vsds configuration file under the “new” deployment.

    Running MetroAE using a Proxy VM

    @@ -1092,21 +1030,31 @@

    Questions, Feedback, and Contributing


    Encrypting Sensitive Data in MetroAE

    -

    You can safeguard sensitive data in MetroAE by encrypting files with MetroAE’s encryption tool. See the steps below for instructions on how to encrypt credentials.yml. It uses Ansible’s vault encoding in the background. More details about the vault feature can be found in documentation provided by Ansible.

    -

    1. Create the credentials file to be encrypted

    +

    You can safeguard sensitive data in MetroAE by encrypting files with MetroAE’s encryption tool. Your credentials can be encrypted when provided through your credentials.yml deployment file or in a credentials sheet in your Excel deployment spreadsheet. We use Ansible’s vault encoding in the background. More details about the vault feature can be found in documentation provided by Ansible.

    +

    The steps for encrypting your credentials.yml deployment file or your credentials in an Excel spreadsheet are as follows:

    +

    1a. Create the credentials file to be encrypted

    In your MetroAE deployment folder, create or edit the credentials.yml to store credentials required for various Nuage components. This file will be encrypted.

    -

    2. Encrypt credentials.yml

    -

    To encrypt credentials.yml, run the following command:

    -
    metroae tools encrypt credentials [deployment_name]
    +

    1b. Create your Excel deployment

    +

    We provide examples of Excel spreadsheets in examples/excel/. You can use these as guides to create your deployment. Once you fill out the credentials sheet in your spreadsheet, you can proceed to the next step which will encrypt the credentials sheet (the rest of your deployment will be unchanged).

    +

    2. Encrypt your credentials

    +

    To encrypt your credentials, run the following command:

    +
metroae-container tools encrypt credentials [deployment_name/path_to_excel_spreadsheet]
     

The default deployment name is default if not specified. This command will prompt for a master passcode to encrypt the file and will also prompt to confirm the passcode. Note: All user comments and unsupported fields in the credentials file will be lost.

    +

    You do not need to provide a deployment name if you’re using an Excel spreadsheet.

    3. Running MetroAE with encrypted credentials

    While running MetroAE commands you can supply the MetroAE passcode via prompt or by setting an environment variable

    metroae <workflow> <component> [action] [deployment_name]
     

    This command prompts you to enter the master passcode that you used to encrypt the credentials file. Alternatively, if you have the environment variable METROAE_PASSWORD set to the right passcode, MetroAE does not prompt for the passcode.
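For example (a sketch; the passcode and deployment name are hypothetical):

    export METROAE_PASSWORD='my-master-passcode'     # the passcode used when encrypting credentials.yml
    ./metroae-container install vsds mydeployment    # runs without prompting for the passcode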

    +

    If you are using an Excel spreadsheet, you can convert your Excel spreadsheet into a deployment using the converter script:

    +
    convert_excel_to_deployment.py [path_to_excel_spreadsheet] [your_deployment_name]
    +
    +

or, you can run any metroae playbook that invokes the build step (all playbooks aside from nuage_unzip and reset_build) to convert your Excel spreadsheet into a deployment. The example below calls build directly, but you can use a different playbook:

    +
metroae-container build [path_to_excel_spreadsheet]
    +

    Questions, Feedback, and Contributing

    Get support via the forums on the MetroAE site.
    Ask questions and contact us directly at devops@nuagenetworks.net.

    @@ -1131,31 +1079,30 @@

    Prerequisites / Requirements

Make sure you have unzipped copies of all the Nuage Networks files you are going to need for installation or upgrade. These are generally distributed as *.tar.gz files that you download from Nokia OLCS/ALED. There are a few ways to unzip them:

    • If you are running MetroAE via a clone of the nuage-metroae repo, you can unzip these files by using the nuage-unzip shell script nuage-unzip.sh which will place the files in subdirectories under the path specified for the nuage_unzipped_files_dir variable in common.yml.
    • -
    • If you are running MetroAE via the MetroAE container, you can unzip these files using the metroae command. During the setup, you were promoted for the location of an data directory on the host. This data directory is mounted in the container as /data. Therefore, for using the unzip action, you must 1) copy your tar.gz files to a directory under the directory you specified at setup time and 2) you must specify a container-relative path on the unzip command line. For example, if you specified the data directory as /tmp, setup created the directory /data/metroae_data on your host and mounted that same directory as /data in the container. Assuming you copied your tar.gz files to /tmp/metroae_data/6.0.1 on the docker host, your unzip command line would be as follows: metroae tools unzip images /data/6.0.1/ /data/6.0.1.
    • You can also unzip the files manually and copy them to their proper locations by hand. For details of this process, including the subdirectory layout that MetroAE expects, see SETUP.md.

    Use of MetroAE Command Line

    MetroAE can perform a workflow using the command-line tool as follows:

    -
    metroae <workflow> <component> [deployment] [options]
    +
    metroae-container <workflow> <component> [deployment] [options]
     
    • workflow: Name of the workflow to perform, e.g. ‘install’ or 'upgrade’. Supported workflows can be listed with --list option.
    • component: Name of the component to apply the workflow to, e.g. 'vsds’, 'vscs’, 'everything’, etc.
    • deployment: Name of the deployment directory containing configuration files. See CUSTOMIZE.md
    • -
    • options: Other options for the tool. These can be shown using --help. Also, any options not directed to the metroae tool are passed to Ansible.
    • +
    • options: Other options for the tool. These can be shown using --help. Also, any options not directed to the metroae-container tool are passed to Ansible.

    The following are some examples:

    -
    metroae install everything
    +
    metroae-container install everything
     

    Installs all components described in deployments/default/.

    -
    metroae destroy vsds east_network
    +
    metroae-container destroy vsds east_network
     

Takes down only the VSD components described by deployments/east_network/vsds.yml. Adding -vvv to the command displays additional output with 3 levels of verbosity.

    Deploy All Components

    MetroAE workflows operate on components as you have defined them in your deployment. If you run a workflow for a component not specified, the workflow skips all tasks associated with that component and runs to completion without error. Thus, if you run the install everything workflow when only VRS configuration is present, the workflow deploys VRS successfully while ignoring the tasks for the other components not specified. Deploy all specified components with one command as follows:

    -
    metroae install everything
    +
    metroae-container install everything
     
    -

    Note: metroae is a shell script that executes ansible-playbook with the proper includes and command line switches. Use metroae (instead of ansible-playbook) when running any of the workflows provided herein.

    +

    Note: metroae-container is a shell script that executes ansible-playbook with the proper includes and command line switches. Use metroae-container (instead of ansible-playbook) when running any of the workflows provided herein.

    Deploy Individual Modules

    MetroAE offers modular execution models in case you don’t want to deploy all components together. See modules below.

    @@ -1163,39 +1110,39 @@

    Deploy Individual Modules

| Module | Command | Description |
| ------ | ------- | ----------- |
- | VCS | metroae install vsds | Installs VSD components |
- | VNS | metroae install vscs | Installs VSC components |
+ | VCS | metroae-container install vsds | Installs VSD components |
+ | VNS | metroae-container install vscs | Installs VSC components |

    Install a Particular Role or Host

    MetroAE has a complete library of workflows, which are directly linked to each individual role. You can limit your deployment to a particular role or component, or you can skip steps you are confident need not be repeated. For example, to deploy only the VSD VM-images and get them ready for VSD software installation, run:

    -
    metroae install vsds predeploy
    +
    metroae-container install vsds predeploy
     

    To limit your deployment to a particular host, just add --limit parameter:

    -
    metroae install vsds predeploy --limit "vsd1.example.com"
    +
    metroae-container install vsds predeploy --limit "vsd1.example.com"
     

VSD predeploy can take a long time. If you are a vCenter user, you may want to monitor progress via the vCenter console.

    Note: If you have an issue with a VM and would like to reinstall it, you must destroy it before you replace it. Otherwise, the install will find the first one still running and skip the new install.

    Copy QCOW2 Files before Deployment

    When installing or upgrading in a KVM environment, MetroAE copies the QCOW2 image files to the target file server during the predeploy phase. As an option, you can pre-position the qcow2 files for all the components by running copy_qcow2_files. This gives the ability to copy the required images files first and then run install or upgrade later.

When QCOW2 files are pre-positioned, you must pass the extra-vars 'skip_copy_images' on the command line during the deployment phase to indicate that copying the QCOW2 files should be skipped. Otherwise, the QCOW2 files will be copied again. For example, to pre-position the QCOW2 images, run:

    -
    metroae tools copy qcow
    +
    metroae-container tools copy qcow
     

    Then, to skip the image copy during the install:

    -
    metroae install everything --extra-vars skip_copy_images=True
    +
    metroae-container install everything --extra-vars skip_copy_images=True
     

    Deploy the Standby Clusters

    MetroAE can be used to bring up the Standby VSD and VSTAT(ES) cluster in situations where the active has already been deployed. This can be done using the following commands. For VSD Standby deploy

    -
    metroae install vsds standby predeploy
    -metroae install vsds standby deploy
    +
    metroae-container install vsds standby predeploy
    +metroae-container install vsds standby deploy
     

    For Standby VSTATs(ES)

    -
    metroae install vstats standby predeploy
    -metroae install vstats standby deploy
    +
    metroae-container install vstats standby predeploy
    +metroae-container install vstats standby deploy
     

    Setup a Health Monitoring Agent

A health monitoring agent can be set up on compatible components during the deploy step. Currently this support includes the Zabbix agent. An optional parameter health_monitoring_agent can be specified on each component in the deployment files to enable setup. When enabled, the agent is downloaded, installed and configured to be operational during each component's deploy step. The agent can also be installed separately, outside of the deploy role, using the following command:

    -
    metroae health monitoring setup
    +
    metroae-container health monitoring setup
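As a rough sketch (the parameter name comes from the description above; the exact value and deployment file layout may differ in your environment):

    # Hypothetical fragment of a component deployment file, e.g. deployments/default/vsds.yml:
    #   health_monitoring_agent: zabbix   # assumed value; enables agent setup for this component
    #
    # Then either rerun the component deploy step or install the agent separately:
    metroae-container health monitoring setup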
     

    Debugging

    By default, ansible.cfg tells ansible to log to ./ansible.log.
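For example, you can follow the log while a workflow runs, or re-run a step with extra verbosity (the component and verbosity level here are only illustrative):

    tail -f ansible.log
    metroae-container install vsds deploy -vvv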

    @@ -1229,18 +1176,18 @@

    2. Remove Components

    You have the option of removing the entire deployment or only specified individual components.

    Remove All Components

    Remove the entire existing deployment with one command as follows:

    -
    metroae destroy everything
    +
    metroae-container destroy everything
     
    -

    Note: you may alternate between metroae install everything and metroae destroy everything as needed.

    +

    Note: you may alternate between metroae-container install everything and metroae-container destroy everything as needed.

    Remove Individual Components

    Alternatively, you can remove individual components (VSD, VSC, VRS, etc) as needed. See VSC example below for details.

    Example Sequence for VSC:

    Configure components under your deployment
    -Run metroae install everything to deploy VSD, VSC, VRS, etc.
    +Run metroae-container install everything to deploy VSD, VSC, VRS, etc.
    Discover that something needs to be changed in the VSCs
    -Run metroae destroy vscs to tear down just the VSCs
    +Run metroae-container destroy vscs to tear down just the VSCs
    Edit vscs.yml in your deployment to fix the problem
    -Run metroae install vsc predeploy, metroae install vscs deploy, and metroae install vscs postdeploy to get the VSCs up and running again.

+Run metroae-container install vscs predeploy, metroae-container install vscs deploy, and metroae-container install vscs postdeploy to get the VSCs up and running again.

    Questions, Feedback, and Contributing

    Get support via the forums on the MetroAE site.
    Ask questions and contact us directly at devops@nuagenetworks.net.

    @@ -1250,56 +1197,7 @@

    Questions, Feedback, and Contributing


    MetroAE Config

    -

    MetroAE Config is a template-driven VSD configuration tool. It utilizes the VSD API along with a set of common Feature Templates to create configuration in the VSD. The user will create a yaml or json data file with the necessary configuration parameters for the particular feature and execute simple CLI commands to push the configuration into VSD.

    -

    MetroAE Config is available via the MetroAE container. Once pulled and setup, additional documentation will be available.

    -

    Configuration Engine Installation

    -

    MetroAE Configuration engine is provided as one of the capabilities of the MetroAE Docker container. The installation of the container is handled via the metroae script. Along with the configuration container we also require some additional data.

    -

    On a host where the configuration engine will be installed the following artifacts will be installed:

    -
      -
    • Docker container for configuration
    • -
    • Collection of configuration Templates
    • -
    • VSD API Specification
    • -
    -

    System Requirements

    -
      -
    • Operating System: RHEL/Centos 7.4+
    • -
    • Docker: Docker Engine 1.13.1+
    • -
    • Network Access: Internal and Public
    • -
    • Storage: At least 800MB
    • -
    -

    Operating system

    -

    The primary requirement for the configuration container is Docker Engine, however the installation, container operation and functionality is only tested on RHEL/Centos 7.4. Many other operating systems will support Docker Engine, however support for these is not currently provided and would be considered experimental only. A manual set of installation steps is provided for these cases.

    -

    Docker Engine

    -

    The configuration engine is packaged into the MetroAE Docker container. This ensures that all package and library requirements are self-contained with no other host dependencies. To support this, Docker Engine must be installed on the host. The configuration container requirements, however, are quite minimal. Primarily, the Docker container mounts a local path on the host as a volume while ensuring that any templates and user data are maintained only on the host. The user never needs to interact directly with the container.

    -

    Network Access

    -

    Currently the configuration container is hosted in an internal Docker registry and the public network as a secondary option, while the Templates and API Spec are hosted only publicly. The install script manages the location of these resources. The user does not need any further information. However, without public network access the installation will fail to complete.

    -

    Storage

    -

    The configuration container along with the templates requires 800MB of local disk space. Note that the container itself is ~750MB, thus it is recommended that during install a good network connection is available.

    -

    User Privileges

    -

    The user must have elevated privileges on the host system.

    -

    Installation

    -

    Execute the following operations on the Docker host:

    -
      -
    1. Install Docker Engine

      -

      Various instructions for installing and enabling Docker are available on the Internet. One reliable source of information for Docker CE is hosted here:

      -

      https://docs.docker.com/engine/install/centos/

    2. -
    3. Add the user to the wheel and docker groups on the Docker host.

    4. -
    5. Move or Copy the “metroae” script from the nuage-metroae repo to /usr/bin and set permissions correctly to make the script executable.

    6. -
    7. Switch to the user that will operate "metroae config".

    8. -
    9. Setup the container using the metroae script.

      -

      We are going to pull the image and setup the metro container in one command below. During the install we will be prompted for a location for the container data directory. This is the location where our user data, templates and VSD API specs will be installed and created and a volume mount for the container will be created. However this can occur orthogonally via “pull” and “setup” running at separate times which can be useful depending on available network bandwidth.

      -

      [caso@metroae-host ~]$ metroae container setup

      -

      The MetroAE container needs access to your user data. It gets access by internally mounting a directory from the host. We refer to this as the 'data directory’. The data directory is where you will have deployments, templates, documentation, and other useful files. You will be prompted to provide the path to a local directory on the host that will be mounted inside the container. You will be prompted during setup for this path.

      -

      The MetroAE container can be setup in one of the following configurations:

      -
        -
      • Config only
      • -
      • Deploy only
      • -
      • Both config and deploy
      • -
      -

      During setup, you will be prompted to choose one of these options.

    10. -
    11. Follow additional insturctions found in the documentaion that is copied to the Docker host during setup.

    12. -
    -

    Complete documentation will be made available in the data directory you specify during setup. The complete documentation includes how to configure your environment, usage information for the tool, and more.

    +

MetroAE Config is no longer bundled with MetroAE. Please refer to https://github.com/nuagenetworks/nuage-metroae-config for information on how to use MetroAE Config.


    @@ -1334,33 +1232,33 @@

    Example Deployment

    Upgrading Automatically

If your topology does not include VRS, you can upgrade everything with one command. If it does include VRS, you can upgrade everything with two commands. MetroAE also gives you the option of upgrading individual components with a single command for each. If you prefer to have more control over each step in the upgrade process, proceed to Upgrading By Individual Steps for instructions.

    Upgrade All Components (without VRS)

    -
     metroae upgrade everything
    +
     metroae-container upgrade everything
     

    Issuing this workflow will detect if components are clustered (HA) or not and will upgrade all components that are defined in the deployment. This option does not pause until completion to allow VRS(s) to be upgraded. If VRS(s) need to be upgraded, the following option should be performed instead.

    Upgrade All Components (with VRS)

    -
     metroae upgrade beforevrs
    +
     metroae-container upgrade beforevrs
     
      ( Upgrade the VRS(s) )
     
    - metroae upgrade aftervrs
    + metroae-container upgrade aftervrs
     

    Issuing the above workflows will detect if components are clustered (HA) or not and will upgrade all components that are defined in the deployment. This option allows the VRS(s) to be upgraded in-between other components.

    Upgrade Individual Components

    -
     metroae upgrade preupgrade health
    +
     metroae-container upgrade preupgrade health
     
    - metroae upgrade vsds
    + metroae-container upgrade vsds
     
    - metroae upgrade vscs beforevrs
    + metroae-container upgrade vscs beforevrs
     
      ( Upgrade the VRS(s) )
     
    - metroae upgrade vscs aftervrs
    + metroae-container upgrade vscs aftervrs
     
    - metroae upgrade vstats
    + metroae-container upgrade vstats
     
    - metroae upgrade postdeploy
    + metroae-container upgrade postdeploy
     
    - metroae upgrade postupgrade health
    + metroae-container upgrade postupgrade health
     

    Issuing the above workflows will detect if components are clustered (HA) or not and will upgrade all components that are defined in the deployment. This option allows the VRS(s) to be upgraded in-between other components. Performing individual workflows can allow specific components to be skipped or upgraded at different times.

    Upgrading By Individual Steps

    @@ -1368,37 +1266,37 @@

    Upgrading By Individual Steps

    Preupgrade Preparations

    1. Run health checks on VSD, VSC and VSTAT.

      -

      metroae upgrade preupgrade health

      +

      metroae-container upgrade preupgrade health

      Check the health reports carefully for any reported errors before proceeding. You can run health checks at any time during the upgrade process.

    2. Backup the VSD node database.

      -

      metroae upgrade sa vsd dbbackup

      +

      metroae-container upgrade sa vsd dbbackup

      The VSD node database is backed up.

      Troubleshooting: If you experience a failure you can re-execute the command.

Note: MetroAE provides a simple tool for optionally cleaning up the backup files that are generated during the upgrade process. The tool deletes the backup files for both VSD and VSC. There are two modes for clean-up: the first deletes all the backups and the second deletes only the latest backup. By default the tool deletes only the latest backup. If you'd like to clean up the backup files, you can simply run the commands below: -Clean up all the backup files: metroae vsp_upgrade_cleanup -e delete_all_backups=true -Clean up the latest backup files: metroae vsp_upgrade_cleanup

    3. +Clean up all the backup files: metroae-container vsp_upgrade_cleanup -e delete_all_backups=true +Clean up the latest backup files: metroae-container vsp_upgrade_cleanup

    Upgrade VSD

    1. Power off the VSD node.

      -

      metroae upgrade sa vsd shutdown

      +

      metroae-container upgrade sa vsd shutdown

      VSD is shut down; it is not deleted. (The new node is brought up with the upgrade_vmname you previously specified.) You have the option of powering down VSD manually instead.

      Troubleshooting: If you experience a failure you can re-execute the command or power off the VM manually.

    2. Predeploy the new VSD node.

      -

      metroae install vsds predeploy

      +

      metroae-container install vsds predeploy

      The new VSD node is now up and running; it is not yet configured.

      -

      Troubleshooting: If you experience a failure, delete the new node by executing the command metroae upgrade destroy sa vsd, then re-execute the predeploy command. Do NOT run metroae destroy vsds as this command destroys the “old” VM which is not what we want to do here.

    3. +

      Troubleshooting: If you experience a failure, delete the new node by executing the command metroae-container upgrade destroy sa vsd, then re-execute the predeploy command. Do NOT run metroae-container destroy vsds as this command destroys the “old” VM which is not what we want to do here.

    4. Deploy the new VSD node.

      -

      metroae upgrade sa vsd deploy

      +

      metroae-container upgrade sa vsd deploy

      The VSD node is upgraded.

      -

      Troubleshooting: If you experience a failure before the VSD install script runs, re-execute the command. If it fails a second time or if the failure occurs after the VSD install script runs, destroy the VMs (either manually or with the command metroae upgrade destroy sa vsd) then re-execute the deploy command. Do NOT run metroae destroy vsds for this step.

    5. +

      Troubleshooting: If you experience a failure before the VSD install script runs, re-execute the command. If it fails a second time or if the failure occurs after the VSD install script runs, destroy the VMs (either manually or with the command metroae-container upgrade destroy sa vsd) then re-execute the deploy command. Do NOT run metroae-container destroy vsds for this step.

    6. Set the VSD upgrade complete flag.

      -

      metroae upgrade sa vsd complete

      +

      metroae-container upgrade sa vsd complete

      The upgrade flag is set to complete.

      Troubleshooting: If you experience a failure, you can re-execute the command.

    7. Apply VSD license (if needed)

      -

      metroae vsd license

      +

      metroae-container vsd license

      The VSD license will be applied.

    8. Log into the VSD and verify the new version.

    @@ -1406,17 +1304,17 @@

    Upgrade VSC

    This example is for one VSC node. If your topology has more than one VSC node, proceed to the Upgrade VSC Node One section of UPGRADE_HA.md and follow those instructions through to the end.

    1. Run VSC health check (optional).

      -

      metroae upgrade sa vsc health -e report_filename=vsc_preupgrade_health.txt

      +

      metroae-container upgrade sa vsc health -e report_filename=vsc_preupgrade_health.txt

      You performed health checks during preupgrade preparations, but it is good practice to run the check here as well to make sure the VSD upgrade has not caused any problems.

    2. Backup and prepare the VSC node.

      -

      metroae upgrade sa vsc backup

      +

      metroae-container upgrade sa vsc backup

      Troubleshooting: If you experience failure, you can re-execute the command.

    3. Deploy VSC.

      -

      metroae upgrade sa vsc deploy

      +

      metroae-container upgrade sa vsc deploy

      The VSC is upgraded.

Troubleshooting: If you experience a failure, you can re-execute the command. If it fails a second time, manually copy a valid .tim file to the VSC to effect the deployment. If that fails, deploy a new VSC using the old version, or recover the VM from a backup. You can use MetroAE for the deployment (vsc_predeploy, vsc_deploy, vsc_postdeploy…).

    4. Run VSC postdeploy.

      -

      metroae upgrade sa vsc postdeploy

      +

      metroae-container upgrade sa vsc postdeploy

      VSC upgrade is complete.

Troubleshooting: If you experience a failure, you can re-execute the command. If it fails a second time, manually copy a valid .tim file to the VSC to effect the deployment. If that fails, deploy a new VSC using the old version, or recover the VM from a backup. You can use MetroAE for the deployment (vsc_predeploy, vsc_deploy, vsc_postdeploy…).

    @@ -1426,26 +1324,26 @@

    Upgrade VSTAT

    Our example includes a VSTAT node. If your topology does not include one, proceed to Finalize the Upgrade below.

    1. Run VSTAT health check (optional).

      -

      metroae upgrade sa vstat health -e report_filename=vstat_preupgrade_health.txt

      +

      metroae-container upgrade sa vstat health -e report_filename=vstat_preupgrade_health.txt

      You performed health checks during preupgrade preparations, but it is good practice to run the check here as well to make sure the VSD upgrade has not caused any problems.

    2. Prepare the VSTAT node for upgrade.

      -

      metroae upgrade sa vstat prep

      +

      metroae-container upgrade sa vstat prep

      Sets up SSH and disables stats collection.

    3. Upgrade the VSTAT node.

      -

      metroae upgrade sa vstat inplace

      +

      metroae-container upgrade sa vstat inplace

      Performs an in-place upgrade of the VSTAT.

    4. Complete VSTAT upgrade and perform post-upgrade checks.

      -

      metroae upgrade sa vstat wrapup

      +

      metroae-container upgrade sa vstat wrapup

Completes the upgrade process, re-enables stats and performs a series of checks to ensure the VSTAT is healthy.

    Finalize the Upgrade

    1. Finalize the settings.

      -

      metroae upgrade postdeploy

      +

      metroae-container upgrade postdeploy

      The final steps for the upgrade are executed.

      Troubleshooting: If you experience a failure, you can re-execute the command.

    2. Run a health check.

      -

      metroae upgrade postupgrade health

      +

      metroae-container upgrade postupgrade health

      Health reports are created that can be compared with the ones produced during preupgrade preparations. Investigate carefully any errors or discrepancies.

    Questions, Feedback, and Contributing

    @@ -1479,10 +1377,10 @@

    Patch Upgrade for VSD, AKA in-place upgrade

    Active/Standby cluster upgrade

    You can use MetroAE to upgrade Active/Standby VSD clusters, also known as geo-redundant clusters. You can also use MetroAE to upgrade Active/Standby VSTAT (ES) clusters. The support for this is built into the upgrade_everything, upgrade_vsds, and upgrade_vstats plays. A step-by-step manual procedure is supported, but is not documented here. See Upgrading By Individual Steps for more information.

If you want to perform a standby VSD cluster in-place upgrade only, you can use the following command.

    -

    metroae upgrade vsds standby inplace

    +

    metroae-container upgrade vsds standby inplace

    VSD Stats-out upgrade

    By default, Nuage VSD and VSTAT components are deployed in what is referred to as ‘stats-in’ mode. This refers to the fact that the stats collector process that feeds data to the ES cluster runs ‘in’ the VSDs. An alternative to this deployment, installation of which is also supported by MetroAE, is a ‘stats-out’ mode. In 'stats-out’, three additional VSDs are deployed specifically to handle the stats collection. We refer to those extra VSD nodes as VSD stats-out nodes. In such a case, the stats collection work is not running ‘in’ the regular VSD cluster. Stats collection is done ‘out’ in the cluster of 3 VSD stats-out nodes. ES nodes are also deployed in a special way, with 3 ES nodes in a cluster and 3+ ES nodes configured as ‘data’ nodes. You can find out more detail about the deployments in the Nuage documentation.

    -

    You can use MetroAE to install or upgrade upgrade your stats-out configuration. Special workflows have been created to support the stats-out upgrade. These special workflows have been included automatically in the metroae upgrade everything command. Alternatively you can use the step-by-step upgrade procedure to perform your upgrade. The metroae upgrade vsd stats command will handle upgrading the separate VSD stats-out nodes. The metroae upgrade vsd stats inplace command will apply a patch (in-place) upgrade of the VSD stats-out nodes.

    +

You can use MetroAE to install or upgrade your stats-out configuration. Special workflows have been created to support the stats-out upgrade. These special workflows have been included automatically in the metroae-container upgrade everything command. Alternatively, you can use the step-by-step upgrade procedure to perform your upgrade. The metroae-container upgrade vsd stats command will handle upgrading the separate VSD stats-out nodes. The metroae-container upgrade vsd stats inplace command will apply a patch (in-place) upgrade of the VSD stats-out nodes.

    Note: Upgrade of the VSD stats-out nodes should take place only after the primary VSD cluster and all Elasticsearch nodes have been upgraded.

A patch upgrade of the stats-out nodes can also be done by running the upgrade_vsd_stats_inplace procedure.

    Example Deployment

    @@ -1496,33 +1394,33 @@

    Example Deployment

    Upgrading Automatically

    If your upgrade plans do not include upgrading VRSs or other dataplane components, you can upgrade everything with one command. If your upgrade plans do include VRSs or other dataplane components, you can upgrade everything with two commands. MetroAE also gives you the option of upgrading all instances of a component type, e.g. VSC, with a single command for each component type. If you prefer to have more control over each step in the upgrade process proceed to Upgrading By Individual Steps for instructions.

    Upgrade All Components including Active/Standby clusters (does not pause for external VRS/dataplane upgrade)

    -
     metroae upgrade everything
    +
     metroae-container upgrade everything
     

    Issuing this workflow will detect if components are clustered (HA) or not and will upgrade all components that are defined in the deployment. This option does not pause until completion to allow VRSs and other dataplane components to be upgraded. If dataplane components need to be upgraded, the following option should be performed instead.

    Upgrade All Components including Active/Standby clusters (includes pause for VRS)

    -
     metroae upgrade beforevrs
    +
     metroae-container upgrade beforevrs
     
( Upgrade the VRSs and other dataplane components )
     
    - metroae upgrade aftervrs
    + metroae-container upgrade aftervrs
     

    Issuing the above workflows will detect if components are clustered (HA) or not and will upgrade all components that are defined in the deployment. This option allows the VRSs and other dataplane components to be upgraded between other components.

    Upgrade Individual Components including Active Standby clusters

    -
     metroae upgrade preupgrade health
    +
     metroae-container upgrade preupgrade health
     
    - metroae upgrade vsds
    + metroae-container upgrade vsds
     
    - metroae upgrade vscs beforevrs
    + metroae-container upgrade vscs beforevrs
     
      ( Upgrade the VRS(s) )
     
    - metroae upgrade vscs aftervrs
    + metroae-container upgrade vscs aftervrs
     
    - metroae upgrade vstats
    + metroae-container upgrade vstats
     
    - metroae upgrade postdeploy
    + metroae-container upgrade postdeploy
     
    - metroae upgrade postupgrade health
    + metroae-container upgrade postupgrade health
     

    Issuing the above workflows will detect if components are clustered (HA) or not and will upgrade all components that are defined in the deployment. This option allows the VRS(s) to be upgraded in-between other components. Performing individual workflows can allow specific components to be skipped or upgraded at different times.

    Upgrading By Individual Steps not including Active/Standby clusters

    @@ -1530,66 +1428,66 @@

    Upg

    Preupgrade Preparations

    1. Run health checks on VSD, VSC and VSTAT.

      -

      metroae upgrade preupgrade health

      +

      metroae-container upgrade preupgrade health

      Check the health reports carefully for any reported errors before proceeding. You can run health checks at any time during the upgrade process.

    2. Backup the database and decouple the VSD cluster.

      -

      metroae upgrade ha vsd dbbackup

      +

      metroae-container upgrade ha vsd dbbackup

      vsd_node1 has been decoupled from the cluster and is running in standalone (SA) mode.

      Troubleshooting: If you experience a failure, recovery depends on the state of vsd_node1. If it is still in the cluster, you can re-execute the command. If not, you must redeploy vsd_node1 from a backup or otherwise recover.

Note: MetroAE provides a simple tool for optionally cleaning up the backup files that are generated during the upgrade process. The tool deletes the backup files for both VSD and VSC. There are two modes for clean-up: the first deletes all the backups and the second deletes only the latest backup. By default the tool deletes only the latest backup. If you'd like to clean up the backup files, you can simply run the commands below: -Clean up all the backup files: metroae upgrade cleanup -e delete_all_backups=true -Clean up the latest backup files: metroae upgrade cleanup

    3. +Clean up all the backup files: metroae-container upgrade cleanup -e delete_all_backups=true +Clean up the latest backup files: metroae-container upgrade cleanup

    Upgrade VSD

    1. Power off VSD nodes two and three.

      -

      metroae upgrade ha vsd shutdown23

      +

      metroae-container upgrade ha vsd shutdown23

      vsd_node2 and vsd_node3 are shut down; they are not deleted. The new nodes are brought up with the upgrade vmnames you previously specified. You have the option of powering down VSDs manually instead.

      Troubleshooting: If you experience a failure you can re-execute the command or power off the VM manually.

    2. Predeploy new VSD nodes two and three.

      -

      metroae upgrade ha vsd predeploy23

      +

      metroae-container upgrade ha vsd predeploy23

      The new vsd_node2 and vsd_node3 are now up and running; they are not yet configured.

      -

      Troubleshooting: If you experience a failure, delete the newly-created nodes by executing the command metroae upgrade destroy ha vsd23, then re-execute the predeploy command. Do NOT run metroae destroy vsds as this command destroys the “old” VM which is not what we want to do here.

    3. +

      Troubleshooting: If you experience a failure, delete the newly-created nodes by executing the command metroae-container upgrade destroy ha vsd23, then re-execute the predeploy command. Do NOT run metroae-container destroy vsds as this command destroys the “old” VM which is not what we want to do here.

    4. Deploy new VSD nodes two and three.

      -

      metroae upgrade ha vsd deploy23

      +

      metroae-container upgrade ha vsd deploy23

      The VSD nodes have been upgraded.

      -

      Troubleshooting: If you experience a failure before the VSD install script runs, re-execute the command. If it fails a second time or if the failure occurs after the VSD install script runs, destroy the VMs (either manually or with the command metroae upgrade destroy ha vsd23) then re-execute the deploy command. Do NOT run metroae destroy vsds as this command destroys the “old” VM.

    5. +

      Troubleshooting: If you experience a failure before the VSD install script runs, re-execute the command. If it fails a second time or if the failure occurs after the VSD install script runs, destroy the VMs (either manually or with the command metroae-container upgrade destroy ha vsd23) then re-execute the deploy command. Do NOT run metroae-container destroy vsds as this command destroys the “old” VM.

    6. Power off VSD node one.

      -

      metroae upgrade ha vsd shutdown1

      +

      metroae-container upgrade ha vsd shutdown1

      vsd_node1 shuts down; it is not deleted. The new node is brought up with the upgrade_vmname you previously specified. You have the option of powering down VSD manually instead.

      Troubleshooting: If you experience a failure you can re-execute the command or power off the VM manually.

    7. Predeploy the new VSD node one.

      -

      metroae upgrade ha vsd predeploy1

      +

      metroae-container upgrade ha vsd predeploy1

      The new VSD node one is now up and running; it is not yet configured.

      -

      Troubleshooting: If you experience a failure, delete the newly-created node by executing the command metroae upgrade destroy ha vsd1, then re-execute the predeploy command. Do NOT run vsd_destroy as this command destroys the “old” VM.

    8. +

      Troubleshooting: If you experience a failure, delete the newly-created node by executing the command metroae-container upgrade destroy ha vsd1, then re-execute the predeploy command. Do NOT run vsd_destroy as this command destroys the “old” VM.

    9. Deploy the new VSD node one.

      -

      metroae upgrade ha vsd deploy1

      +

      metroae-container upgrade ha vsd deploy1

      All three VSD nodes are upgraded.

      -

      Troubleshooting: If you experience a failure before the VSD install script runs, re-execute the command. If it fails a second time or if the failure occurs after the VSD install script runs, destroy the VMs (either manually or with the command metroae upgrade destroy ha vsd1) then re-execute the deploy command. Do NOT run metroae destroy vsds as this command destroys the “old” VM.

    10. +

      Troubleshooting: If you experience a failure before the VSD install script runs, re-execute the command. If it fails a second time or if the failure occurs after the VSD install script runs, destroy the VMs (either manually or with the command metroae-container upgrade destroy ha vsd1) then re-execute the deploy command. Do NOT run metroae-container destroy vsds as this command destroys the “old” VM.

    11. Set the VSD upgrade complete flag.

      -

      metroae upgrade ha vsd complete

      +

      metroae-container upgrade ha vsd complete

      The upgrade flag is set to complete.

      Troubleshooting: If you experience a failure, you can re-execute the command.

    12. Apply VSD license (if needed)

      -

      metroae vsd license

      +

      metroae-container vsd license

      The VSD license will be applied.

    13. Log into the VSDs and verify the new versions.

    Upgrade VSC Node One

    1. Run VSC health check (optional).

      -

      metroae upgrade ha vsc health -e report_filename=vsc_preupgrade_health.txt

      +

      metroae-container upgrade ha vsc health -e report_filename=vsc_preupgrade_health.txt

      You performed health checks during preupgrade preparations, but it is good practice to run the check here as well to make sure the VSD upgrade has not caused any problems.

    2. Backup and prepare VSC node one.

      -

      metroae upgrade ha vsc backup1

      +

      metroae-container upgrade ha vsc backup1

      Troubleshooting: If you experience a failure, you can re-execute the command.

    3. Deploy VSC node one.

      -

      metroae upgrade ha vsc deploy1

      +

      metroae-container upgrade ha vsc deploy1

      VSC node one has been upgraded.

      Troubleshooting: If you experience a failure, you can re-execute the command. If it fails a second time, manually copy (via scp) the .tim file, bof.cfg, and config.cfg (that were backed up in the previous step), to the VSC. Then reboot the VSC.

    4. Run postdeploy for VSC node one.

      -

      metroae upgrade ha vsc postdeploy1

      +

      metroae-container upgrade ha vsc postdeploy1

      One VSC is now running the old version, and one is running the new version.

      Troubleshooting: If you experience a failure, you can re-execute the command. If it fails a second time, manually copy (via scp) the .tim file, bof.cfg, and config.cfg (that were backed up before beginning VSC upgrade), to the VSC. Then reboot the VSC.

    @@ -1598,14 +1496,14 @@

    Upgrade VRS

    Upgrade VSC Node Two

    1. Backup and prepare VSC node two.

      -

      metroae upgrade ha vsc backup2

      +

      metroae-container upgrade ha vsc backup2

      Troubleshooting: If you experience a failure, you can re-execute the command.

    2. Deploy VSC node two.

      -

      metroae upgrade ha vsc deploy2

      +

      metroae-container upgrade ha vsc deploy2

      VSC node two has been upgraded.

      Troubleshooting: If you experience a failure, you can re-execute the command. If it fails a second time, manually copy (via scp) the .tim file, bof.cfg, and config.cfg (that were backed up before beginning VSC upgrade), to the VSC. Then reboot the VSC.

    3. Run postdeploy for VSC node two.

      -

      metroae upgrade ha vsc postdeploy2

      +

      metroae-container upgrade ha vsc postdeploy2

      Both VSCs are now running the new version.

      Troubleshooting: If you experience a failure, you can re-execute the command. If it fails a second time, manually copy (via scp) the .tim file, bof.cfg, and config.cfg (that were backed up before beginning VSC upgrade), to the VSC. Then reboot the VSC.

    @@ -1613,36 +1511,36 @@

    Upgrade VSTAT

    Our example includes a VSTAT node. If your topology does not include one, proceed to Finalize the Upgrade below.

    1. Run VSTAT health check (optional).

      -

      metroae upgrade ha vstat health -e report_filename=vstat_preupgrade_health.txt

      +

      metroae-container upgrade ha vstat health -e report_filename=vstat_preupgrade_health.txt

      You performed health checks during preupgrade preparations, but it is good practice to run the check here as well to make sure the VSD upgrade has not caused any problems.

    2. Prepare the VSTAT nodes for upgrade.

      -

      metroae upgrade ha vstat prep

      +

      metroae-container upgrade ha vstat prep

      Sets up SSH and disables stats collection.

    3. Upgrade the VSTAT nodes.

      -

      metroae upgrade ha vstat inplace

      +

      metroae-container upgrade ha vstat inplace

      Performs an in-place upgrade of the VSTAT nodes.

    4. Complete VSTAT upgrade and perform post-upgrade checks.

      -

      metroae upgrade ha vstat wrapup

      +

      metroae-container upgrade ha vstat wrapup

Completes the VSTAT upgrade process, re-enables stats, and performs a series of checks to ensure the VSTAT nodes are healthy.

    Upgrade Stats-out Nodes

    1. Upgrade the VSD stats-out nodes

      If the upgrade is a major upgrade, e.g., 6.0.* -> 20.10.* , use the following command to upgrade the VSD stats-out nodes:

      -

      metroae upgrade vsd stats

      +

      metroae-container upgrade vsd stats

      If the upgrade is a patch (in-place), e.g., 20.10.R1 -> 20.10.R4, first make sure that the main VSD cluster has been upgraded/patched. If the upgrade of the main VSD cluster hasn’t been done, you can use the following command to patch the main VSD cluster:

      -

      metroae upgrade vsds inplace

      +

      metroae-container upgrade vsds inplace

When you are certain that the main VSD cluster has been patched, you can use the following command to apply the patch to the VSD stats-out nodes:

      -

      metroae upgrade vsd stats inplace

    2. +

      metroae-container upgrade vsd stats inplace

    Finalize the Upgrade

    1. Finalize the settings

      -

      metroae upgrade postdeploy

      +

      metroae-container upgrade postdeploy

      The final steps for the upgrade are executed.

      Troubleshooting: If you experience a failure, you can re-execute the command.

    2. Run a health check.

      -

      metroae upgrade postupgrade health

      +

      metroae-container upgrade postupgrade health

      Health reports are created that can be compared with the ones produced during preupgrade preparations. Investigate carefully any errors or discrepancies.

    Questions, Feedback, and Contributing

    @@ -1659,13 +1557,13 @@

    Prerequisites / Requirements

    Before attempting to control the VSD services using MetroAE, you must configure a deployment with information about the VSDs. You also must have previously set up your Nuage MetroAE environment and customized the environment for your target platform.

    Use of MetroAE Command Line

    MetroAE can perform any of the following VSD service workflows using the command-line tool as follows:

    -
    metroae vsd services stop [deployment] [options]
    -metroae vsd services start [deployment] [options]
    -metroae vsd services restart [deployment] [options]
    +
    metroae-container vsd services stop [deployment] [options]
    +metroae-container vsd services start [deployment] [options]
    +metroae-container vsd services restart [deployment] [options]
     
    • deployment: Name of the deployment directory containing configuration files. See CUSTOMIZE.md
    • -
    • options: Other options for the tool. These can be shown using --help. Also, any options not directed to the metroae tool are passed to Ansible.
    • +
    • options: Other options for the tool. These can be shown using --help. Also, any options not directed to the metroae-container tool are passed to Ansible.

    Note: The VSD services workflows can be used even if you didn’t use MetroAE to install or upgrade your VSD deployment.
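For example (assuming a deployment directory named east_network, as used elsewhere in this document):

    metroae-container vsd services stop east_network
    metroae-container vsd services start east_network -vvv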

    Questions, Feedback, and Contributing

    @@ -1715,12 +1613,12 @@

2. Configure or create a new deployment as if you intend to install the original version.
  • If necessary, provide unique VM names in vsds.yml.
3. Run the following command to bring up new copies of the original VMs (e.g. 5.4.1).

    -

    metroae install vsds predeploy <deployment name>

    +

    metroae-container install vsds predeploy <deployment name>

4. Manually copy (via scp) the pre-upgrade backup to /opt/vsd/data on VSD 1 (see the sketch after this procedure).

    5. Run the following command to start the installation of the VSD software on the VSD VM. This will also restore the backup that you copied to the first VSD.

    -

    metroae install vsds deploy <deployment name>

    +

    metroae-container install vsds deploy <deployment name>

    6. Run the following command to run sanity and connectivity checks on the VSDs:

    -

    metroae install vsds postdeploy <deployment name>

    +

    metroae-container install vsds postdeploy <deployment name>

    At this point, your original VSD configuration should be restored and up and running.
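The sketch below illustrates step 4 above; the backup file name and VSD hostname are placeholders:

    scp vsd_backup.tar.gz root@vsd1.example.com:/opt/vsd/data/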

    Questions, Feedback, and Contributing

    Get support via the forums on the MetroAE site.
    @@ -1798,18 +1696,18 @@

    SD-WAN Portal deployment using MetroAE

    Currently the following workflows are supported:

      -
    • metroae install portal - Deploy Portal VM(s) on KVM hypervisor and install the application
    • -
    • metroae install portal predeploy - Prepares the HV and deploys Portal VMs
    • -
    • metroae install portal deploy - Installs Docker-CE, SD-WAN Portal on the already prepared VMs
    • -
    • metroae install portal postdeploy - To be updated. Includes a restart and license update task
    • -
    • metroae install portal license - Copies the license file to the Portal VM(s) and restarts the Portal(s)
    • -
    • metroae destroy portal - Destroys Portal VMs and cleans up the files from hypervisor(s)
    • -
    • metroae upgrade portal - Upgrade Portal VM(s) on KVM hypervisor
    • -
    • metroae upgrade portal preupgrade health - Performs prerequisite and health checks of a Portal VM or cluster before initiating an upgrade
    • -
    • metroae upgrade portal shutdown - Performs database backup if necessary, Portal VM snapshot and stops all services
    • -
    • metroae upgrade portal deploy - Performs an install of the new SD-WAN Portal version
    • -
    • metroae upgrade portal postdeploy - Performs post-upgrade checks to verify Portal VM health, cluster status, and verify successful upgrade
    • -
    • metroae rollback portal - In the event of an unsuccessful upgrade, Portal(s) can be rolled back to the previously installed software version.
    • +
    • metroae-container install portal - Deploy Portal VM(s) on KVM hypervisor and install the application
    • +
    • metroae-container install portal predeploy - Prepares the HV and deploys Portal VMs
    • +
    • metroae-container install portal deploy - Installs Docker-CE, SD-WAN Portal on the already prepared VMs
    • +
    • metroae-container install portal postdeploy - To be updated. Includes a restart and license update task
    • +
    • metroae-container install portal license - Copies the license file to the Portal VM(s) and restarts the Portal(s)
    • +
    • metroae-container destroy portal - Destroys Portal VMs and cleans up the files from hypervisor(s)
    • +
    • metroae-container upgrade portal - Upgrade Portal VM(s) on KVM hypervisor
    • +
    • metroae-container upgrade portal preupgrade health - Performs prerequisite and health checks of a Portal VM or cluster before initiating an upgrade
    • +
    • metroae-container upgrade portal shutdown - Performs database backup if necessary, Portal VM snapshot and stops all services
    • +
    • metroae-container upgrade portal deploy - Performs an install of the new SD-WAN Portal version
    • +
    • metroae-container upgrade portal postdeploy - Performs post-upgrade checks to verify Portal VM health, cluster status, and verify successful upgrade
    • +
    • metroae-container rollback portal - In the event of an unsuccessful upgrade, Portal(s) can be rolled back to the previously installed software version.

    Example deployment files are available under examples/kvm_portal_install
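For example (a hypothetical invocation; the deployment name is a placeholder and the example files above can be used as a starting point):

    metroae-container install portal my_portal_deployment -vvv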

    Portal Install

    @@ -1958,7 +1856,7 @@

    Alternative Specification for NSGv Only Deployments

    The CIDRs for the VPC, WAN interface, LAN interface and private subnet must be specified. When provisioning a VPC in this way, the elastic network interface identifiers aws_data_eni and aws_access_eni for the NSGv do not need to be specified as they are discovered from the created VPC.

    6. Deploy Components

    After you have set up the environment and configured your components, you can use MetroAE to deploy your components with a single command.

    -
    metroae install everything
    +
    metroae-container install everything
     

    Alternatively, you can deploy individual components or perform individual tasks such as predeploy, deploy and postdeploy. See DEPLOY.md for details.

    Questions, Feedback, and Contributing

    @@ -2000,12 +1898,12 @@

4. Add Terraform Provisioner resource definitions...

    provisioner "local-exec" {
-     command = "./metroae install vsds deploy"
+     command = "./metroae-container install vsds deploy"
      working_dir = "<metro-directory>/nuage-metroae"
    }
    provisioner "local-exec" {
-     command = "./metroae install vsds postdeploy"
+     command = "./metroae-container install vsds postdeploy"
      working_dir = "<metro-directory>/nuage-metroae"
    }
}

@@ -2120,7 +2018,7 @@

    Overview

    1. Create the proper files and subdirectories (see below) in a directory of your choosing
    2. Package the plugin using the package-plugin.sh script
-3. Install the plugin using the metroae plugin install command
+3. Install the plugin using the metroae-container plugin install command
4. Add the proper data files, if required, to your deployment directory
5. Execute your role
    @@ -2135,12 +2033,12 @@

    Package a Plugin

    This will create a tarball of the plugin ready for distribution to users.

    Install a Plugin

    Users who wish to install the plugin can issue:

    -
    ./metroae plugin install <plugin-tarball-or-directory>
    +
    ./metroae-container plugin install <plugin-tarball-or-directory>
     

    This should be issued from the nuage-metro repo or container. Note that a tarball or unzipped directory can both be installed.
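For example (the tarball name is a placeholder):

    ./metroae-container plugin install my-plugin.tar.gz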

    Uninstall a Plugin

    To uninstall a plugin, the user can issue:

    -
    ./metroae plugin uninstall <plugin-name>
    +
    ./metroae-container plugin uninstall <plugin-name>
     

    This should be issued from the nuage-metro repo or container. Uninstall is by plugin name as all installed files were recorded and will be rolled back.

    Examples

    diff --git a/documentation.pdf b/documentation.pdf index a45a625917..eb77d9e644 100644 Binary files a/documentation.pdf and b/documentation.pdf differ diff --git a/encrypt_credentials.py b/encrypt_credentials.py index 43425049eb..977153c126 100755 --- a/encrypt_credentials.py +++ b/encrypt_credentials.py @@ -6,8 +6,10 @@ import jinja2 import os from ansible.parsing.vault import VaultEditor, VaultSecret, is_encrypted +from openpyxl import load_workbook from generate_example_from_schema import ExampleFileGenerator from builtins import str +from io import open DEPLOYMENT_DIR = 'deployments' @@ -16,12 +18,126 @@ class VaultYaml(str): pass -def vault_constructor(loader, node): - return node.value +class ExcelParseError(Exception): + pass -def literal_unicode_representer(dumper, data): - return dumper.represent_scalar("!vault", data, style='|') +class ExcelParser(object): + + def __init__(self): + self.settings = { + "column_offset": 1, + "row_offset": 4, + "row_sections_present": True, + "use_list_name": False, + "default_fields_by_col": True} + + self.errors = list() + self.cell_positions = dict() + + def read_and_encrypt_credentials_excel_sheet(self, passcode, spreadsheet_path): + schema_data = read_file('schemas/credentials.json') + props = schema_data['items']['properties'] + title_field_map = self.generate_title_field_map(props) + fields_by_col = self.settings["default_fields_by_col"] + workbook = load_workbook(spreadsheet_path) + worksheet = workbook['Credentials'] + labels = self.read_labels(worksheet, title_field_map, + fields_by_col=fields_by_col) + + entry_offset = 0 + do_not_encrypt_list = get_do_not_encrypt_list() + while True: + self.cell_positions.clear() + entry = self.read_and_encrypt_data_entry(workbook, spreadsheet_path, worksheet, + labels, entry_offset, do_not_encrypt_list, + passcode, fields_by_col=fields_by_col) + + if entry != dict(): + entry_offset += 1 + else: + break + + def generate_title_field_map(self, properties): + title_field_map = dict() + for name, field in iter(properties.items()): + if "type" in field and field["type"] == "array": + title_field_map[field["title"]] = "list:" + name + else: + title_field_map[field["title"]] = name + + return title_field_map + + def read_labels(self, worksheet, title_field_map, fields_by_col=False): + labels = list() + + col = self.settings["column_offset"] + row = self.settings["row_offset"] + + if self.settings["row_sections_present"] and fields_by_col: + row += 1 + + while True: + cell = worksheet.cell(row=row, column=col) + value = cell.value + if fields_by_col: + col += 1 + else: + row += 1 + + if value is not None: + if value in title_field_map: + labels.append(title_field_map[value]) + else: + labels.append(None) + else: + break + + return labels + + def read_and_encrypt_data_entry(self, workbook, spreadsheet_path, worksheet, labels, entry_offset, + do_not_encrypt_list, passcode, fields_by_col=False): + entry = dict() + + col = self.settings["column_offset"] + row = self.settings["row_offset"] + + if fields_by_col: + row += entry_offset + 1 + if self.settings["row_sections_present"]: + row += 1 + else: + col += entry_offset + 1 + + for label in labels: + cell = worksheet.cell(row=row, column=col) + value = cell.value + if value is not None: + if label is not None: + if label not in do_not_encrypt_list: + encrypted_value = encrypt_value(value, passcode).encode('utf-8') + cell.value = encrypted_value + entry[label] = encrypted_value + self.cell_positions[label] = cell.coordinate + else: + 
self.record_error(cell.coordinate, "Data entry for unknown label") + else: + self.cell_positions[label] = cell.coordinate + if fields_by_col: + col += 1 + else: + row += 1 + + workbook.save(spreadsheet_path) + return entry + + def record_error(self, position, message): + self.errors.append({"position": position, + "message": message}) + + +def vault_constructor(loader, node): + return node.value def encrypt_credentials_file(passcode, deployment_name): @@ -34,50 +150,65 @@ def encrypt_credentials_file(passcode, deployment_name): else: credentials_file = os.path.join( DEPLOYMENT_DIR, deployment_name, 'credentials.yml') - with open(credentials_file, 'r') as file: - credentials = yaml.load(file.read().decode("utf-8"), - Loader=yaml.Loader) - with open('schemas/credentials.json') as credentials_schema: - data = yaml.load(credentials_schema.read().decode("utf-8"), - Loader=yaml.Loader) - props = data['items']['properties'] - do_not_encrypt_list = [] - for k, v in props.items(): - if ('encrypt' in v) and (not v['encrypt']): - do_not_encrypt_list.append(k) + credentials = read_file(credentials_file) + do_not_encrypt_list = get_do_not_encrypt_list() if credentials is not None: for cred_set in credentials: for cred in list(cred_set.keys()): if cred not in do_not_encrypt_list: - secret = VaultSecret(passcode) - editor = VaultEditor() - if not is_encrypted(cred_set[cred]): - vaultCode = editor.encrypt_bytes(cred_set[cred], - secret) - else: - vaultCode = cred_set[cred] - cred_set[cred] = '!vault |\n' + (vaultCode) + encrypted_val = encrypt_value(cred_set[cred], passcode) + cred_set[cred] = encrypted_val gen_example = ExampleFileGenerator(False, True) example_lines = gen_example.generate_example_from_schema( 'schemas/credentials.json') template = jinja2.Template(example_lines) credentials = template.render(credentials=credentials) - with open(credentials_file, 'w') as file: - file.write(credentials.encode("utf-8")) + with open(credentials_file, 'w', encoding='utf-8') as file: + file.write(credentials) + + +def get_do_not_encrypt_list(): + data = read_file('schemas/credentials.json') + props = data['items']['properties'] + do_not_encrypt_list = [] + for k, v in props.items(): + if ('encrypt' in v) and (not v['encrypt']): + do_not_encrypt_list.append(k) + + return do_not_encrypt_list + + +def encrypt_value(value, passcode): + secret = VaultSecret(bytes(passcode.encode('ascii'))) + editor = VaultEditor() + if not is_encrypted(value): + vaultCode = editor.encrypt_bytes(value, secret).decode('ascii') + else: + vaultCode = value + encrypted_val = '!vault |\n' + (vaultCode) + + return encrypted_val + + +def read_file(filepath): + with open(filepath, 'r', encoding='utf-8') as f: + data = yaml.load(f.read(), Loader=yaml.Loader) + + return data def main(): parser = argparse.ArgumentParser( - description='Encrypt user credentials file') + description='Encrypt user credentials (deployment name or Excel spreadsheet path)') parser.add_argument( "deployment", nargs='?', - help="Optional deployment name - will default to 'default' deployment", + help="Required path when encrypting an Excel spreadsheet. 
Optional if using a deployment name - will default to 'default' deployment", default="default") - args = parser.parse_args() + args, unknown = parser.parse_known_args() if "METROAE_PASSWORD" in os.environ: passcode = os.environ["METROAE_PASSWORD"] @@ -98,7 +229,19 @@ def main(): print("Error in getting passcode from command line") sys.exit() - encrypt_credentials_file(passcode, args.deployment) + if args.deployment.endswith('.xlsx'): + parser = ExcelParser() + parser.settings["use_list_name"] = True + parser.settings["default_fields_by_col"] = False + + try: + parser.read_and_encrypt_credentials_excel_sheet(passcode, args.deployment) + except ExcelParseError: + for error in parser.errors: + print("%s | %s" % error["position"], error["message"]) + exit(1) + else: + encrypt_credentials_file(passcode, args.deployment) if __name__ == '__main__': diff --git a/examples/active_standby_vsds/common.yml b/examples/active_standby_vsds/common.yml index aa526aacab..35a2347edc 100644 --- a/examples/active_standby_vsds/common.yml +++ b/examples/active_standby_vsds/common.yml @@ -129,7 +129,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_run_cluster_rtt_test: False # < Ignore RTT Test Errors > -# When true, do not validate the RTT between VSDs in a cluster is less than max RTT +# When true, continue MetroAE execution upon error and do not validate the RTT between VSDs in a cluster is less than max RTT, else stop MetroAE execution upon error # # vsd_ignore_errors_rtt_test: False @@ -164,7 +164,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_disk_performance_test_max_time: 300 # < Ignore Disk Performance Test Errors > -# When true, ignore the results of the VSD disk performance test +# When true, continue MetroAE execution upon error and ignore the results of the VSD disk performance test, else stop MetroAE execution upon error # # vsd_ignore_disk_performance_test_errors: False diff --git a/examples/active_standby_vsds/vsds.yml b/examples/active_standby_vsds/vsds.yml index 0102b23c94..26bc30c651 100644 --- a/examples/active_standby_vsds/vsds.yml +++ b/examples/active_standby_vsds/vsds.yml @@ -126,6 +126,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 2 @@ -245,6 +264,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. 
+ # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 3 @@ -364,6 +402,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 4 @@ -483,6 +540,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 5 @@ -602,6 +678,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 6 @@ -721,5 +816,24 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. 
+ # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + diff --git a/examples/aws_sdwan_install/common.yml b/examples/aws_sdwan_install/common.yml index 1714ee3886..92fd816491 100644 --- a/examples/aws_sdwan_install/common.yml +++ b/examples/aws_sdwan_install/common.yml @@ -124,7 +124,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_run_cluster_rtt_test: False # < Ignore RTT Test Errors > -# When true, do not validate the RTT between VSDs in a cluster is less than max RTT +# When true, continue MetroAE execution upon error and do not validate the RTT between VSDs in a cluster is less than max RTT, else stop MetroAE execution upon error # # vsd_ignore_errors_rtt_test: False @@ -159,7 +159,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_disk_performance_test_max_time: 300 # < Ignore Disk Performance Test Errors > -# When true, ignore the results of the VSD disk performance test +# When true, continue MetroAE execution upon error and ignore the results of the VSD disk performance test, else stop MetroAE execution upon error # # vsd_ignore_disk_performance_test_errors: False diff --git a/examples/aws_sdwan_install/credentials.yml b/examples/aws_sdwan_install/credentials.yml index 3a72cfa987..9ad1f30644 100644 --- a/examples/aws_sdwan_install/credentials.yml +++ b/examples/aws_sdwan_install/credentials.yml @@ -34,22 +34,22 @@ ##### VSD/VSC credentials # < VSD System Username > - # Username to be used while logging into VSD command line. + # VSD Username will be used for logging into VSD command line. Used for both Install and Upgrade procedures. # # vsd_custom_username: root # < VSD System Password > - # Password for the VSD user used to login to the command line + # VSD password will be used for logging into the command line. Used for both Install and Upgrade procedures. # # vsd_custom_password: Alcateldc # < VSC System Username > - # VSC username to login into command line. Should have admin privileges. + # VSC Username will be used for logging into command line (should have admin privileges). Used for upgrade procedure only # # vsc_custom_username: "" # < VSC System Password > - # VSC password to login into command line + # VSC password will be used for logging into the command line. Used for upgrade procedure only # # vsc_custom_password: "" @@ -58,17 +58,17 @@ ##### VSTAT Credentials # < ElasticSearch (Stats) System Username > - # ElasticSearch (Stats) username to login into command line + # ElasticSearch (Stats) Username will be used for logging into command line. Used for both Install and Upgrade procedures. # # vstat_custom_username: "" # < ElasticSearch (Stats) System Password > - # ElasticSearch (Stats) password to login into command line + # ElasticSearch (Stats) password will be used for logging into the command line. Used for both Install and Upgrade procedures. # # vstat_custom_password: "" # < ElasticSearch (Stats) Custom System Password For Root User > - # ElasticSearch (Stats) root password required for Stats Upgrade + # ElasticSearch (Stats) root password required for VSTAT Upgrade only # # vstat_custom_root_password: "" @@ -77,17 +77,17 @@ ##### VSD core services # < VSD API/Architect username > - # Username for API authentication. Must have csproot privileges. Also known as csproot user + # This VSD Username(also known as csproot user). Used for both Install and Upgrade procedures. Must have csproot privileges. # # vsd_auth_username: csproot # < VSD API/Architect Password > - # Password for API authentication. Must have csproot privileges. 
Also known as csproot password + # This VSD password(also known as csproot password) will be used for API authentication. Used for both Install and Upgrade procedures. Must have csproot privileges. # # vsd_auth_password: csproot # < mysql Password > - # Mysql password for vsd + # This VSD Mysql password. Used for both Install and Upgrade procedures. # # vsd_mysql_password: "" @@ -164,12 +164,12 @@ ##### Compute and Proxy # < Compute Username > - # Username for Compute node to install VRS + # Username for Compute node to install VRS. # # compute_username: root # < Compute Password > - # Password for Compute node + # Password for Compute node, and will be used for installation of VRS # # compute_password: "" @@ -204,7 +204,7 @@ ############ - ##### SMTP + ##### SMTP and NFS # < SMTP username > # Username for SMTP authentication @@ -216,27 +216,32 @@ # # smtp_auth_password: "" - ########## + # < NFS System Username > + # NFS username to login into command line, and will be used for NFS configuration. Default user is root. + # + # nfs_custom_username: root + + ################## ##### NETCONF Manager # < NETCONF Manager VM username > - # Username for NETCONF Manager VM. Default user is root + # Username for NETCONF Manager VM, and will be used for the installation of NETCONF Manager. Default user is root. # # netconf_vm_username: root - # < NETCONF Manager VM password for running sudo commands > + # < NETCONF Manager VM password for running sudo commands, and will be used for the installation of NETCONF Manager. > # Password for NETCONF manager VM # # netconf_vm_password: "" # < NETCONF Manager username > - # Username for NETCONF Manager user + # Username for NETCONF Manager user, and will be used for the installation of NETCONF Manager. # # netconf_username: netconfmgr # < NETCONF Manager password > - # Password for NETCONF manager user + # Password for NETCONF manager user, and will be used for the installation of NETCONF Manager. # # netconf_password: password @@ -245,12 +250,12 @@ ##### Health Report Email Server # < Health Report SMTP Server Username > - # Username for SMTP Server + # Username for SMTP Server, and will be used for Email health report. # # health_report_email_server_username: "" # < Health Report SMTP Server Password > - # Password for SMTP Server + # Password for SMTP Server, and will be used for Email health report. # # health_report_email_server_password: "" @@ -259,7 +264,7 @@ ##### Monit Alerts Email Server # < VSD Monit Mail Server Username > - # Username for the monit mail server + # Username for the monit mail server. # # vsd_monit_mail_server_username: "" @@ -273,22 +278,22 @@ ##### NUH notification application # < NUH notification application 1 username > - # Username for NUH notification application + # Username for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_1_username: "" # < NUH notification application 1 password > - # Password for NUH notification application + # Password for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_1_password: "" # < NUH notification application 2 username > - # Username for NUH notification application + # Username for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_2_username: "" # < NUH notification application 2 password > - # Password for NUH notification application + # Password for NUH notification application, and will be used for installation of NUH. 
# # nuh_notification_app_2_password: "" diff --git a/examples/aws_sdwan_install/vsds.yml b/examples/aws_sdwan_install/vsds.yml index adb4a5e15a..414ff1286d 100644 --- a/examples/aws_sdwan_install/vsds.yml +++ b/examples/aws_sdwan_install/vsds.yml @@ -153,6 +153,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 2 @@ -299,6 +318,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 3 @@ -445,5 +483,24 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + diff --git a/examples/aws_sdwan_install/vstats.yml b/examples/aws_sdwan_install/vstats.yml index de38b94925..0d6802f071 100644 --- a/examples/aws_sdwan_install/vstats.yml +++ b/examples/aws_sdwan_install/vstats.yml @@ -164,6 +164,25 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. 
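The vsd_ram / vsd_cpu_cores / vsd_fallocate_size_gb and vstat_ram / vstat_cpu_cores / vstat_allocate_size_gb fields above are per-component overrides; when left commented out, the "(global ...)" placeholders indicate that the deployment-wide value applies. A rough sketch of that fallback, with an assumed file path and placeholder default numbers (not values taken from MetroAE):

    import yaml

    # Placeholder deployment-wide defaults; the actual values come from MetroAE.
    GLOBAL_DEFAULTS = {"vsd_ram": 24, "vsd_cpu_cores": 6, "vsd_fallocate_size_gb": 285}

    def resolve(component, key):
        # A per-component override wins; otherwise fall back to the global value.
        return component.get(key, GLOBAL_DEFAULTS[key])

    with open("deployments/default/vsds.yml") as f:  # hypothetical path
        vsds = yaml.safe_load(f) or []

    for vsd in vsds:
        ram = resolve(vsd, "vsd_ram")
        if ram < GLOBAL_DEFAULTS["vsd_ram"]:
            print("%s: vsd_ram=%s GB is below the default (lab/PoC only)"
                  % (vsd.get("hostname", "?"), ram))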
+ # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + # # VSTAT 2 @@ -321,6 +340,25 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + # # VSTAT 3 @@ -478,5 +516,24 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + diff --git a/examples/aws_sdwan_upgrade/common.yml b/examples/aws_sdwan_upgrade/common.yml index 1714ee3886..92fd816491 100644 --- a/examples/aws_sdwan_upgrade/common.yml +++ b/examples/aws_sdwan_upgrade/common.yml @@ -124,7 +124,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_run_cluster_rtt_test: False # < Ignore RTT Test Errors > -# When true, do not validate the RTT between VSDs in a cluster is less than max RTT +# When true, continue MetroAE execution upon error and do not validate the RTT between VSDs in a cluster is less than max RTT, else stop MetroAE execution upon error # # vsd_ignore_errors_rtt_test: False @@ -159,7 +159,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_disk_performance_test_max_time: 300 # < Ignore Disk Performance Test Errors > -# When true, ignore the results of the VSD disk performance test +# When true, continue MetroAE execution upon error and ignore the results of the VSD disk performance test, else stop MetroAE execution upon error # # vsd_ignore_disk_performance_test_errors: False diff --git a/examples/aws_sdwan_upgrade/credentials.yml b/examples/aws_sdwan_upgrade/credentials.yml index 3a72cfa987..9ad1f30644 100644 --- a/examples/aws_sdwan_upgrade/credentials.yml +++ b/examples/aws_sdwan_upgrade/credentials.yml @@ -34,22 +34,22 @@ ##### VSD/VSC credentials # < VSD System Username > - # Username to be used while logging into VSD command line. + # VSD Username will be used for logging into VSD command line. Used for both Install and Upgrade procedures. # # vsd_custom_username: root # < VSD System Password > - # Password for the VSD user used to login to the command line + # VSD password will be used for logging into the command line. Used for both Install and Upgrade procedures. 
# # vsd_custom_password: Alcateldc # < VSC System Username > - # VSC username to login into command line. Should have admin privileges. + # VSC Username will be used for logging into command line (should have admin privileges). Used for upgrade procedure only # # vsc_custom_username: "" # < VSC System Password > - # VSC password to login into command line + # VSC password will be used for logging into the command line. Used for upgrade procedure only # # vsc_custom_password: "" @@ -58,17 +58,17 @@ ##### VSTAT Credentials # < ElasticSearch (Stats) System Username > - # ElasticSearch (Stats) username to login into command line + # ElasticSearch (Stats) Username will be used for logging into command line. Used for both Install and Upgrade procedures. # # vstat_custom_username: "" # < ElasticSearch (Stats) System Password > - # ElasticSearch (Stats) password to login into command line + # ElasticSearch (Stats) password will be used for logging into the command line. Used for both Install and Upgrade procedures. # # vstat_custom_password: "" # < ElasticSearch (Stats) Custom System Password For Root User > - # ElasticSearch (Stats) root password required for Stats Upgrade + # ElasticSearch (Stats) root password required for VSTAT Upgrade only # # vstat_custom_root_password: "" @@ -77,17 +77,17 @@ ##### VSD core services # < VSD API/Architect username > - # Username for API authentication. Must have csproot privileges. Also known as csproot user + # This VSD Username(also known as csproot user). Used for both Install and Upgrade procedures. Must have csproot privileges. # # vsd_auth_username: csproot # < VSD API/Architect Password > - # Password for API authentication. Must have csproot privileges. Also known as csproot password + # This VSD password(also known as csproot password) will be used for API authentication. Used for both Install and Upgrade procedures. Must have csproot privileges. # # vsd_auth_password: csproot # < mysql Password > - # Mysql password for vsd + # This VSD Mysql password. Used for both Install and Upgrade procedures. # # vsd_mysql_password: "" @@ -164,12 +164,12 @@ ##### Compute and Proxy # < Compute Username > - # Username for Compute node to install VRS + # Username for Compute node to install VRS. # # compute_username: root # < Compute Password > - # Password for Compute node + # Password for Compute node, and will be used for installation of VRS # # compute_password: "" @@ -204,7 +204,7 @@ ############ - ##### SMTP + ##### SMTP and NFS # < SMTP username > # Username for SMTP authentication @@ -216,27 +216,32 @@ # # smtp_auth_password: "" - ########## + # < NFS System Username > + # NFS username to login into command line, and will be used for NFS configuration. Default user is root. + # + # nfs_custom_username: root + + ################## ##### NETCONF Manager # < NETCONF Manager VM username > - # Username for NETCONF Manager VM. Default user is root + # Username for NETCONF Manager VM, and will be used for the installation of NETCONF Manager. Default user is root. # # netconf_vm_username: root - # < NETCONF Manager VM password for running sudo commands > + # < NETCONF Manager VM password for running sudo commands, and will be used for the installation of NETCONF Manager. > # Password for NETCONF manager VM # # netconf_vm_password: "" # < NETCONF Manager username > - # Username for NETCONF Manager user + # Username for NETCONF Manager user, and will be used for the installation of NETCONF Manager. 
# # netconf_username: netconfmgr # < NETCONF Manager password > - # Password for NETCONF manager user + # Password for NETCONF manager user, and will be used for the installation of NETCONF Manager. # # netconf_password: password @@ -245,12 +250,12 @@ ##### Health Report Email Server # < Health Report SMTP Server Username > - # Username for SMTP Server + # Username for SMTP Server, and will be used for Email health report. # # health_report_email_server_username: "" # < Health Report SMTP Server Password > - # Password for SMTP Server + # Password for SMTP Server, and will be used for Email health report. # # health_report_email_server_password: "" @@ -259,7 +264,7 @@ ##### Monit Alerts Email Server # < VSD Monit Mail Server Username > - # Username for the monit mail server + # Username for the monit mail server. # # vsd_monit_mail_server_username: "" @@ -273,22 +278,22 @@ ##### NUH notification application # < NUH notification application 1 username > - # Username for NUH notification application + # Username for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_1_username: "" # < NUH notification application 1 password > - # Password for NUH notification application + # Password for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_1_password: "" # < NUH notification application 2 username > - # Username for NUH notification application + # Username for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_2_username: "" # < NUH notification application 2 password > - # Password for NUH notification application + # Password for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_2_password: "" diff --git a/examples/aws_sdwan_upgrade/upgrade.yml b/examples/aws_sdwan_upgrade/upgrade.yml index fbf1ab518e..3af1056baf 100644 --- a/examples/aws_sdwan_upgrade/upgrade.yml +++ b/examples/aws_sdwan_upgrade/upgrade.yml @@ -38,4 +38,9 @@ upgrade_to_version: "5.2.3" # # upgrade_portal: False +# < VSD Pre-upgrade Database Check Script Path > +# Path on the MetroAE host to the VSD pre-upgrade database check script +# +# vsd_preupgrade_db_check_script_path: "" + ############# diff --git a/examples/aws_sdwan_upgrade/vsds.yml b/examples/aws_sdwan_upgrade/vsds.yml index 0047b9e105..25de3d9760 100644 --- a/examples/aws_sdwan_upgrade/vsds.yml +++ b/examples/aws_sdwan_upgrade/vsds.yml @@ -153,6 +153,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 2 @@ -299,6 +318,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. 
+ # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 3 @@ -445,5 +483,24 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + diff --git a/examples/aws_sdwan_upgrade/vstats.yml b/examples/aws_sdwan_upgrade/vstats.yml index 32d1d091f8..6584ac3b43 100644 --- a/examples/aws_sdwan_upgrade/vstats.yml +++ b/examples/aws_sdwan_upgrade/vstats.yml @@ -164,6 +164,25 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + # # VSTAT 2 @@ -321,6 +340,25 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + # # VSTAT 3 @@ -478,5 +516,24 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. 
Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + diff --git a/examples/blank_deployment/common.yml b/examples/blank_deployment/common.yml index 5d46a35e38..3ce1e1c0b1 100644 --- a/examples/blank_deployment/common.yml +++ b/examples/blank_deployment/common.yml @@ -127,7 +127,7 @@ dns_server_list: [ ] # vsd_run_cluster_rtt_test: False # < Ignore RTT Test Errors > -# When true, do not validate the RTT between VSDs in a cluster is less than max RTT +# When true, continue MetroAE execution upon error and do not validate the RTT between VSDs in a cluster is less than max RTT, else stop MetroAE execution upon error # # vsd_ignore_errors_rtt_test: False @@ -162,7 +162,7 @@ dns_server_list: [ ] # vsd_disk_performance_test_max_time: 300 # < Ignore Disk Performance Test Errors > -# When true, ignore the results of the VSD disk performance test +# When true, continue MetroAE execution upon error and ignore the results of the VSD disk performance test, else stop MetroAE execution upon error # # vsd_ignore_disk_performance_test_errors: False diff --git a/examples/blank_deployment/credentials.yml b/examples/blank_deployment/credentials.yml index 60f9e54175..2d2e758933 100644 --- a/examples/blank_deployment/credentials.yml +++ b/examples/blank_deployment/credentials.yml @@ -35,22 +35,22 @@ ##### VSD/VSC credentials # < VSD System Username > - # Username to be used while logging into VSD command line. + # VSD Username will be used for logging into VSD command line. Used for both Install and Upgrade procedures. # # vsd_custom_username: root # < VSD System Password > - # Password for the VSD user used to login to the command line + # VSD password will be used for logging into the command line. Used for both Install and Upgrade procedures. # # vsd_custom_password: Alcateldc # < VSC System Username > - # VSC username to login into command line. Should have admin privileges. + # VSC Username will be used for logging into command line (should have admin privileges). Used for upgrade procedure only # # vsc_custom_username: "" # < VSC System Password > - # VSC password to login into command line + # VSC password will be used for logging into the command line. Used for upgrade procedure only # # vsc_custom_password: "" @@ -59,17 +59,17 @@ ##### VSTAT Credentials # < ElasticSearch (Stats) System Username > - # ElasticSearch (Stats) username to login into command line + # ElasticSearch (Stats) Username will be used for logging into command line. Used for both Install and Upgrade procedures. # # vstat_custom_username: "" # < ElasticSearch (Stats) System Password > - # ElasticSearch (Stats) password to login into command line + # ElasticSearch (Stats) password will be used for logging into the command line. Used for both Install and Upgrade procedures. # # vstat_custom_password: "" # < ElasticSearch (Stats) Custom System Password For Root User > - # ElasticSearch (Stats) root password required for Stats Upgrade + # ElasticSearch (Stats) root password required for VSTAT Upgrade only # # vstat_custom_root_password: "" @@ -78,17 +78,17 @@ ##### VSD core services # < VSD API/Architect username > - # Username for API authentication. Must have csproot privileges. 
Also known as csproot user + # This VSD Username(also known as csproot user). Used for both Install and Upgrade procedures. Must have csproot privileges. # # vsd_auth_username: csproot # < VSD API/Architect Password > - # Password for API authentication. Must have csproot privileges. Also known as csproot password + # This VSD password(also known as csproot password) will be used for API authentication. Used for both Install and Upgrade procedures. Must have csproot privileges. # # vsd_auth_password: csproot # < mysql Password > - # Mysql password for vsd + # This VSD Mysql password. Used for both Install and Upgrade procedures. # # vsd_mysql_password: "" @@ -165,7 +165,7 @@ ##### OpenStack Credentials # < OpenStack Username > - # Username for OpenStack + # Username for OpenStack. # # openstack_username: "" @@ -179,7 +179,7 @@ ##### vcenter # < vCenter Username > - # vCenter Username + # vCenter Username. # # vcenter_username: "" @@ -193,12 +193,12 @@ ##### Compute and Proxy # < Compute Username > - # Username for Compute node to install VRS + # Username for Compute node to install VRS. # # compute_username: root # < Compute Password > - # Password for Compute node + # Password for Compute node, and will be used for installation of VRS # # compute_password: "" @@ -233,7 +233,7 @@ ############ - ##### SMTP + ##### SMTP and NFS # < SMTP username > # Username for SMTP authentication @@ -245,27 +245,32 @@ # # smtp_auth_password: "" - ########## + # < NFS System Username > + # NFS username to login into command line, and will be used for NFS configuration. Default user is root. + # + # nfs_custom_username: root + + ################## ##### NETCONF Manager # < NETCONF Manager VM username > - # Username for NETCONF Manager VM. Default user is root + # Username for NETCONF Manager VM, and will be used for the installation of NETCONF Manager. Default user is root. # # netconf_vm_username: root - # < NETCONF Manager VM password for running sudo commands > + # < NETCONF Manager VM password for running sudo commands, and will be used for the installation of NETCONF Manager. > # Password for NETCONF manager VM # # netconf_vm_password: "" # < NETCONF Manager username > - # Username for NETCONF Manager user + # Username for NETCONF Manager user, and will be used for the installation of NETCONF Manager. # # netconf_username: netconfmgr # < NETCONF Manager password > - # Password for NETCONF manager user + # Password for NETCONF manager user, and will be used for the installation of NETCONF Manager. # # netconf_password: password @@ -274,12 +279,12 @@ ##### Health Report Email Server # < Health Report SMTP Server Username > - # Username for SMTP Server + # Username for SMTP Server, and will be used for Email health report. # # health_report_email_server_username: "" # < Health Report SMTP Server Password > - # Password for SMTP Server + # Password for SMTP Server, and will be used for Email health report. # # health_report_email_server_password: "" @@ -288,7 +293,7 @@ ##### Monit Alerts Email Server # < VSD Monit Mail Server Username > - # Username for the monit mail server + # Username for the monit mail server. # # vsd_monit_mail_server_username: "" @@ -302,22 +307,22 @@ ##### NUH notification application # < NUH notification application 1 username > - # Username for NUH notification application + # Username for NUH notification application, and will be used for installation of NUH. 
# # nuh_notification_app_1_username: "" # < NUH notification application 1 password > - # Password for NUH notification application + # Password for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_1_password: "" # < NUH notification application 2 username > - # Username for NUH notification application + # Username for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_2_username: "" # < NUH notification application 2 password > - # Password for NUH notification application + # Password for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_2_password: "" diff --git a/examples/blank_deployment/nfss.yml b/examples/blank_deployment/nfss.yml new file mode 100644 index 0000000000..e748a2cef4 --- /dev/null +++ b/examples/blank_deployment/nfss.yml @@ -0,0 +1,29 @@ +############################################################################### +# NFS Server VM +# +# Configure NFS Server VM using MetroAE. Note: Metroae will not bring up the NFS server, it will configure it for Elasticsearch mounting. +# + +# +# NFS 1 +# +- + ##### NFS parameters + + # < Hostname > + # Hostname of NFS Server + # + hostname: "" + + # < NFS server IP address > + # IP address of the NFS server. + # + nfs_ip: "" + + # < NFS mount directory location > + # Optional user specified location of the mount directory to export for the NFS. Defaults to /nfs. + # + # mount_directory_location: /nfs + + #################### + diff --git a/examples/blank_deployment/nsgv_network_port_vlans.yml b/examples/blank_deployment/nsgv_network_port_vlans.yml new file mode 100644 index 0000000000..22c8b0b8b9 --- /dev/null +++ b/examples/blank_deployment/nsgv_network_port_vlans.yml @@ -0,0 +1,44 @@ +############################################################################### +# NSGv Network Port VLANs +# +# Specify NSGvs network port VLAN configuration. +# + +# +# NSGv 1 +# +- + ##### Network ports + + # < NSGv Network Port VLAN Name > + # VLAN name of the NSGv network port + # + name: "" + + # < NSGv Network Port VLAN Number > + # VLAN number of the NSGv network port + # + vlan_number: 0 + + # < VSC Infra Profile Name > + # Name of the VSC infra profile for the NSG on the VSD + # + vsc_infra_profile_name: "" + + # < VSC Infra Profile First Controller > + # Host name or IP address of the VSC infra profile first controller for the NSG + # + first_controller_address: "" + + # < VSC Infra Profile Second Controller > + # Host name or IP address of the VSC infra profile second controller for the NSG + # + # second_controller_address: "" + + # < Create uplink connection on this Vlan > + # If vlan 0 has an uplink, then other vlans can't. If multiple uplinks are defined, then network acceleration will be enabled + # + uplink: false + + ################### + diff --git a/examples/blank_deployment/nsgvs.yml b/examples/blank_deployment/nsgvs.yml index da7be7f636..5aed600e5b 100644 --- a/examples/blank_deployment/nsgvs.yml +++ b/examples/blank_deployment/nsgvs.yml @@ -274,6 +274,11 @@ # # network_port_physical_name: port1 + # < NSG Network Port VLAN list > + # VLAN name list of the network port for the NSG + # + # network_port_vlans: [] + # < NSG Access Port Name > # Name of the access port for the NSG. 
Deprecated in favor of access_ports # diff --git a/examples/blank_deployment/upgrade.yml b/examples/blank_deployment/upgrade.yml index 7126d2d8cb..2d5b6a0b94 100644 --- a/examples/blank_deployment/upgrade.yml +++ b/examples/blank_deployment/upgrade.yml @@ -36,5 +36,10 @@ # # upgrade_portal: False +# < VSD Pre-upgrade Database Check Script Path > +# Path on the MetroAE host to the VSD pre-upgrade database check script +# +# vsd_preupgrade_db_check_script_path: "" + ############# diff --git a/examples/blank_deployment/vsds.yml b/examples/blank_deployment/vsds.yml index 157ff49c7e..49d4f10b77 100644 --- a/examples/blank_deployment/vsds.yml +++ b/examples/blank_deployment/vsds.yml @@ -245,3 +245,22 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + diff --git a/examples/blank_deployment/vstats.yml b/examples/blank_deployment/vstats.yml index 2b73c28484..e464eaee4f 100644 --- a/examples/blank_deployment/vstats.yml +++ b/examples/blank_deployment/vstats.yml @@ -258,3 +258,22 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + diff --git a/examples/blank_deployment/webfilters.yml b/examples/blank_deployment/webfilters.yml index 5efa6b1567..8a9df7cc5b 100644 --- a/examples/blank_deployment/webfilters.yml +++ b/examples/blank_deployment/webfilters.yml @@ -69,6 +69,21 @@ # # cert_name: (Hostname) + # < Http proxy > + # Optional HTTP Proxy for webfilter VM + # + # web_http_proxy: "" + + # < Http proxy host > + # HTTP Proxy host for webfilter proxy + # + # web_proxy_host: "" + + # < Http proxy port > + # HTTP Proxy port for webfilter proxy + # + # web_proxy_port: "" + # < Run incompass operation command > # Run incompass operation command. This is enabled by default and may take up to 20 minutes and requires internet connection. 
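The nsgv_network_port_vlans entries above define named VLANs on the NSGv network port, and the new network_port_vlans list in nsgvs.yml references them by name. The uplink rule they describe (an uplink on VLAN 0 excludes uplinks on other VLANs; multiple uplinks enable network acceleration) can be checked with a small sketch like this, using hypothetical sample data:

    def check_uplinks(port_vlans):
        # port_vlans: list of dicts shaped like the nsgv_network_port_vlans entries.
        uplink_vlans = [v["vlan_number"] for v in port_vlans if v.get("uplink")]
        if 0 in uplink_vlans and len(uplink_vlans) > 1:
            raise ValueError("If VLAN 0 has an uplink, other VLANs cannot")
        # More than one uplink implies network acceleration is enabled.
        return len(uplink_vlans) > 1

    vlans = [
        {"name": "mgmt", "vlan_number": 0, "uplink": True},
        {"name": "data", "vlan_number": 100, "uplink": False},
    ]
    print(check_uplinks(vlans))  # -> False: a single uplink, no acceleration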
# diff --git a/examples/csv_deployment/sample_deployment.csv b/examples/csv_deployment/sample_deployment.csv index 5ef7d3cc07..14e361425c 100644 --- a/examples/csv_deployment/sample_deployment.csv +++ b/examples/csv_deployment/sample_deployment.csv @@ -26,6 +26,14 @@ This Spreadsheet can be used as an Input to MetroAE, ,Data Network Bridge,br1,,Network Bridge used for the data path of a component or the Control interface on VSC. This will be a Distributed Virtual PortGroup (DVPG) when deploying on vCenter or a Linux network bridge when deploying on KVM. This field can be overridden by defining the Data network bridge separately in the component configuration , ,, +,NFS Server VM, +,"Configure NFS Server VM using MetroAE. Note: Metroae will not bring up the NFS server, it will configure it for Elasticsearch mounting.", +,, +,nfss,nfss1,,Descriptions, +,Hostname,nfs1.company.com,,Hostname of NFS Server, +,NFS server IP address,192.168.110.74,,IP address of the NFS server. +, +,, ,Deployment details, ,Nuage VSP Deployment, ,, diff --git a/examples/excel/active_standby_vsds.xlsx b/examples/excel/active_standby_vsds.xlsx index d523c6713c..eab67d6b7f 100644 Binary files a/examples/excel/active_standby_vsds.xlsx and b/examples/excel/active_standby_vsds.xlsx differ diff --git a/examples/excel/aws_sdwan_install.xlsx b/examples/excel/aws_sdwan_install.xlsx index 26e78be2c8..7b57f26638 100644 Binary files a/examples/excel/aws_sdwan_install.xlsx and b/examples/excel/aws_sdwan_install.xlsx differ diff --git a/examples/excel/aws_sdwan_upgrade.xlsx b/examples/excel/aws_sdwan_upgrade.xlsx index b5a16264d0..9626b55af1 100644 Binary files a/examples/excel/aws_sdwan_upgrade.xlsx and b/examples/excel/aws_sdwan_upgrade.xlsx differ diff --git a/examples/excel/blank_deployment.xlsx b/examples/excel/blank_deployment.xlsx index 57d89d38d8..9be6df12da 100644 Binary files a/examples/excel/blank_deployment.xlsx and b/examples/excel/blank_deployment.xlsx differ diff --git a/examples/excel/kvm_datacenter_install.xlsx b/examples/excel/kvm_datacenter_install.xlsx index f7f8d29739..54e2575df7 100644 Binary files a/examples/excel/kvm_datacenter_install.xlsx and b/examples/excel/kvm_datacenter_install.xlsx differ diff --git a/examples/excel/kvm_portal_install.xlsx b/examples/excel/kvm_portal_install.xlsx index 7603932828..8eb7c2e54f 100644 Binary files a/examples/excel/kvm_portal_install.xlsx and b/examples/excel/kvm_portal_install.xlsx differ diff --git a/examples/excel/kvm_sdwan_install.xlsx b/examples/excel/kvm_sdwan_install.xlsx index aa252cc7c0..d76ba3572b 100644 Binary files a/examples/excel/kvm_sdwan_install.xlsx and b/examples/excel/kvm_sdwan_install.xlsx differ diff --git a/examples/excel/kvm_vsc_upgrade.xlsx b/examples/excel/kvm_vsc_upgrade.xlsx index b38f3736e4..48d0826073 100644 Binary files a/examples/excel/kvm_vsc_upgrade.xlsx and b/examples/excel/kvm_vsc_upgrade.xlsx differ diff --git a/examples/excel/kvm_vsd_install.xlsx b/examples/excel/kvm_vsd_install.xlsx index b4bcf96677..b05884947c 100644 Binary files a/examples/excel/kvm_vsd_install.xlsx and b/examples/excel/kvm_vsd_install.xlsx differ diff --git a/examples/excel/nsgv_bootstrap.xlsx b/examples/excel/nsgv_bootstrap.xlsx index 45b2774e6f..0d797f19c2 100644 Binary files a/examples/excel/nsgv_bootstrap.xlsx and b/examples/excel/nsgv_bootstrap.xlsx differ diff --git a/examples/excel/openstack_install.xlsx b/examples/excel/openstack_install.xlsx index 5061bd00e4..e2aade773c 100644 Binary files a/examples/excel/openstack_install.xlsx and 
b/examples/excel/openstack_install.xlsx differ diff --git a/examples/excel/stats_out.xlsx b/examples/excel/stats_out.xlsx index b693cc5662..ae66846a66 100644 Binary files a/examples/excel/stats_out.xlsx and b/examples/excel/stats_out.xlsx differ diff --git a/examples/excel/upgrade_datacenter_vcenter.xlsx b/examples/excel/upgrade_datacenter_vcenter.xlsx index b6a1d5fd81..ef535c2af8 100644 Binary files a/examples/excel/upgrade_datacenter_vcenter.xlsx and b/examples/excel/upgrade_datacenter_vcenter.xlsx differ diff --git a/examples/excel/vcenter_datacenter_install.xlsx b/examples/excel/vcenter_datacenter_install.xlsx index 44b3e7c430..d0c2c43f6a 100644 Binary files a/examples/excel/vcenter_datacenter_install.xlsx and b/examples/excel/vcenter_datacenter_install.xlsx differ diff --git a/examples/excel/vcenter_sdwan_install.xlsx b/examples/excel/vcenter_sdwan_install.xlsx index 55701254ce..26869b3e78 100644 Binary files a/examples/excel/vcenter_sdwan_install.xlsx and b/examples/excel/vcenter_sdwan_install.xlsx differ diff --git a/examples/kvm_datacenter_install/common.yml b/examples/kvm_datacenter_install/common.yml index b1f3a12486..315c1ec071 100644 --- a/examples/kvm_datacenter_install/common.yml +++ b/examples/kvm_datacenter_install/common.yml @@ -129,7 +129,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_run_cluster_rtt_test: False # < Ignore RTT Test Errors > -# When true, do not validate the RTT between VSDs in a cluster is less than max RTT +# When true, continue MetroAE execution upon error and do not validate the RTT between VSDs in a cluster is less than max RTT, else stop MetroAE execution upon error # # vsd_ignore_errors_rtt_test: False @@ -164,7 +164,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_disk_performance_test_max_time: 300 # < Ignore Disk Performance Test Errors > -# When true, ignore the results of the VSD disk performance test +# When true, continue MetroAE execution upon error and ignore the results of the VSD disk performance test, else stop MetroAE execution upon error # # vsd_ignore_disk_performance_test_errors: False diff --git a/examples/kvm_datacenter_install/vsds.yml b/examples/kvm_datacenter_install/vsds.yml index 351551cae3..6df15b435a 100644 --- a/examples/kvm_datacenter_install/vsds.yml +++ b/examples/kvm_datacenter_install/vsds.yml @@ -126,6 +126,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 2 @@ -245,6 +264,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. 
+ # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 3 @@ -364,5 +402,24 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + diff --git a/examples/kvm_datacenter_install/vstats.yml b/examples/kvm_datacenter_install/vstats.yml index 0a195caff1..f990a71710 100644 --- a/examples/kvm_datacenter_install/vstats.yml +++ b/examples/kvm_datacenter_install/vstats.yml @@ -154,6 +154,25 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + # # VSTAT 2 @@ -301,6 +320,25 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + # # VSTAT 3 @@ -448,5 +486,24 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. 
+ # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + diff --git a/examples/kvm_portal_install/common.yml b/examples/kvm_portal_install/common.yml index f804d160a5..1142778176 100644 --- a/examples/kvm_portal_install/common.yml +++ b/examples/kvm_portal_install/common.yml @@ -129,7 +129,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_run_cluster_rtt_test: False # < Ignore RTT Test Errors > -# When true, do not validate the RTT between VSDs in a cluster is less than max RTT +# When true, continue MetroAE execution upon error and do not validate the RTT between VSDs in a cluster is less than max RTT, else stop MetroAE execution upon error # # vsd_ignore_errors_rtt_test: False @@ -164,7 +164,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_disk_performance_test_max_time: 300 # < Ignore Disk Performance Test Errors > -# When true, ignore the results of the VSD disk performance test +# When true, continue MetroAE execution upon error and ignore the results of the VSD disk performance test, else stop MetroAE execution upon error # # vsd_ignore_disk_performance_test_errors: False diff --git a/examples/kvm_portal_install/credentials.yml b/examples/kvm_portal_install/credentials.yml index 347cd453f1..b0376b51f8 100644 --- a/examples/kvm_portal_install/credentials.yml +++ b/examples/kvm_portal_install/credentials.yml @@ -39,22 +39,22 @@ ##### VSD/VSC credentials # < VSD System Username > - # Username to be used while logging into VSD command line. + # VSD Username will be used for logging into VSD command line. Used for both Install and Upgrade procedures. # # vsd_custom_username: root # < VSD System Password > - # Password for the VSD user used to login to the command line + # VSD password will be used for logging into the command line. Used for both Install and Upgrade procedures. # # vsd_custom_password: Alcateldc # < VSC System Username > - # VSC username to login into command line. Should have admin privileges. + # VSC Username will be used for logging into command line (should have admin privileges). Used for upgrade procedure only # # vsc_custom_username: "" # < VSC System Password > - # VSC password to login into command line + # VSC password will be used for logging into the command line. Used for upgrade procedure only # # vsc_custom_password: "" @@ -63,17 +63,17 @@ ##### VSTAT Credentials # < ElasticSearch (Stats) System Username > - # ElasticSearch (Stats) username to login into command line + # ElasticSearch (Stats) Username will be used for logging into command line. Used for both Install and Upgrade procedures. # # vstat_custom_username: "" # < ElasticSearch (Stats) System Password > - # ElasticSearch (Stats) password to login into command line + # ElasticSearch (Stats) password will be used for logging into the command line. Used for both Install and Upgrade procedures. # # vstat_custom_password: "" # < ElasticSearch (Stats) Custom System Password For Root User > - # ElasticSearch (Stats) root password required for Stats Upgrade + # ElasticSearch (Stats) root password required for VSTAT Upgrade only # # vstat_custom_root_password: "" @@ -82,17 +82,17 @@ ##### VSD core services # < VSD API/Architect username > - # Username for API authentication. Must have csproot privileges. 
Also known as csproot user + # This VSD Username(also known as csproot user). Used for both Install and Upgrade procedures. Must have csproot privileges. # # vsd_auth_username: csproot # < VSD API/Architect Password > - # Password for API authentication. Must have csproot privileges. Also known as csproot password + # This VSD password(also known as csproot password) will be used for API authentication. Used for both Install and Upgrade procedures. Must have csproot privileges. # # vsd_auth_password: csproot # < mysql Password > - # Mysql password for vsd + # This VSD Mysql password. Used for both Install and Upgrade procedures. # # vsd_mysql_password: "" @@ -155,12 +155,12 @@ ##### Compute and Proxy # < Compute Username > - # Username for Compute node to install VRS + # Username for Compute node to install VRS. # # compute_username: root # < Compute Password > - # Password for Compute node + # Password for Compute node, and will be used for installation of VRS # # compute_password: "" @@ -195,7 +195,7 @@ ############ - ##### SMTP + ##### SMTP and NFS # < SMTP username > # Username for SMTP authentication @@ -207,27 +207,32 @@ # # smtp_auth_password: "" - ########## + # < NFS System Username > + # NFS username to login into command line, and will be used for NFS configuration. Default user is root. + # + # nfs_custom_username: root + + ################## ##### NETCONF Manager # < NETCONF Manager VM username > - # Username for NETCONF Manager VM. Default user is root + # Username for NETCONF Manager VM, and will be used for the installation of NETCONF Manager. Default user is root. # # netconf_vm_username: root - # < NETCONF Manager VM password for running sudo commands > + # < NETCONF Manager VM password for running sudo commands, and will be used for the installation of NETCONF Manager. > # Password for NETCONF manager VM # # netconf_vm_password: "" # < NETCONF Manager username > - # Username for NETCONF Manager user + # Username for NETCONF Manager user, and will be used for the installation of NETCONF Manager. # # netconf_username: netconfmgr # < NETCONF Manager password > - # Password for NETCONF manager user + # Password for NETCONF manager user, and will be used for the installation of NETCONF Manager. # # netconf_password: password @@ -236,12 +241,12 @@ ##### Health Report Email Server # < Health Report SMTP Server Username > - # Username for SMTP Server + # Username for SMTP Server, and will be used for Email health report. # # health_report_email_server_username: "" # < Health Report SMTP Server Password > - # Password for SMTP Server + # Password for SMTP Server, and will be used for Email health report. # # health_report_email_server_password: "" @@ -250,7 +255,7 @@ ##### Monit Alerts Email Server # < VSD Monit Mail Server Username > - # Username for the monit mail server + # Username for the monit mail server. # # vsd_monit_mail_server_username: "" @@ -264,22 +269,22 @@ ##### NUH notification application # < NUH notification application 1 username > - # Username for NUH notification application + # Username for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_1_username: "" # < NUH notification application 1 password > - # Password for NUH notification application + # Password for NUH notification application, and will be used for installation of NUH. 
# # nuh_notification_app_1_password: "" # < NUH notification application 2 username > - # Username for NUH notification application + # Username for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_2_username: "" # < NUH notification application 2 password > - # Password for NUH notification application + # Password for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_2_password: "" diff --git a/examples/kvm_sdwan_install/common.yml b/examples/kvm_sdwan_install/common.yml index b1f3a12486..315c1ec071 100644 --- a/examples/kvm_sdwan_install/common.yml +++ b/examples/kvm_sdwan_install/common.yml @@ -129,7 +129,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_run_cluster_rtt_test: False # < Ignore RTT Test Errors > -# When true, do not validate the RTT between VSDs in a cluster is less than max RTT +# When true, continue MetroAE execution upon error and do not validate the RTT between VSDs in a cluster is less than max RTT, else stop MetroAE execution upon error # # vsd_ignore_errors_rtt_test: False @@ -164,7 +164,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_disk_performance_test_max_time: 300 # < Ignore Disk Performance Test Errors > -# When true, ignore the results of the VSD disk performance test +# When true, continue MetroAE execution upon error and ignore the results of the VSD disk performance test, else stop MetroAE execution upon error # # vsd_ignore_disk_performance_test_errors: False diff --git a/examples/kvm_sdwan_install/nfss.yml b/examples/kvm_sdwan_install/nfss.yml new file mode 100644 index 0000000000..055ec1f7d0 --- /dev/null +++ b/examples/kvm_sdwan_install/nfss.yml @@ -0,0 +1,35 @@ +############################################################################### +# NFS Server VM +# +# Configure NFS Server VM using MetroAE. Note: Metroae will not bring up the NFS server, it will configure it for Elasticsearch mounting. +# +# Automatically generated by script. +# + + + +# +# NFS 1 +# +- + ##### NFS parameters + + # < Hostname > + # Hostname of NFS Server + # + hostname: "nfs1.company.com" + + # < NFS server IP address > + # IP address of the NFS server. + # + nfs_ip: "192.168.110.74" + + # < NFS mount directory location > + # Optional user specified location of the mount directory to export for the NFS. Defaults to /nfs. + # + mount_directory_location: "/nfs" + + #################### + + + diff --git a/examples/kvm_sdwan_install/nsgvs.yml b/examples/kvm_sdwan_install/nsgvs.yml index f10934e8fe..be6e18f8c5 100644 --- a/examples/kvm_sdwan_install/nsgvs.yml +++ b/examples/kvm_sdwan_install/nsgvs.yml @@ -278,6 +278,11 @@ # # network_port_physical_name: port1 + # < NSG Network Port VLAN list > + # VLAN name list of the network port for the NSG + # + # network_port_vlans: [] + # < NSG Access Port Name > # Name of the access port for the NSG. Deprecated in favor of access_ports # diff --git a/examples/kvm_sdwan_install/vsds.yml b/examples/kvm_sdwan_install/vsds.yml index 351551cae3..6df15b435a 100644 --- a/examples/kvm_sdwan_install/vsds.yml +++ b/examples/kvm_sdwan_install/vsds.yml @@ -126,6 +126,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. 
+ # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 2 @@ -245,6 +264,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 3 @@ -364,5 +402,24 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + diff --git a/examples/kvm_sdwan_install/vstats.yml b/examples/kvm_sdwan_install/vstats.yml index 00ef89bcb1..6a311794a2 100644 --- a/examples/kvm_sdwan_install/vstats.yml +++ b/examples/kvm_sdwan_install/vstats.yml @@ -154,6 +154,25 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + # # VSTAT 2 @@ -301,6 +320,25 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. 
Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + # # VSTAT 3 @@ -448,5 +486,24 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + diff --git a/examples/kvm_vsc_upgrade/common.yml b/examples/kvm_vsc_upgrade/common.yml index b1f3a12486..315c1ec071 100644 --- a/examples/kvm_vsc_upgrade/common.yml +++ b/examples/kvm_vsc_upgrade/common.yml @@ -129,7 +129,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_run_cluster_rtt_test: False # < Ignore RTT Test Errors > -# When true, do not validate the RTT between VSDs in a cluster is less than max RTT +# When true, continue MetroAE execution upon error and do not validate the RTT between VSDs in a cluster is less than max RTT, else stop MetroAE execution upon error # # vsd_ignore_errors_rtt_test: False @@ -164,7 +164,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_disk_performance_test_max_time: 300 # < Ignore Disk Performance Test Errors > -# When true, ignore the results of the VSD disk performance test +# When true, continue MetroAE execution upon error and ignore the results of the VSD disk performance test, else stop MetroAE execution upon error # # vsd_ignore_disk_performance_test_errors: False diff --git a/examples/kvm_vsc_upgrade/credentials.yml b/examples/kvm_vsc_upgrade/credentials.yml index 744bc1c4d6..5a8a7295e1 100644 --- a/examples/kvm_vsc_upgrade/credentials.yml +++ b/examples/kvm_vsc_upgrade/credentials.yml @@ -39,22 +39,22 @@ ##### VSD/VSC credentials # < VSD System Username > - # Username to be used while logging into VSD command line. + # VSD Username will be used for logging into VSD command line. Used for both Install and Upgrade procedures. # # vsd_custom_username: root # < VSD System Password > - # Password for the VSD user used to login to the command line + # VSD password will be used for logging into the command line. Used for both Install and Upgrade procedures. # # vsd_custom_password: Alcateldc # < VSC System Username > - # VSC username to login into command line. Should have admin privileges. + # VSC Username will be used for logging into command line (should have admin privileges). Used for upgrade procedure only # vsc_custom_username: vscuser # < VSC System Password > - # VSC password to login into command line + # VSC password will be used for logging into the command line. Used for upgrade procedure only # vsc_custom_password: vscpass @@ -63,17 +63,17 @@ ##### VSTAT Credentials # < ElasticSearch (Stats) System Username > - # ElasticSearch (Stats) username to login into command line + # ElasticSearch (Stats) Username will be used for logging into command line. Used for both Install and Upgrade procedures. 
# # vstat_custom_username: "" # < ElasticSearch (Stats) System Password > - # ElasticSearch (Stats) password to login into command line + # ElasticSearch (Stats) password will be used for logging into the command line. Used for both Install and Upgrade procedures. # # vstat_custom_password: "" # < ElasticSearch (Stats) Custom System Password For Root User > - # ElasticSearch (Stats) root password required for Stats Upgrade + # ElasticSearch (Stats) root password required for VSTAT Upgrade only # # vstat_custom_root_password: "" @@ -82,17 +82,17 @@ ##### VSD core services # < VSD API/Architect username > - # Username for API authentication. Must have csproot privileges. Also known as csproot user + # This VSD Username(also known as csproot user). Used for both Install and Upgrade procedures. Must have csproot privileges. # # vsd_auth_username: csproot # < VSD API/Architect Password > - # Password for API authentication. Must have csproot privileges. Also known as csproot password + # This VSD password(also known as csproot password) will be used for API authentication. Used for both Install and Upgrade procedures. Must have csproot privileges. # # vsd_auth_password: csproot # < mysql Password > - # Mysql password for vsd + # This VSD Mysql password. Used for both Install and Upgrade procedures. # # vsd_mysql_password: "" @@ -155,12 +155,12 @@ ##### Compute and Proxy # < Compute Username > - # Username for Compute node to install VRS + # Username for Compute node to install VRS. # # compute_username: root # < Compute Password > - # Password for Compute node + # Password for Compute node, and will be used for installation of VRS # # compute_password: "" @@ -195,7 +195,7 @@ ############ - ##### SMTP + ##### SMTP and NFS # < SMTP username > # Username for SMTP authentication @@ -207,27 +207,32 @@ # # smtp_auth_password: "" - ########## + # < NFS System Username > + # NFS username to login into command line, and will be used for NFS configuration. Default user is root. + # + # nfs_custom_username: root + + ################## ##### NETCONF Manager # < NETCONF Manager VM username > - # Username for NETCONF Manager VM. Default user is root + # Username for NETCONF Manager VM, and will be used for the installation of NETCONF Manager. Default user is root. # # netconf_vm_username: root - # < NETCONF Manager VM password for running sudo commands > + # < NETCONF Manager VM password for running sudo commands, and will be used for the installation of NETCONF Manager. > # Password for NETCONF manager VM # # netconf_vm_password: "" # < NETCONF Manager username > - # Username for NETCONF Manager user + # Username for NETCONF Manager user, and will be used for the installation of NETCONF Manager. # # netconf_username: netconfmgr # < NETCONF Manager password > - # Password for NETCONF manager user + # Password for NETCONF manager user, and will be used for the installation of NETCONF Manager. # # netconf_password: password @@ -236,12 +241,12 @@ ##### Health Report Email Server # < Health Report SMTP Server Username > - # Username for SMTP Server + # Username for SMTP Server, and will be used for Email health report. # # health_report_email_server_username: "" # < Health Report SMTP Server Password > - # Password for SMTP Server + # Password for SMTP Server, and will be used for Email health report. # # health_report_email_server_password: "" @@ -250,7 +255,7 @@ ##### Monit Alerts Email Server # < VSD Monit Mail Server Username > - # Username for the monit mail server + # Username for the monit mail server. 
# # vsd_monit_mail_server_username: "" @@ -264,22 +269,22 @@ ##### NUH notification application # < NUH notification application 1 username > - # Username for NUH notification application + # Username for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_1_username: "" # < NUH notification application 1 password > - # Password for NUH notification application + # Password for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_1_password: "" # < NUH notification application 2 username > - # Username for NUH notification application + # Username for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_2_username: "" # < NUH notification application 2 password > - # Password for NUH notification application + # Password for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_2_password: "" diff --git a/examples/kvm_vsc_upgrade/upgrade.yml b/examples/kvm_vsc_upgrade/upgrade.yml index fbf1ab518e..3af1056baf 100644 --- a/examples/kvm_vsc_upgrade/upgrade.yml +++ b/examples/kvm_vsc_upgrade/upgrade.yml @@ -38,4 +38,9 @@ upgrade_to_version: "5.2.3" # # upgrade_portal: False +# < VSD Pre-upgrade Database Check Script Path > +# Path on the MetroAE host to the VSD pre-upgrade database check script +# +# vsd_preupgrade_db_check_script_path: "" + ############# diff --git a/examples/kvm_vsc_upgrade/vsds.yml b/examples/kvm_vsc_upgrade/vsds.yml index 351551cae3..6df15b435a 100644 --- a/examples/kvm_vsc_upgrade/vsds.yml +++ b/examples/kvm_vsc_upgrade/vsds.yml @@ -126,6 +126,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 2 @@ -245,6 +264,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 3 @@ -364,5 +402,24 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. 
+ # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + diff --git a/examples/kvm_vsd_install/common.yml b/examples/kvm_vsd_install/common.yml index 7cceefe260..32242e1ef3 100644 --- a/examples/kvm_vsd_install/common.yml +++ b/examples/kvm_vsd_install/common.yml @@ -129,7 +129,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_run_cluster_rtt_test: False # < Ignore RTT Test Errors > -# When true, do not validate the RTT between VSDs in a cluster is less than max RTT +# When true, continue MetroAE execution upon error and do not validate the RTT between VSDs in a cluster is less than max RTT, else stop MetroAE execution upon error # # vsd_ignore_errors_rtt_test: False @@ -164,7 +164,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_disk_performance_test_max_time: 300 # < Ignore Disk Performance Test Errors > -# When true, ignore the results of the VSD disk performance test +# When true, continue MetroAE execution upon error and ignore the results of the VSD disk performance test, else stop MetroAE execution upon error # # vsd_ignore_disk_performance_test_errors: False diff --git a/examples/kvm_vsd_install/vsds.yml b/examples/kvm_vsd_install/vsds.yml index e80215c877..009327c4fa 100644 --- a/examples/kvm_vsd_install/vsds.yml +++ b/examples/kvm_vsd_install/vsds.yml @@ -126,5 +126,24 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. 
+ # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + diff --git a/examples/minimum_required_blank_deployment/nfss.yml b/examples/minimum_required_blank_deployment/nfss.yml new file mode 100644 index 0000000000..823cfbf173 --- /dev/null +++ b/examples/minimum_required_blank_deployment/nfss.yml @@ -0,0 +1,3 @@ +- + hostname: "" + nfs_ip: "" diff --git a/examples/minimum_required_blank_deployment/nsgv_network_port_vlans.yml b/examples/minimum_required_blank_deployment/nsgv_network_port_vlans.yml new file mode 100644 index 0000000000..e401adb12d --- /dev/null +++ b/examples/minimum_required_blank_deployment/nsgv_network_port_vlans.yml @@ -0,0 +1,6 @@ +- + name: "" + vlan_number: 0 + vsc_infra_profile_name: "" + first_controller_address: "" + uplink: false diff --git a/examples/nsgv_bootstrap/common.yml b/examples/nsgv_bootstrap/common.yml index b1f3a12486..315c1ec071 100644 --- a/examples/nsgv_bootstrap/common.yml +++ b/examples/nsgv_bootstrap/common.yml @@ -129,7 +129,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_run_cluster_rtt_test: False # < Ignore RTT Test Errors > -# When true, do not validate the RTT between VSDs in a cluster is less than max RTT +# When true, continue MetroAE execution upon error and do not validate the RTT between VSDs in a cluster is less than max RTT, else stop MetroAE execution upon error # # vsd_ignore_errors_rtt_test: False @@ -164,7 +164,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_disk_performance_test_max_time: 300 # < Ignore Disk Performance Test Errors > -# When true, ignore the results of the VSD disk performance test +# When true, continue MetroAE execution upon error and ignore the results of the VSD disk performance test, else stop MetroAE execution upon error # # vsd_ignore_disk_performance_test_errors: False diff --git a/examples/nsgv_bootstrap/credentials.yml b/examples/nsgv_bootstrap/credentials.yml index 7135e0355e..8a898de041 100644 --- a/examples/nsgv_bootstrap/credentials.yml +++ b/examples/nsgv_bootstrap/credentials.yml @@ -39,22 +39,22 @@ ##### VSD/VSC credentials # < VSD System Username > - # Username to be used while logging into VSD command line. + # VSD Username will be used for logging into VSD command line. Used for both Install and Upgrade procedures. # # vsd_custom_username: root # < VSD System Password > - # Password for the VSD user used to login to the command line + # VSD password will be used for logging into the command line. Used for both Install and Upgrade procedures. # # vsd_custom_password: Alcateldc # < VSC System Username > - # VSC username to login into command line. Should have admin privileges. + # VSC Username will be used for logging into command line (should have admin privileges). Used for upgrade procedure only # # vsc_custom_username: "" # < VSC System Password > - # VSC password to login into command line + # VSC password will be used for logging into the command line. Used for upgrade procedure only # # vsc_custom_password: "" @@ -63,17 +63,17 @@ ##### VSTAT Credentials # < ElasticSearch (Stats) System Username > - # ElasticSearch (Stats) username to login into command line + # ElasticSearch (Stats) Username will be used for logging into command line. Used for both Install and Upgrade procedures. # # vstat_custom_username: "" # < ElasticSearch (Stats) System Password > - # ElasticSearch (Stats) password to login into command line + # ElasticSearch (Stats) password will be used for logging into the command line. Used for both Install and Upgrade procedures. 
# # vstat_custom_password: "" # < ElasticSearch (Stats) Custom System Password For Root User > - # ElasticSearch (Stats) root password required for Stats Upgrade + # ElasticSearch (Stats) root password required for VSTAT Upgrade only # # vstat_custom_root_password: "" @@ -82,17 +82,17 @@ ##### VSD core services # < VSD API/Architect username > - # Username for API authentication. Must have csproot privileges. Also known as csproot user + # This VSD Username(also known as csproot user). Used for both Install and Upgrade procedures. Must have csproot privileges. # # vsd_auth_username: csproot # < VSD API/Architect Password > - # Password for API authentication. Must have csproot privileges. Also known as csproot password + # This VSD password(also known as csproot password) will be used for API authentication. Used for both Install and Upgrade procedures. Must have csproot privileges. # # vsd_auth_password: csproot # < mysql Password > - # Mysql password for vsd + # This VSD Mysql password. Used for both Install and Upgrade procedures. # # vsd_mysql_password: "" @@ -169,7 +169,7 @@ ##### OpenStack Credentials # < OpenStack Username > - # Username for OpenStack + # Username for OpenStack. # # openstack_username: "" @@ -183,7 +183,7 @@ ##### vcenter # < vCenter Username > - # vCenter Username + # vCenter Username. # # vcenter_username: "" @@ -197,12 +197,12 @@ ##### Compute and Proxy # < Compute Username > - # Username for Compute node to install VRS + # Username for Compute node to install VRS. # # compute_username: root # < Compute Password > - # Password for Compute node + # Password for Compute node, and will be used for installation of VRS # # compute_password: "" @@ -237,7 +237,7 @@ ############ - ##### SMTP + ##### SMTP and NFS # < SMTP username > # Username for SMTP authentication @@ -249,27 +249,32 @@ # # smtp_auth_password: "" - ########## + # < NFS System Username > + # NFS username to login into command line, and will be used for NFS configuration. Default user is root. + # + # nfs_custom_username: root + + ################## ##### NETCONF Manager # < NETCONF Manager VM username > - # Username for NETCONF Manager VM. Default user is root + # Username for NETCONF Manager VM, and will be used for the installation of NETCONF Manager. Default user is root. # # netconf_vm_username: root - # < NETCONF Manager VM password for running sudo commands > + # < NETCONF Manager VM password for running sudo commands, and will be used for the installation of NETCONF Manager. > # Password for NETCONF manager VM # # netconf_vm_password: "" # < NETCONF Manager username > - # Username for NETCONF Manager user + # Username for NETCONF Manager user, and will be used for the installation of NETCONF Manager. # # netconf_username: netconfmgr # < NETCONF Manager password > - # Password for NETCONF manager user + # Password for NETCONF manager user, and will be used for the installation of NETCONF Manager. # # netconf_password: password @@ -278,12 +283,12 @@ ##### Health Report Email Server # < Health Report SMTP Server Username > - # Username for SMTP Server + # Username for SMTP Server, and will be used for Email health report. # # health_report_email_server_username: "" # < Health Report SMTP Server Password > - # Password for SMTP Server + # Password for SMTP Server, and will be used for Email health report. 
# # health_report_email_server_password: "" @@ -292,7 +297,7 @@ ##### Monit Alerts Email Server # < VSD Monit Mail Server Username > - # Username for the monit mail server + # Username for the monit mail server. # # vsd_monit_mail_server_username: "" @@ -306,22 +311,22 @@ ##### NUH notification application # < NUH notification application 1 username > - # Username for NUH notification application + # Username for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_1_username: "" # < NUH notification application 1 password > - # Password for NUH notification application + # Password for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_1_password: "" # < NUH notification application 2 username > - # Username for NUH notification application + # Username for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_2_username: "" # < NUH notification application 2 password > - # Password for NUH notification application + # Password for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_2_password: "" diff --git a/examples/nsgv_bootstrap/nsgv_network_port_vlans.yml b/examples/nsgv_bootstrap/nsgv_network_port_vlans.yml new file mode 100644 index 0000000000..a7c4ae4619 --- /dev/null +++ b/examples/nsgv_bootstrap/nsgv_network_port_vlans.yml @@ -0,0 +1,89 @@ +############################################################################### +# NSGv Network Port VLANs +# +# Specify NSGvs network port VLAN configuration. +# +# Automatically generated by script. +# + + + +# +# NSGv 1 +# +- + ##### Network ports + + # < NSGv Network Port VLAN Name > + # VLAN name of the NSGv network port + # + name: "vlan1" + + # < NSGv Network Port VLAN Number > + # VLAN number of the NSGv network port + # + vlan_number: 0 + + # < VSC Infra Profile Name > + # Name of the VSC infra profile for the NSG on the VSD + # + vsc_infra_profile_name: "vsc1" + + # < VSC Infra Profile First Controller > + # Host name or IP address of the VSC infra profile first controller for the NSG + # + first_controller_address: "10.0.0.1" + + # < VSC Infra Profile Second Controller > + # Host name or IP address of the VSC infra profile second controller for the NSG + # + second_controller_address: "10.0.0.2" + + # < Create uplink connection on this Vlan > + # If vlan 0 has an uplink, then other vlans can't. If multiple uplinks are defined, then network acceleration will be enabled + # + uplink: true + + ################### + + +# +# NSGv 2 +# +- + ##### Network ports + + # < NSGv Network Port VLAN Name > + # VLAN name of the NSGv network port + # + name: "vlan2" + + # < NSGv Network Port VLAN Number > + # VLAN number of the NSGv network port + # + vlan_number: 2 + + # < VSC Infra Profile Name > + # Name of the VSC infra profile for the NSG on the VSD + # + vsc_infra_profile_name: "vsc2" + + # < VSC Infra Profile First Controller > + # Host name or IP address of the VSC infra profile first controller for the NSG + # + first_controller_address: "10.0.1.1" + + # < VSC Infra Profile Second Controller > + # Host name or IP address of the VSC infra profile second controller for the NSG + # + second_controller_address: "10.0.1.2" + + # < Create uplink connection on this Vlan > + # If vlan 0 has an uplink, then other vlans can't. 
If multiple uplinks are defined, then network acceleration will be enabled + # + uplink: false + + ################### + + + diff --git a/examples/nsgv_bootstrap/nsgvs.yml b/examples/nsgv_bootstrap/nsgvs.yml index 7a3a0a4fa9..31ddaae424 100644 --- a/examples/nsgv_bootstrap/nsgvs.yml +++ b/examples/nsgv_bootstrap/nsgvs.yml @@ -278,6 +278,11 @@ # # network_port_physical_name: port1 + # < NSG Network Port VLAN list > + # VLAN name list of the network port for the NSG + # + # network_port_vlans: [] + # < NSG Access Port Name > # Name of the access port for the NSG. Deprecated in favor of access_ports # diff --git a/examples/nsgv_bootstrap/vsds.yml b/examples/nsgv_bootstrap/vsds.yml index 351551cae3..6df15b435a 100644 --- a/examples/nsgv_bootstrap/vsds.yml +++ b/examples/nsgv_bootstrap/vsds.yml @@ -126,6 +126,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 2 @@ -245,6 +264,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 3 @@ -364,5 +402,24 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + diff --git a/examples/nsgv_bootstrap/vstats.yml b/examples/nsgv_bootstrap/vstats.yml index 00ef89bcb1..6a311794a2 100644 --- a/examples/nsgv_bootstrap/vstats.yml +++ b/examples/nsgv_bootstrap/vstats.yml @@ -154,6 +154,25 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. 
Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + # # VSTAT 2 @@ -301,6 +320,25 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + # # VSTAT 3 @@ -448,5 +486,24 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. 
+ # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + diff --git a/examples/openstack_install/common.yml b/examples/openstack_install/common.yml index def6d3352a..d287b77cd2 100644 --- a/examples/openstack_install/common.yml +++ b/examples/openstack_install/common.yml @@ -124,7 +124,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_run_cluster_rtt_test: False # < Ignore RTT Test Errors > -# When true, do not validate the RTT between VSDs in a cluster is less than max RTT +# When true, continue MetroAE execution upon error and do not validate the RTT between VSDs in a cluster is less than max RTT, else stop MetroAE execution upon error # # vsd_ignore_errors_rtt_test: False @@ -159,7 +159,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_disk_performance_test_max_time: 300 # < Ignore Disk Performance Test Errors > -# When true, ignore the results of the VSD disk performance test +# When true, continue MetroAE execution upon error and ignore the results of the VSD disk performance test, else stop MetroAE execution upon error # # vsd_ignore_disk_performance_test_errors: False diff --git a/examples/openstack_install/credentials.yml b/examples/openstack_install/credentials.yml index dec25e865b..c0318aab51 100644 --- a/examples/openstack_install/credentials.yml +++ b/examples/openstack_install/credentials.yml @@ -34,22 +34,22 @@ ##### VSD/VSC credentials # < VSD System Username > - # Username to be used while logging into VSD command line. + # VSD Username will be used for logging into VSD command line. Used for both Install and Upgrade procedures. # # vsd_custom_username: root # < VSD System Password > - # Password for the VSD user used to login to the command line + # VSD password will be used for logging into the command line. Used for both Install and Upgrade procedures. # # vsd_custom_password: Alcateldc # < VSC System Username > - # VSC username to login into command line. Should have admin privileges. + # VSC Username will be used for logging into command line (should have admin privileges). Used for upgrade procedure only # # vsc_custom_username: "" # < VSC System Password > - # VSC password to login into command line + # VSC password will be used for logging into the command line. Used for upgrade procedure only # # vsc_custom_password: "" @@ -58,17 +58,17 @@ ##### VSTAT Credentials # < ElasticSearch (Stats) System Username > - # ElasticSearch (Stats) username to login into command line + # ElasticSearch (Stats) Username will be used for logging into command line. Used for both Install and Upgrade procedures. # # vstat_custom_username: "" # < ElasticSearch (Stats) System Password > - # ElasticSearch (Stats) password to login into command line + # ElasticSearch (Stats) password will be used for logging into the command line. Used for both Install and Upgrade procedures. # # vstat_custom_password: "" # < ElasticSearch (Stats) Custom System Password For Root User > - # ElasticSearch (Stats) root password required for Stats Upgrade + # ElasticSearch (Stats) root password required for VSTAT Upgrade only # # vstat_custom_root_password: "" @@ -77,17 +77,17 @@ ##### VSD core services # < VSD API/Architect username > - # Username for API authentication. Must have csproot privileges. Also known as csproot user + # This VSD Username(also known as csproot user). Used for both Install and Upgrade procedures. Must have csproot privileges. # # vsd_auth_username: csproot # < VSD API/Architect Password > - # Password for API authentication. Must have csproot privileges. 
Also known as csproot password + # This VSD password(also known as csproot password) will be used for API authentication. Used for both Install and Upgrade procedures. Must have csproot privileges. # # vsd_auth_password: csproot # < mysql Password > - # Mysql password for vsd + # This VSD Mysql password. Used for both Install and Upgrade procedures. # # vsd_mysql_password: "" @@ -150,7 +150,7 @@ ##### OpenStack Credentials # < OpenStack Username > - # Username for OpenStack + # Username for OpenStack. # openstack_username: YOUR_OPENSTACK_USERNAME @@ -164,12 +164,12 @@ ##### Compute and Proxy # < Compute Username > - # Username for Compute node to install VRS + # Username for Compute node to install VRS. # # compute_username: root # < Compute Password > - # Password for Compute node + # Password for Compute node, and will be used for installation of VRS # # compute_password: "" @@ -204,7 +204,7 @@ ############ - ##### SMTP + ##### SMTP and NFS # < SMTP username > # Username for SMTP authentication @@ -216,27 +216,32 @@ # # smtp_auth_password: "" - ########## + # < NFS System Username > + # NFS username to login into command line, and will be used for NFS configuration. Default user is root. + # + # nfs_custom_username: root + + ################## ##### NETCONF Manager # < NETCONF Manager VM username > - # Username for NETCONF Manager VM. Default user is root + # Username for NETCONF Manager VM, and will be used for the installation of NETCONF Manager. Default user is root. # # netconf_vm_username: root - # < NETCONF Manager VM password for running sudo commands > + # < NETCONF Manager VM password for running sudo commands, and will be used for the installation of NETCONF Manager. > # Password for NETCONF manager VM # # netconf_vm_password: "" # < NETCONF Manager username > - # Username for NETCONF Manager user + # Username for NETCONF Manager user, and will be used for the installation of NETCONF Manager. # # netconf_username: netconfmgr # < NETCONF Manager password > - # Password for NETCONF manager user + # Password for NETCONF manager user, and will be used for the installation of NETCONF Manager. # # netconf_password: password @@ -245,12 +250,12 @@ ##### Health Report Email Server # < Health Report SMTP Server Username > - # Username for SMTP Server + # Username for SMTP Server, and will be used for Email health report. # # health_report_email_server_username: "" # < Health Report SMTP Server Password > - # Password for SMTP Server + # Password for SMTP Server, and will be used for Email health report. # # health_report_email_server_password: "" @@ -259,7 +264,7 @@ ##### Monit Alerts Email Server # < VSD Monit Mail Server Username > - # Username for the monit mail server + # Username for the monit mail server. # # vsd_monit_mail_server_username: "" @@ -273,22 +278,22 @@ ##### NUH notification application # < NUH notification application 1 username > - # Username for NUH notification application + # Username for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_1_username: "" # < NUH notification application 1 password > - # Password for NUH notification application + # Password for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_1_password: "" # < NUH notification application 2 username > - # Username for NUH notification application + # Username for NUH notification application, and will be used for installation of NUH. 
# # nuh_notification_app_2_username: "" # < NUH notification application 2 password > - # Password for NUH notification application + # Password for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_2_password: "" diff --git a/examples/openstack_install/vsds.yml b/examples/openstack_install/vsds.yml index 7bf84eacdf..6d8a098cff 100644 --- a/examples/openstack_install/vsds.yml +++ b/examples/openstack_install/vsds.yml @@ -168,6 +168,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 2 @@ -329,6 +348,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 3 @@ -490,5 +528,24 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + diff --git a/examples/openstack_install/vstats.yml b/examples/openstack_install/vstats.yml index 1d65422091..f05163ab26 100644 --- a/examples/openstack_install/vstats.yml +++ b/examples/openstack_install/vstats.yml @@ -174,6 +174,25 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. 
Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + # # VSTAT 2 @@ -341,6 +360,25 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + # # VSTAT 3 @@ -508,5 +546,24 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + diff --git a/examples/stats_out/common.yml b/examples/stats_out/common.yml index 414deb49d1..2471050174 100644 --- a/examples/stats_out/common.yml +++ b/examples/stats_out/common.yml @@ -129,7 +129,7 @@ stats_out_proxy: "192.168.120.150" # vsd_run_cluster_rtt_test: False # < Ignore RTT Test Errors > -# When true, do not validate the RTT between VSDs in a cluster is less than max RTT +# When true, continue MetroAE execution upon error and do not validate the RTT between VSDs in a cluster is less than max RTT, else stop MetroAE execution upon error # # vsd_ignore_errors_rtt_test: False @@ -164,7 +164,7 @@ stats_out_proxy: "192.168.120.150" # vsd_disk_performance_test_max_time: 300 # < Ignore Disk Performance Test Errors > -# When true, ignore the results of the VSD disk performance test +# When true, continue MetroAE execution upon error and ignore the results of the VSD disk performance test, else stop MetroAE execution upon error # # vsd_ignore_disk_performance_test_errors: False diff --git a/examples/stats_out/vsds.yml b/examples/stats_out/vsds.yml index 388672d5e0..f63afa4b4b 100644 --- a/examples/stats_out/vsds.yml +++ b/examples/stats_out/vsds.yml @@ -126,6 +126,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. 
+ # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 2 @@ -245,6 +264,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 3 @@ -364,6 +402,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 4 @@ -483,6 +540,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 5 @@ -602,6 +678,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. 
+ # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 6 @@ -721,5 +816,24 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + diff --git a/examples/stats_out/vstats.yml b/examples/stats_out/vstats.yml index b117daa9aa..a021f49b86 100644 --- a/examples/stats_out/vstats.yml +++ b/examples/stats_out/vstats.yml @@ -154,6 +154,25 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + # # VSTAT 2 @@ -301,6 +320,25 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + # # VSTAT 3 @@ -448,6 +486,25 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. 
+ # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + # # VSTAT 4 @@ -595,6 +652,25 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + # # VSTAT 5 @@ -742,6 +818,25 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + # # VSTAT 6 @@ -889,5 +984,24 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. 
+ # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + diff --git a/examples/upgrade_datacenter_vcenter/common.yml b/examples/upgrade_datacenter_vcenter/common.yml index 6fe2344ee1..653dc20ee0 100644 --- a/examples/upgrade_datacenter_vcenter/common.yml +++ b/examples/upgrade_datacenter_vcenter/common.yml @@ -124,7 +124,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_run_cluster_rtt_test: False # < Ignore RTT Test Errors > -# When true, do not validate the RTT between VSDs in a cluster is less than max RTT +# When true, continue MetroAE execution upon error and do not validate the RTT between VSDs in a cluster is less than max RTT, else stop MetroAE execution upon error # # vsd_ignore_errors_rtt_test: False @@ -159,7 +159,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_disk_performance_test_max_time: 300 # < Ignore Disk Performance Test Errors > -# When true, ignore the results of the VSD disk performance test +# When true, continue MetroAE execution upon error and ignore the results of the VSD disk performance test, else stop MetroAE execution upon error # # vsd_ignore_disk_performance_test_errors: False diff --git a/examples/upgrade_datacenter_vcenter/credentials.yml b/examples/upgrade_datacenter_vcenter/credentials.yml index 72444e9edd..5b40ce17c2 100644 --- a/examples/upgrade_datacenter_vcenter/credentials.yml +++ b/examples/upgrade_datacenter_vcenter/credentials.yml @@ -34,22 +34,22 @@ ##### VSD/VSC credentials # < VSD System Username > - # Username to be used while logging into VSD command line. + # VSD Username will be used for logging into VSD command line. Used for both Install and Upgrade procedures. # # vsd_custom_username: root # < VSD System Password > - # Password for the VSD user used to login to the command line + # VSD password will be used for logging into the command line. Used for both Install and Upgrade procedures. # # vsd_custom_password: Alcateldc # < VSC System Username > - # VSC username to login into command line. Should have admin privileges. + # VSC Username will be used for logging into command line (should have admin privileges). Used for upgrade procedure only # # vsc_custom_username: "" # < VSC System Password > - # VSC password to login into command line + # VSC password will be used for logging into the command line. Used for upgrade procedure only # # vsc_custom_password: "" @@ -58,17 +58,17 @@ ##### VSTAT Credentials # < ElasticSearch (Stats) System Username > - # ElasticSearch (Stats) username to login into command line + # ElasticSearch (Stats) Username will be used for logging into command line. Used for both Install and Upgrade procedures. # # vstat_custom_username: "" # < ElasticSearch (Stats) System Password > - # ElasticSearch (Stats) password to login into command line + # ElasticSearch (Stats) password will be used for logging into the command line. Used for both Install and Upgrade procedures. # # vstat_custom_password: "" # < ElasticSearch (Stats) Custom System Password For Root User > - # ElasticSearch (Stats) root password required for Stats Upgrade + # ElasticSearch (Stats) root password required for VSTAT Upgrade only # # vstat_custom_root_password: "" @@ -77,17 +77,17 @@ ##### VSD core services # < VSD API/Architect username > - # Username for API authentication. Must have csproot privileges. Also known as csproot user + # This VSD Username(also known as csproot user). Used for both Install and Upgrade procedures. Must have csproot privileges. 
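The reworded descriptions for the RTT and disk-performance checks in common.yml boil down to two continue-or-stop switches. A small sketch, using only flag names and defaults that appear in the file, with one flag flipped to show the continue-on-error case:

    # common.yml sketch: keep running if the disk performance test fails,
    # but still stop MetroAE if the VSD cluster RTT check fails.
    vsd_run_cluster_rtt_test: True
    vsd_ignore_errors_rtt_test: False
    vsd_ignore_disk_performance_test_errors: True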
# # vsd_auth_username: csproot # < VSD API/Architect Password > - # Password for API authentication. Must have csproot privileges. Also known as csproot password + # This VSD password(also known as csproot password) will be used for API authentication. Used for both Install and Upgrade procedures. Must have csproot privileges. # # vsd_auth_password: csproot # < mysql Password > - # Mysql password for vsd + # This VSD Mysql password. Used for both Install and Upgrade procedures. # # vsd_mysql_password: "" @@ -150,7 +150,7 @@ ##### vcenter # < vCenter Username > - # vCenter Username + # vCenter Username. # vcenter_username: admin @@ -164,12 +164,12 @@ ##### Compute and Proxy # < Compute Username > - # Username for Compute node to install VRS + # Username for Compute node to install VRS. # # compute_username: root # < Compute Password > - # Password for Compute node + # Password for Compute node, and will be used for installation of VRS # # compute_password: "" @@ -204,7 +204,7 @@ ############ - ##### SMTP + ##### SMTP and NFS # < SMTP username > # Username for SMTP authentication @@ -216,27 +216,32 @@ # # smtp_auth_password: "" - ########## + # < NFS System Username > + # NFS username to login into command line, and will be used for NFS configuration. Default user is root. + # + # nfs_custom_username: root + + ################## ##### NETCONF Manager # < NETCONF Manager VM username > - # Username for NETCONF Manager VM. Default user is root + # Username for NETCONF Manager VM, and will be used for the installation of NETCONF Manager. Default user is root. # # netconf_vm_username: root - # < NETCONF Manager VM password for running sudo commands > + # < NETCONF Manager VM password for running sudo commands, and will be used for the installation of NETCONF Manager. > # Password for NETCONF manager VM # # netconf_vm_password: "" # < NETCONF Manager username > - # Username for NETCONF Manager user + # Username for NETCONF Manager user, and will be used for the installation of NETCONF Manager. # # netconf_username: netconfmgr # < NETCONF Manager password > - # Password for NETCONF manager user + # Password for NETCONF manager user, and will be used for the installation of NETCONF Manager. # # netconf_password: password @@ -245,12 +250,12 @@ ##### Health Report Email Server # < Health Report SMTP Server Username > - # Username for SMTP Server + # Username for SMTP Server, and will be used for Email health report. # # health_report_email_server_username: "" # < Health Report SMTP Server Password > - # Password for SMTP Server + # Password for SMTP Server, and will be used for Email health report. # # health_report_email_server_password: "" @@ -259,7 +264,7 @@ ##### Monit Alerts Email Server # < VSD Monit Mail Server Username > - # Username for the monit mail server + # Username for the monit mail server. # # vsd_monit_mail_server_username: "" @@ -273,22 +278,22 @@ ##### NUH notification application # < NUH notification application 1 username > - # Username for NUH notification application + # Username for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_1_username: "" # < NUH notification application 1 password > - # Password for NUH notification application + # Password for NUH notification application, and will be used for installation of NUH. 
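Taken together, the credential descriptions above separate system (command-line) logins from the VSD API login. A minimal credentials.yml fragment using only names and default values shown in the file; empty strings are site-specific and must be filled in before use:

    # credentials.yml sketch: command-line logins vs. API login
    vsd_custom_username: root        # VSD command-line user (install and upgrade)
    vsd_custom_password: Alcateldc   # VSD command-line password
    vsc_custom_username: ""          # VSC command-line user, needs admin privileges (upgrade only)
    vsc_custom_password: ""          # set per site
    vsd_auth_username: csproot       # VSD API/Architect user (csproot privileges)
    vsd_auth_password: csproot       # VSD API/Architect password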
# # nuh_notification_app_1_password: "" # < NUH notification application 2 username > - # Username for NUH notification application + # Username for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_2_username: "" # < NUH notification application 2 password > - # Password for NUH notification application + # Password for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_2_password: "" diff --git a/examples/upgrade_datacenter_vcenter/upgrade.yml b/examples/upgrade_datacenter_vcenter/upgrade.yml index fbf1ab518e..3af1056baf 100644 --- a/examples/upgrade_datacenter_vcenter/upgrade.yml +++ b/examples/upgrade_datacenter_vcenter/upgrade.yml @@ -38,4 +38,9 @@ upgrade_to_version: "5.2.3" # # upgrade_portal: False +# < VSD Pre-upgrade Database Check Script Path > +# Path on the MetroAE host to the VSD pre-upgrade database check script +# +# vsd_preupgrade_db_check_script_path: "" + ############# diff --git a/examples/upgrade_datacenter_vcenter/vsds.yml b/examples/upgrade_datacenter_vcenter/vsds.yml index 7f2a022e39..dafba711ad 100644 --- a/examples/upgrade_datacenter_vcenter/vsds.yml +++ b/examples/upgrade_datacenter_vcenter/vsds.yml @@ -159,6 +159,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 2 @@ -311,6 +330,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 3 @@ -463,5 +501,24 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. 
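As with the stats nodes, the per-VSD sizing parameters are intended to be uncommented only when one VSD must differ from the global defaults. A brief, hedged fragment; the three parameter names are from the file above, the hostname field and numeric values are assumptions, and production values must not fall below the defaults:

    # Hypothetical override for a single VSD entry in vsds.yml (lab/PoC style numbers)
    - hostname: vsd1.example.com     # assumed field and value for illustration
      vsd_ram: 24                    # RAM for this VSD, in GB
      vsd_cpu_cores: 6               # CPU cores for this VSD
      vsd_fallocate_size_gb: 285     # disk space to pre-allocate, in GB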
+ # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + diff --git a/examples/upgrade_datacenter_vcenter/vstats.yml b/examples/upgrade_datacenter_vcenter/vstats.yml index a886feeabb..b0e9e21b2d 100644 --- a/examples/upgrade_datacenter_vcenter/vstats.yml +++ b/examples/upgrade_datacenter_vcenter/vstats.yml @@ -175,6 +175,25 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + # # VSTAT 2 @@ -343,6 +362,25 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + # # VSTAT 3 @@ -511,5 +549,24 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. 
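The new vsd_preupgrade_db_check_script_path key added to upgrade.yml above defaults to an empty string; when set, it should point at the database check script as seen from the MetroAE host. The path below is purely an invented placeholder:

    # upgrade.yml sketch (the script path is a made-up example, not a shipped location)
    upgrade_to_version: "5.2.3"
    vsd_preupgrade_db_check_script_path: "/metroae/scripts/vsd-db-check.sh"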
+ # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + diff --git a/examples/vcenter_datacenter_install/common.yml b/examples/vcenter_datacenter_install/common.yml index 6fe2344ee1..653dc20ee0 100644 --- a/examples/vcenter_datacenter_install/common.yml +++ b/examples/vcenter_datacenter_install/common.yml @@ -124,7 +124,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_run_cluster_rtt_test: False # < Ignore RTT Test Errors > -# When true, do not validate the RTT between VSDs in a cluster is less than max RTT +# When true, continue MetroAE execution upon error and do not validate the RTT between VSDs in a cluster is less than max RTT, else stop MetroAE execution upon error # # vsd_ignore_errors_rtt_test: False @@ -159,7 +159,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_disk_performance_test_max_time: 300 # < Ignore Disk Performance Test Errors > -# When true, ignore the results of the VSD disk performance test +# When true, continue MetroAE execution upon error and ignore the results of the VSD disk performance test, else stop MetroAE execution upon error # # vsd_ignore_disk_performance_test_errors: False diff --git a/examples/vcenter_datacenter_install/credentials.yml b/examples/vcenter_datacenter_install/credentials.yml index 72444e9edd..5b40ce17c2 100644 --- a/examples/vcenter_datacenter_install/credentials.yml +++ b/examples/vcenter_datacenter_install/credentials.yml @@ -34,22 +34,22 @@ ##### VSD/VSC credentials # < VSD System Username > - # Username to be used while logging into VSD command line. + # VSD Username will be used for logging into VSD command line. Used for both Install and Upgrade procedures. # # vsd_custom_username: root # < VSD System Password > - # Password for the VSD user used to login to the command line + # VSD password will be used for logging into the command line. Used for both Install and Upgrade procedures. # # vsd_custom_password: Alcateldc # < VSC System Username > - # VSC username to login into command line. Should have admin privileges. + # VSC Username will be used for logging into command line (should have admin privileges). Used for upgrade procedure only # # vsc_custom_username: "" # < VSC System Password > - # VSC password to login into command line + # VSC password will be used for logging into the command line. Used for upgrade procedure only # # vsc_custom_password: "" @@ -58,17 +58,17 @@ ##### VSTAT Credentials # < ElasticSearch (Stats) System Username > - # ElasticSearch (Stats) username to login into command line + # ElasticSearch (Stats) Username will be used for logging into command line. Used for both Install and Upgrade procedures. # # vstat_custom_username: "" # < ElasticSearch (Stats) System Password > - # ElasticSearch (Stats) password to login into command line + # ElasticSearch (Stats) password will be used for logging into the command line. Used for both Install and Upgrade procedures. # # vstat_custom_password: "" # < ElasticSearch (Stats) Custom System Password For Root User > - # ElasticSearch (Stats) root password required for Stats Upgrade + # ElasticSearch (Stats) root password required for VSTAT Upgrade only # # vstat_custom_root_password: "" @@ -77,17 +77,17 @@ ##### VSD core services # < VSD API/Architect username > - # Username for API authentication. Must have csproot privileges. Also known as csproot user + # This VSD Username(also known as csproot user). Used for both Install and Upgrade procedures. Must have csproot privileges. 
# # vsd_auth_username: csproot # < VSD API/Architect Password > - # Password for API authentication. Must have csproot privileges. Also known as csproot password + # This VSD password(also known as csproot password) will be used for API authentication. Used for both Install and Upgrade procedures. Must have csproot privileges. # # vsd_auth_password: csproot # < mysql Password > - # Mysql password for vsd + # This VSD Mysql password. Used for both Install and Upgrade procedures. # # vsd_mysql_password: "" @@ -150,7 +150,7 @@ ##### vcenter # < vCenter Username > - # vCenter Username + # vCenter Username. # vcenter_username: admin @@ -164,12 +164,12 @@ ##### Compute and Proxy # < Compute Username > - # Username for Compute node to install VRS + # Username for Compute node to install VRS. # # compute_username: root # < Compute Password > - # Password for Compute node + # Password for Compute node, and will be used for installation of VRS # # compute_password: "" @@ -204,7 +204,7 @@ ############ - ##### SMTP + ##### SMTP and NFS # < SMTP username > # Username for SMTP authentication @@ -216,27 +216,32 @@ # # smtp_auth_password: "" - ########## + # < NFS System Username > + # NFS username to login into command line, and will be used for NFS configuration. Default user is root. + # + # nfs_custom_username: root + + ################## ##### NETCONF Manager # < NETCONF Manager VM username > - # Username for NETCONF Manager VM. Default user is root + # Username for NETCONF Manager VM, and will be used for the installation of NETCONF Manager. Default user is root. # # netconf_vm_username: root - # < NETCONF Manager VM password for running sudo commands > + # < NETCONF Manager VM password for running sudo commands, and will be used for the installation of NETCONF Manager. > # Password for NETCONF manager VM # # netconf_vm_password: "" # < NETCONF Manager username > - # Username for NETCONF Manager user + # Username for NETCONF Manager user, and will be used for the installation of NETCONF Manager. # # netconf_username: netconfmgr # < NETCONF Manager password > - # Password for NETCONF manager user + # Password for NETCONF manager user, and will be used for the installation of NETCONF Manager. # # netconf_password: password @@ -245,12 +250,12 @@ ##### Health Report Email Server # < Health Report SMTP Server Username > - # Username for SMTP Server + # Username for SMTP Server, and will be used for Email health report. # # health_report_email_server_username: "" # < Health Report SMTP Server Password > - # Password for SMTP Server + # Password for SMTP Server, and will be used for Email health report. # # health_report_email_server_password: "" @@ -259,7 +264,7 @@ ##### Monit Alerts Email Server # < VSD Monit Mail Server Username > - # Username for the monit mail server + # Username for the monit mail server. # # vsd_monit_mail_server_username: "" @@ -273,22 +278,22 @@ ##### NUH notification application # < NUH notification application 1 username > - # Username for NUH notification application + # Username for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_1_username: "" # < NUH notification application 1 password > - # Password for NUH notification application + # Password for NUH notification application, and will be used for installation of NUH. 
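The new NFS entry and the expanded NETCONF Manager descriptions above map onto a handful of plain keys. A sketch using the default values shown in the file; the passwords are placeholders and should always be replaced:

    # credentials.yml sketch: NFS and NETCONF Manager access
    nfs_custom_username: root       # user for NFS configuration (default root)
    netconf_vm_username: root       # VM-level user, used during NETCONF Manager installation
    netconf_vm_password: ""         # set per site
    netconf_username: netconfmgr    # NETCONF Manager application user
    netconf_password: password      # default from the example; change it in real deployments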
# # nuh_notification_app_1_password: "" # < NUH notification application 2 username > - # Username for NUH notification application + # Username for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_2_username: "" # < NUH notification application 2 password > - # Password for NUH notification application + # Password for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_2_password: "" diff --git a/examples/vcenter_datacenter_install/vsds.yml b/examples/vcenter_datacenter_install/vsds.yml index 7f2a022e39..dafba711ad 100644 --- a/examples/vcenter_datacenter_install/vsds.yml +++ b/examples/vcenter_datacenter_install/vsds.yml @@ -159,6 +159,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 2 @@ -311,6 +330,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 3 @@ -463,5 +501,24 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + diff --git a/examples/vcenter_datacenter_install/vstats.yml b/examples/vcenter_datacenter_install/vstats.yml index a913bc537c..bfd362891f 100644 --- a/examples/vcenter_datacenter_install/vstats.yml +++ b/examples/vcenter_datacenter_install/vstats.yml @@ -175,6 +175,25 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. 
Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + # # VSTAT 2 @@ -343,5 +362,24 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + diff --git a/examples/vcenter_sdwan_install/common.yml b/examples/vcenter_sdwan_install/common.yml index 6fe2344ee1..653dc20ee0 100644 --- a/examples/vcenter_sdwan_install/common.yml +++ b/examples/vcenter_sdwan_install/common.yml @@ -124,7 +124,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_run_cluster_rtt_test: False # < Ignore RTT Test Errors > -# When true, do not validate the RTT between VSDs in a cluster is less than max RTT +# When true, continue MetroAE execution upon error and do not validate the RTT between VSDs in a cluster is less than max RTT, else stop MetroAE execution upon error # # vsd_ignore_errors_rtt_test: False @@ -159,7 +159,7 @@ dns_server_list: [ "10.1.0.2", ] # vsd_disk_performance_test_max_time: 300 # < Ignore Disk Performance Test Errors > -# When true, ignore the results of the VSD disk performance test +# When true, continue MetroAE execution upon error and ignore the results of the VSD disk performance test, else stop MetroAE execution upon error # # vsd_ignore_disk_performance_test_errors: False diff --git a/examples/vcenter_sdwan_install/credentials.yml b/examples/vcenter_sdwan_install/credentials.yml index 72444e9edd..5b40ce17c2 100644 --- a/examples/vcenter_sdwan_install/credentials.yml +++ b/examples/vcenter_sdwan_install/credentials.yml @@ -34,22 +34,22 @@ ##### VSD/VSC credentials # < VSD System Username > - # Username to be used while logging into VSD command line. + # VSD Username will be used for logging into VSD command line. Used for both Install and Upgrade procedures. # # vsd_custom_username: root # < VSD System Password > - # Password for the VSD user used to login to the command line + # VSD password will be used for logging into the command line. Used for both Install and Upgrade procedures. # # vsd_custom_password: Alcateldc # < VSC System Username > - # VSC username to login into command line. Should have admin privileges. + # VSC Username will be used for logging into command line (should have admin privileges). 
Used for upgrade procedure only # # vsc_custom_username: "" # < VSC System Password > - # VSC password to login into command line + # VSC password will be used for logging into the command line. Used for upgrade procedure only # # vsc_custom_password: "" @@ -58,17 +58,17 @@ ##### VSTAT Credentials # < ElasticSearch (Stats) System Username > - # ElasticSearch (Stats) username to login into command line + # ElasticSearch (Stats) Username will be used for logging into command line. Used for both Install and Upgrade procedures. # # vstat_custom_username: "" # < ElasticSearch (Stats) System Password > - # ElasticSearch (Stats) password to login into command line + # ElasticSearch (Stats) password will be used for logging into the command line. Used for both Install and Upgrade procedures. # # vstat_custom_password: "" # < ElasticSearch (Stats) Custom System Password For Root User > - # ElasticSearch (Stats) root password required for Stats Upgrade + # ElasticSearch (Stats) root password required for VSTAT Upgrade only # # vstat_custom_root_password: "" @@ -77,17 +77,17 @@ ##### VSD core services # < VSD API/Architect username > - # Username for API authentication. Must have csproot privileges. Also known as csproot user + # This VSD Username(also known as csproot user). Used for both Install and Upgrade procedures. Must have csproot privileges. # # vsd_auth_username: csproot # < VSD API/Architect Password > - # Password for API authentication. Must have csproot privileges. Also known as csproot password + # This VSD password(also known as csproot password) will be used for API authentication. Used for both Install and Upgrade procedures. Must have csproot privileges. # # vsd_auth_password: csproot # < mysql Password > - # Mysql password for vsd + # This VSD Mysql password. Used for both Install and Upgrade procedures. # # vsd_mysql_password: "" @@ -150,7 +150,7 @@ ##### vcenter # < vCenter Username > - # vCenter Username + # vCenter Username. # vcenter_username: admin @@ -164,12 +164,12 @@ ##### Compute and Proxy # < Compute Username > - # Username for Compute node to install VRS + # Username for Compute node to install VRS. # # compute_username: root # < Compute Password > - # Password for Compute node + # Password for Compute node, and will be used for installation of VRS # # compute_password: "" @@ -204,7 +204,7 @@ ############ - ##### SMTP + ##### SMTP and NFS # < SMTP username > # Username for SMTP authentication @@ -216,27 +216,32 @@ # # smtp_auth_password: "" - ########## + # < NFS System Username > + # NFS username to login into command line, and will be used for NFS configuration. Default user is root. + # + # nfs_custom_username: root + + ################## ##### NETCONF Manager # < NETCONF Manager VM username > - # Username for NETCONF Manager VM. Default user is root + # Username for NETCONF Manager VM, and will be used for the installation of NETCONF Manager. Default user is root. # # netconf_vm_username: root - # < NETCONF Manager VM password for running sudo commands > + # < NETCONF Manager VM password for running sudo commands, and will be used for the installation of NETCONF Manager. > # Password for NETCONF manager VM # # netconf_vm_password: "" # < NETCONF Manager username > - # Username for NETCONF Manager user + # Username for NETCONF Manager user, and will be used for the installation of NETCONF Manager. 
# # netconf_username: netconfmgr # < NETCONF Manager password > - # Password for NETCONF manager user + # Password for NETCONF manager user, and will be used for the installation of NETCONF Manager. # # netconf_password: password @@ -245,12 +250,12 @@ ##### Health Report Email Server # < Health Report SMTP Server Username > - # Username for SMTP Server + # Username for SMTP Server, and will be used for Email health report. # # health_report_email_server_username: "" # < Health Report SMTP Server Password > - # Password for SMTP Server + # Password for SMTP Server, and will be used for Email health report. # # health_report_email_server_password: "" @@ -259,7 +264,7 @@ ##### Monit Alerts Email Server # < VSD Monit Mail Server Username > - # Username for the monit mail server + # Username for the monit mail server. # # vsd_monit_mail_server_username: "" @@ -273,22 +278,22 @@ ##### NUH notification application # < NUH notification application 1 username > - # Username for NUH notification application + # Username for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_1_username: "" # < NUH notification application 1 password > - # Password for NUH notification application + # Password for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_1_password: "" # < NUH notification application 2 username > - # Username for NUH notification application + # Username for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_2_username: "" # < NUH notification application 2 password > - # Password for NUH notification application + # Password for NUH notification application, and will be used for installation of NUH. # # nuh_notification_app_2_password: "" diff --git a/examples/vcenter_sdwan_install/nsgvs.yml b/examples/vcenter_sdwan_install/nsgvs.yml index 57e5c43afd..0c10ea155a 100644 --- a/examples/vcenter_sdwan_install/nsgvs.yml +++ b/examples/vcenter_sdwan_install/nsgvs.yml @@ -192,6 +192,11 @@ # # network_port_physical_name: port1 + # < NSG Network Port VLAN list > + # VLAN name list of the network port for the NSG + # + # network_port_vlans: [] + # < NSG Access Port Name > # Name of the access port for the NSG. Deprecated in favor of access_ports # diff --git a/examples/vcenter_sdwan_install/vsds.yml b/examples/vcenter_sdwan_install/vsds.yml index 7f2a022e39..dafba711ad 100644 --- a/examples/vcenter_sdwan_install/vsds.yml +++ b/examples/vcenter_sdwan_install/vsds.yml @@ -159,6 +159,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 2 @@ -311,6 +330,25 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. 
Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + # # VSD 3 @@ -463,5 +501,24 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_ram: (global VSD RAM) + + # < VSD CPU cores > + # Number of CPU's for VSD. + # + # vsd_cpu_cores: (global VSD CPU Cores) + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vsd_fallocate_size_gb: (global VSD CPU Cores) + + ###################################### + diff --git a/examples/vcenter_sdwan_install/vstats.yml b/examples/vcenter_sdwan_install/vstats.yml index a913bc537c..bfd362891f 100644 --- a/examples/vcenter_sdwan_install/vstats.yml +++ b/examples/vcenter_sdwan_install/vstats.yml @@ -175,6 +175,25 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + # # VSTAT 2 @@ -343,5 +362,24 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + # vstat_ram: (global VSTAT RAM) + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + # vstat_cpu_cores: (global VSTAT CPU) + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. 
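The network_port_vlans parameter added to nsgvs.yml above takes a list of VLAN names for the NSG network port. A one-line sketch next to the existing port name; the VLAN names are invented for illustration:

    # nsgvs.yml sketch (VLAN names are examples only)
    network_port_physical_name: port1
    network_port_vlans: [ "vlan-mgmt", "vlan-uplink" ]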
+ # + # vstat_allocate_size_gb: (global VSTAT DISK) + + ######################################## + diff --git a/generate_example_from_schema.py b/generate_example_from_schema.py index c88f4d1505..9781f8e0b9 100755 --- a/generate_example_from_schema.py +++ b/generate_example_from_schema.py @@ -270,7 +270,7 @@ def main(): default=False, help="Generates as a jinja2 template") parser.add_argument( "--as-example", dest="as_example", action='store_true', default=False, - help="Generates a example schema using example data") + help="Generates an example schema using example data") parser.add_argument( "--example_data_folder", help="Location of example data folder") args = parser.parse_args() diff --git a/metroae b/metroae index 7eecdfdf4d..c190d69704 100755 --- a/metroae +++ b/metroae @@ -1,28 +1,16 @@ #!/usr/bin/env bash set -e -METROAE_VERSION="4.6.1" -SCRIPT_VERSION="1.5.1" +METROAE_VERSION="5.0.0" +SCRIPT_VERSION="1.5.2" MENU=() ################################################################################# # CONTAINER COMMANDS # ################################################################################# -MENU+=(',container' 'Manage the MetroAE container' 'container' '' '') -MENU+=(',container,pull' 'Pull a new MetroAE image from the registry' 'container' 'pull' ',container,pull') -MENU+=(',container,setup' 'Setup the MetroAE container' 'container' 'setup' ',container,setup') -MENU+=(',container,start' 'Start the MetroAE container' 'container' 'start' ',container,start') -MENU+=(',container,stop' 'Stop the MetroAE container' 'container' 'stop' ',container,stop') -MENU+=(',container,status' 'Display the status of the MetroAE container' 'container' 'status' ',container,status') -MENU+=(',container,destroy' 'Destroy the MetroAE container' 'container' 'destroy' ',container,destroy') -MENU+=(',container,update' 'Update the MetroAE container to the latest version' 'container' 'upgrade-engine' ',container,update') -MENU+=(',container,download' 'Download the container in tar format' 'tools' 'download' ',container,download') -MENU+=(',container,download,image' 'Download the container in tar format' 'tools' 'download' ',container,download') -MENU+=(',container,download,templates' 'Download the MetroAE Config templates and VSD specifications in tar format' 'tools' 'download-templates' ',container,download,templates') -MENU+=(',container,load' 'Load container onto host from downloaded tar file' 'container' 'load' ',container,download') -MENU+=(',container,load,image' 'Load container onto host from downloaded tar file' 'container' 'load' ',container,download') -MENU+=(',container,load,templates' 'Load MetroAE Config templates and VSD specifications onto host from downloaded tar file' 'container' 'load-templates' ',container,load,templates') +MENU+=(',container' 'Manage the Old MetroAE container' 'container' '' '') +MENU+=(',container,destroy' 'Destroy the Old MetroAE container' 'container' 'destroy' ',container,destroy') ################################################################################# # VARIABLES # @@ -63,10 +51,6 @@ CONTAINER_TAR_FILE=metroaecontainer.tar if [[ ! 
-z $IMAGE_NAME ]]; then CONTAINER_TAR_FILE="$IMAGE_NAME.tar" fi -DOCKER_REGISTRY_BASE='https://registry-1.docker.io' -DOCKER_AUTH_BASE='https://auth.docker.io' -DOCKER_AUTH_SERVICE='registry.docker.io' -DOCKER_DOWNLOAD_WORK_DIR="metroaecontainer-download" CONTAINER_HOME=/source UUID_FILE=.uuid if [[ -z $UUID ]]; then @@ -118,10 +102,6 @@ VSD_SPECIFICATIONS_LOCATION="https://vsd-api-specifications.s3.us-east-2.amazona TEMPLATE_DIR="standard-templates" SPECIFICATION_DIR="vsd-api-specifications" -function read_setup_files { - while read -r line; do declare -g $line; done < $SETUP_FILE -} - function print_write_permission_warning { debug ${FUNCNAME[0]} echo "" @@ -377,47 +357,6 @@ function check_for_prerequisite { write_to_screen_and_script_log "It looks like you are trying to run using a clone of MetroAE, but we couldn't find the menu file in the workspace. Please update your workspace and try again." exit 1 fi - else # Must be CONTAINER_RUN_MODE - check_for_user_group "$@" - get_running_container_id - if [[ -f $SETUP_FILE ]]; then - debug "${FUNCNAME[0]}: Found $SETUP_FILE" - if [[ -z $RUNNING_CONTAINER_ID ]]; then - debug "${FUNCNAME[0]}: No container found" - get_container_id - if [[ -z $CONTAINER_ID ]]; then - echo "" - write_to_screen_and_script_log "It looks like there is a MetroAE container setup file, but we couldn't find the container." - write_to_screen_and_script_log "Please run 'metroae container setup' to fix this and try again." - echo "" - exit 1 - else - debug "${FUNCNAME[0]}: Container found" - echo "" - write_to_screen_and_script_log ">>> Starting the stopped container" - echo "" - docker start $CONTAINER_ID - fi - fi - if [[ ! -f $METROAE_MOUNT_POINT/$PLAYBOOK_MENU ]]; then - write_to_screen_and_script_log "It looks like you are trying to run the MetroAE container, but we couldn't find the container's menu file in the data directory." - write_to_screen_and_script_log "Please run 'metroae container setup' to fix this and try again." - exit 1 - fi - else - debug "${FUNCNAME[0]}: Not found $SETUP_FILE" - if [[ ! -z $RUNNING_CONTAINER_ID ]]; then - write_to_screen_and_script_log "It looks like the MetroAE container is running, but we couldn't find the container's setup file." - write_to_screen_and_script_log "Please run 'metroae container setup' to fix this and try again." - exit 1 - else - echo "" - write_to_screen_and_script_log "It looks like the MetroAE container has not been setup." - write_to_screen_and_script_log "Please run 'metroae container setup' to fix this and try again." - echo "" - exit 1 - fi - fi fi } @@ -441,46 +380,6 @@ function print_version_and_exit { # COMMON # ################################################################################# -function check_docker { - debug ${FUNCNAME[0]} - set +e - - debug "${FUNCNAME[0]}: Checking docker version" - if [[ -f $SCRIPT_LOG_FILE ]]; then - if [ -w "$SCRIPT_LOG_FILE" ]; then - docker --version >> $SCRIPT_LOG_FILE 2>> $SCRIPT_LOG_FILE - else - if [ -z $METROAE_WRITE_WARNING_PRINTED ]; then - print_write_permission_warning - METROAE_WRITE_WARNING_PRINTED=1 - fi - fi - else - docker --version >> /dev/null 2>> /dev/null - fi - - if [[ $? -ne 0 ]]; then - write_to_screen_and_script_log "Docker engine must be installed in order to run MetroAE. Quitting. Please install Docker and try again. 
See https://docs.docker.com for details" - print_version_and_exit 1 - else - DOCKER_VERSION=$(docker --version 2>> /dev/null) - debug "${FUNCNAME[0]}: Docker version = $DOCKER_VERSION" - fi - set -e -} - -function get_host_operating_system { - debug ${FUNCNAME[0]} - set +e - stat /etc/os-release >> /dev/null 2>> /dev/null - if [ $? -ne 0 ] - then - OS_RELEASE=$NON_LINUX - fi - debug "${FUNCNAME[0]}: OS_RELEASE: $OS_RELEASE" - set -e -} - function get_max_container_version { debug ${FUNCNAME[0]} if [[ ! -z $IMAGE_TAG ]]; then @@ -559,6 +458,11 @@ function stop_running_container { return $status } +function docker_exec_config { + debug ${FUNCNAME[0]} + docker_exec env /usr/bin/python3 $CONTAINER_HOME/nuage-metroae-config/metroae_config.py "$@" +} + function delete_container_id { debug ${FUNCNAME[0]} echo "" @@ -675,2138 +579,224 @@ function destroy { } -function pull { - debug ${FUNCNAME[0]} - if [[ ! -z $1 ]]; then - MAX_CONTAINER_VERSION=$1 - fi - debug "${FUNCNAME[0]}: MAX_CONTAINER_VERSION: $MAX_CONTAINER_VERSION" - - set -e - - echo "" - write_to_screen_and_script_log ">>> Pulling the MetroAE container image from Docker hub" - echo "" - if [[ -f $SCRIPT_LOG_FILE ]]; then - if [ -w "$SCRIPT_LOG_FILE" ]; then - debug "${FUNCNAME[0]}: Script log is writable" - sudo docker pull $METRO_AE_IMAGE:$MAX_CONTAINER_VERSION | tee -a $SCRIPT_LOG_FILE - else - if [ -z $METROAE_WRITE_WARNING_PRINTED ]; then - print_write_permission_warning - METROAE_WRITE_WARNING_PRINTED=1 - fi - debug "${FUNCNAME[0]}: Script log is not writable" - sudo docker pull $METRO_AE_IMAGE:$MAX_CONTAINER_VERSION - fi - else - debug "${FUNCNAME[0]}: Script log is not found" - sudo docker pull $METRO_AE_IMAGE:$MAX_CONTAINER_VERSION - fi - - status=$? - if [[ $status -ne 0 ]]; then - write_to_screen_and_script_log "Attempt to pull the $MAX_CONTAINER_VERSION MetroAE container image failed. Quitting." - exit 1 - else - write_to_screen_and_script_log "Successfully pulled the MetroAE container image from Docker hub" - fi - set -e - - return $status -} +################################################################################# +# DEPLOYMENT # +################################################################################# -function run_container { +function list_workflows { debug ${FUNCNAME[0]} - - get_image_id - - if [[ -z $IMAGE_ID ]]; then - set +e - write_to_screen_and_script_log "We didn't find the MetroAE container image. We are pulling a new container from the repo." - pull - status=$? - set -e - if [ $status -ne 0 ] - then - write_to_screen_and_script_log "Attempt to pull the MetroAE container failed. Quitting." - return $status - fi - else - get_container_id - - set +e - echo "" - write_to_screen_and_script_log ">>> Starting the MetroAE container" - echo "" - if [[ -z $CONTAINER_ID ]]; then - network_args="" - get_host_operating_system - if [[ $OS_RELEASE -eq $LINUX ]]; then - network_args="--network host" - else - network_args="-p $UI_PORT:5001" - fi - - mount_args="-v $METROAE_MOUNT_POINT:/metroae_data:Z" - - if [[ ! -z $IMAGES_MOUNT_POINT ]]; then - mount_args="$mount_args -v $IMAGES_MOUNT_POINT:/images:Z" - fi - - if [[ $METROAE_SETUP_TYPE == $CONFIG_SETUP_TYPE ]]; then - levi_args="-e LEVISTATE_CONTAINER=1" - else - levi_args="" - fi - - user_name=`id -u $(whoami)` - group_name=`id -g $(whoami)` - - if [[ ! 
-z $IMAGE_TAG ]]; then - MAX_CONTAINER_VERSION=$IMAGE_TAG - fi - - DOCKER_COMMAND=$(cat <<-END - docker run \ - -e RUN_MODE=$INSIDE_RUN_MODE \ - -e USER_NAME=$user_name \ - -e GROUP_NAME=$group_name \ - $levi_args \ - -t -d \ - $network_args \ - $mount_args \ - --name metroae $METRO_AE_IMAGE:$MAX_CONTAINER_VERSION - END - ) - debug "${FUNCNAME[0]}: DOCKER_COMMAND: $DOCKER_COMMAND" - eval $DOCKER_COMMAND - else - debug "${FUNCNAME[0]}: docker start $CONTAINER_ID" - docker start $CONTAINER_ID + for file in $PLAYBOOK_DIR/*.yml + do + if [[ -f $file ]]; then + filename=$(basename "$file") + filename="${filename%.*}" + echo $filename fi - - status=$? - if [[ $status -ne 0 ]]; then - echo "" - write_to_screen_and_script_log "Attempt to run the MetroAE container failed. Quitting." - print_version_and_exit 1 - else - echo "" - write_to_screen_and_script_log "MetroAE container started successfully" + done + for file in $PLAYBOOK_WITH_BUILD_DIR/*.yml + do + if [[ -f $file ]]; then + filename=$(basename "$file") + filename="${filename%.*}" + echo $filename fi - set -e - return $status - fi - - return 0 + done } -function setup_container { +function check_password_needed { debug ${FUNCNAME[0]} - local is_ui_run=false - get_host_operating_system - - if [[ ! -d $INSTALL_FOLDER ]]; then - sudo mkdir -p $INSTALL_FOLDER - fi - sudo chown -R root:docker $INSTALL_FOLDER - sudo chmod -R 775 $INSTALL_FOLDER - if [[ ! -f $SETUP_FILE ]]; then - sudo touch $SETUP_FILE - if [[ $OS_RELEASE -eq $NON_LINUX ]]; then - debug "${FUNCNAME[0]}: sudo chmod 777 $SETUP_FILE" - sudo chmod 777 $SETUP_FILE - else - debug "${FUNCNAME[0]}: sudo chmod 0774 $SETUP_FILE" - sudo chmod 0774 $SETUP_FILE - fi - fi - sudo chown root:docker $SETUP_FILE - if [[ ! -f $SCRIPT_LOG_FILE ]]; then - sudo touch $SCRIPT_LOG_FILE - if [[ $OS_RELEASE -eq $NON_LINUX ]]; then - sudo chmod 777 $SCRIPT_LOG_FILE - else - sudo chmod 0774 $SCRIPT_LOG_FILE - fi - fi - sudo chown root:docker $SCRIPT_LOG_FILE - - get_container_id - get_running_container_id - - if [[ ! -z $CONTAINER_ID ]] || [[ ! -z $RUNNING_CONTAINER_ID ]]; then - echo "" - write_to_screen_and_script_log "It looks like the MetroAE container is already setup." - echo "If you want to update to the latest MetroAE container version, please quit setup" - echo "and then run 'metroae container update' to get the latest container while keeping" - echo "your current setup and data. Your data on disk will be preserved." - echo "" - echo "If you want to run setup, the existing MetroAE container will not be replaced." - echo "We will stop the MetroAE container, reconfigure it according to your inputs," - echo "and restart the existing MetroAE container with the new configuration." - echo "" - echo "Note that if you want to re-use your existing deployments on disk after" - echo "running setup, you must either specify the same directories when prompted" - echo "for input or you must manually copy the files from the existing data locations" - echo "on disk to the new locations." - echo "" - - declare -l continue_confirm - if [[ -z $1 ]]; then - continue_confirm="init" - while [[ $continue_confirm != "y" ]] && [[ $continue_confirm != "n" ]] && [[ $continue_confirm != "" ]] - do - read -p "Do you want to continue with setup? 
(y/N): " continue_confirm - done - else - continue_confirm=$1 - shift - fi - - if [[ $continue_confirm != "y" ]] ; then - echo "" - write_to_screen_and_script_log ">>> Setup canceled" - echo "" - return 0 - fi - fi - - write_to_screen_and_script_log ">>> Setup MetroAE container" + deployment_dir="$1" - set +e + encrypted_files=$(grep -Ril $ENCRYPTED_TOKEN $deployment_dir) - images_on_disk=`docker images 2>/dev/null | grep $METRO_AE_IMAGE | awk '{ print $2 }'` - num_images=0 - for image in $images_on_disk - do - if [[ $image != "" ]]; then - num_images=$((num_images+1)) - fi - done - if [[ $num_images -gt 1 ]]; then - if [[ -z $3 ]]; then - echo "You can set up a container with one of the images currently" - echo "pulled on the host machine" - echo "" - echo $images_on_disk - read -p "Which image would you like to use for container setup? Please specify the image tag: " IMAGE_TAG_SELECTION - if [[ ! -z $IMAGE_TAG_SELECTION ]]; then - IMAGE_TAG=$IMAGE_TAG_SELECTION - fi - IMAGE_ID=`docker images 2>/dev/null | grep $METRO_AE_IMAGE | grep $IMAGE_TAG | awk '{ print $3 }'` - else - for image in $images_on_disk - do - if [[ $image == $3 ]]; then - IMAGE_TAG=$image - fi - done - if [[ -z $IMAGE_TAG ]]; then - echo "" - write_to_screen_and_script_log "The image you would like to set up does not exist on the host machine." - echo "Pulling the requested image..." - pull $3 - if [[ $? -ne 0 ]]; then - set -e - return 1 - fi - IMAGE_TAG=$3 - IMAGE_ID=`docker images 2>/dev/null | grep $METRO_AE_IMAGE | grep $IMAGE_TAG | awk '{ print $3 }'` - else - IMAGE_ID=`docker images 2>/dev/null | grep $METRO_AE_IMAGE | grep $IMAGE_TAG | awk '{ print $3 }'` - fi - fi - elif [[ $num_images -eq 1 ]]; then - if [[ -z $3 ]]; then - for image in $images_on_disk - do - if [[ $image != "" ]]; then - IMAGE_TAG=$image - fi - done - IMAGE_ID=`docker images 2>/dev/null | grep $METRO_AE_IMAGE | grep $IMAGE_TAG | awk '{ print $3 }'` - else - if [[ $3 == $images_on_disk ]]; then - IMAGE_TAG=$3 - IMAGE_ID=`docker images 2>/dev/null | grep $METRO_AE_IMAGE | grep $IMAGE_TAG | awk '{ print $3 }'` - else - echo "" - write_to_screen_and_script_log "The image you have specified does not match the image on the host machine." - echo "Pulling the requested image..." - pull $3 - if [[ $? -ne 0 ]]; then - set -e - return 1 - fi - IMAGE_TAG=$3 - IMAGE_ID=`docker images 2>/dev/null | grep $METRO_AE_IMAGE | grep $IMAGE_TAG | awk '{ print $3 }'` - fi - fi - else - if [[ ! -z $3 ]]; then - echo "" - write_to_screen_and_script_log "There are no images on the host machine, but an image has been specified for setup." - echo "Pulling the requested image..." - echo "" - pull $3 - if [[ $? -ne 0 ]]; then - set -e - return 1 - fi - IMAGE_TAG=$3 - IMAGE_ID=`docker images 2>/dev/null | grep $METRO_AE_IMAGE | grep $IMAGE_TAG | awk '{ print $3 }'` + if [[ -z $METROAE_PASSWORD ]]; then + if [[ -z "$encrypted_files" ]]; then + SKIP_PASSWORD=1 else - get_image_id - if [[ -z $IMAGE_ID ]]; then - echo "" - write_to_screen_and_script_log ">>> Pulling the MetroAE container image from the repository" - pull - fi - - if [[ $? -ne 0 ]]; then - set -e - return 1 - fi - fi - fi - set -e - - if [[ -z $1 ]]; then - echo "" - write_to_screen_and_script_log "Data directory configuration" - echo "" - echo "The MetroAE container needs access to your user data. It gets access by internally" - echo "mounting a directory from the host. We refer to this as the 'data directory'." 
- echo "The data directory is where you will have deployments, templates, documentation," - echo "and other useful files." - echo "" - echo "Please specify the full path to the data directory on the Docker host. Setup will" - echo "make sure that the path ends with 'metroae_data'. If the path you specify does" - echo "not end with 'metroae_data', setup will create it." - get_valid_data_dir - - create_data_directory $SETUP_USER_DATA_PATH - - echo "" - write_to_screen_and_script_log "Data directory path set to: $SETUP_USER_DATA_PATH" - echo "" - else - echo "" - valid_path=1 - validate_directory $1 - if [[ $valid_path -ne 0 ]]; then - exit 1 + SKIP_PASSWORD=0 fi - SETUP_USER_DATA_PATH=$1 - shift - write_to_screen_and_script_log ">>> Setting data directory path to passed in parameter $SETUP_USER_DATA_PATH" - fi - - METROAE_MOUNT_POINT=$SETUP_USER_DATA_PATH - if [ -w "$SETUP_FILE" ]; then - sed -i '/^METROAE_MOUNT_POINT/d' $SETUP_FILE - echo METROAE_MOUNT_POINT=$METROAE_MOUNT_POINT >> $SETUP_FILE - debug "${FUNCNAME[0]}: METROAE_MOUNT_POINT=$METROAE_MOUNT_POINT" - else - echo "" - write_to_screen_and_script_log "We couldn't write data directory name to the MetroAE setup file ($SETUP_FILE)." - echo "Please ensure that the file has user:group ownership set to 'root:docker' and try again." - echo "You can also delete the file and re-run 'metroae container setup'. Quitting." - exit 1 - fi - - declare -l setup_type - setup_type="init" - if [[ -z $1 ]]; then - while [[ $setup_type != $CONFIG_SETUP_TYPE ]] && [[ $setup_type != $DEPLOY_SETUP_TYPE ]] && [[ $setup_type != $BOTH_SETUP_TYPE ]] && [[ $setup_type != "" ]] - do - echo "" - write_to_screen_and_script_log "Setup can configure the container to support MetroAE (c)onfig, MetroAE (d)eploy," - echo "or (b)oth MetroAE config and deploy. MetroAE config is used for day-zero VSD" - echo "configuration tasks. MetroAE deploy is used for installing, upgrading, and" - echo "health checking of Nuage VSP components in your environment." - echo "" - read -p "Do you want to setup MetroAE config, deploy, or both? (c/d/B): " setup_type - done - else - echo "" - setup_type=$1 - shift - write_to_screen_and_script_log ">>> Setting setup type to passed in parameter $setup_type" - fi - - if [[ $setup_type == $CONFIG_SETUP_TYPE ]]; then - echo "" - write_to_screen_and_script_log ">>> Setup container for MetroAE config" - elif [[ $setup_type == $DEPLOY_SETUP_TYPE ]]; then - echo "" - write_to_screen_and_script_log ">>> Setup container for MetroAE deploy" - else - echo "" - write_to_screen_and_script_log ">>> Setup container for both MetroAE config and deploy" - setup_type=$BOTH_SETUP_TYPE - fi - - if [ -w "$SETUP_FILE" ]; then - sed -i '/^METROAE_SETUP_TYPE/d' $SETUP_FILE - echo "METROAE_SETUP_TYPE=$setup_type" >> $SETUP_FILE - debug "${FUNCNAME[0]}: METROAE_SETUP_TYPE=$setup_type" - else - echo "" - write_to_screen_and_script_log "We couldn't write setup type to the MetroAE setup file ($SETUP_FILE). Please ensure" - echo "that the file has user:group ownership set to 'root:docker' and try again." - echo "You can also delete the file and re-run 'metroae container setup'. Quitting." - exit 1 - fi - - # stop and remove existing container if any - get_running_container_id - if [[ ! -z $RUNNING_CONTAINER_ID ]]; then - stop_running_container - fi - - get_container_id - if [[ ! -z $CONTAINER_ID ]]; then - delete_container_id - fi - - echo "" - write_to_screen_and_script_log ">>> Prepare data directory for updates from new container." 
- echo "" - rm -rf $METROAE_MOUNT_POINT/version >> /dev/null 2>> /dev/null - - run_container - update_container_files - - if [[ -z $NOT_INTERACTIVE ]]; then - update_metroae - fi - - setup_tab_completion - - set -e - - generate_and_save_UUID - - if [[ $config_template_status -ne 0 || $config_docs_status -ne 0 || $deploy_keys_status -ne 0 || $deploy_status -ne 0 || $common_status -ne 0 ]]; then - echo "" - write_to_screen_and_script_log "Setup of the MetroAE container is complete. However, we had a problem copying one or more files to the data directory." - echo "The MetroAE container is running. You can use it, but you may not have the latest files on disk." - echo "You can try running 'metroae container setup' again to correct this problem." - echo "" - exit 1 else - echo "" - write_to_screen_and_script_log "Setup of the MetroAE container is complete. Execute 'metroae container status' for status." - echo "" - return 0 + SKIP_PASSWORD=1 fi } -function update_container_files { +function ask_password { debug ${FUNCNAME[0]} - - set +e - common_status=0 - echo "" - write_to_screen_and_script_log ">>> Updating common files and documentation for MetroAE in the container" - echo "" - docker_exec_copy_common_defaults - common_status=$? - - deploy_status=0 - if [[ $setup_type != "c" ]]; then - echo "" - write_to_screen_and_script_log ">>> Updating with the latest documentation and examples for MetroAE deploy in the container" - echo "" - docker_exec_copy_deploy_defaults - deploy_status=$? - fi - - deploy_keys_status=0 - echo "" - write_to_screen_and_script_log ">>> Generating and copying ssh keys for the container" - echo "" - docker_exec_generate_and_copy_keys - deploy_keys_status=$? - - config_template_status=0 - if [[ $setup_type != "d" ]]; then - echo "" - write_to_screen_and_script_log ">>> Updating with the latest feature templates and files for MetroAE Config in the container" - echo "" - docker_exec_config templates update - config_template_status=$? - - if [[ $config_template_status -eq 9 ]]; then - echo "" - read -p "Would you like to continue with MetroAE container setup? (y/n) " CONTINUE_SETUP - if [[ $CONTINUE_SETUP == "n" ]]; then - echo "" - write_to_screen_and_script_log "Quitting MetroAE container setup..." - echo "You can run 'metroae container setup' at any time to complete the setup process" - exit 1 - else - echo "" - write_to_screen_and_script_log "Continuing with MetroAE container setup..." - config_template_status=0 - fi - fi - - fi - - config_docs_status=0 - if [[ $setup_type != "d" ]]; then - echo "" - write_to_screen_and_script_log ">>> Updating with the documentation for MetroAE Config in the container" - echo "" - docker_exec_copy_config_defaults - config_docs_status=$? - fi -} - -function validate_directory { - debug ${FUNCNAME[0]} - - set +e - if [[ -w $SCRIPT_LOG_FILE ]]; then - stat $1 >> $SCRIPT_LOG_FILE 2>> $SCRIPT_LOG_FILE - else - stat $1 - fi - - valid_path=$? - if [[ $valid_path -ne 0 ]]; then - write_to_screen_and_script_log "We had a problem validating the path you entered: $1. Please try again." 
- fi -} - -function create_data_directory { - debug ${FUNCNAME[0]} - - if [[ $1 != *metroae_data ]]; then - SETUP_USER_DATA_PATH=${SETUP_USER_DATA_PATH%/}/metroae_data - fi - mkdir -p $SETUP_USER_DATA_PATH -} - -function get_valid_data_dir { - debug ${FUNCNAME[0]} - valid_path=1 - set +e - - while [[ $valid_path -ne 0 ]] - do - echo "" - read -p "Data directory path: " SETUP_USER_DATA_PATH - echo "" - write_to_screen_and_script_log "Checking path: $1" - validate_directory $SETUP_USER_DATA_PATH - done - set -e -} - -function update_container { - debug ${FUNCNAME[0]} - declare -l confirmation - if [[ -z $2 ]]; then - confirmation="init" - echo "" - write_to_screen_and_script_log "You are about to update your MetroAE container to the latest version." - echo "We will destroy the existing container, pull the latest version of the container," - echo "and start the container using the existing configuration." - echo "Your data on disk will be preserved." - while [[ $confirmation != "n" ]] && [[ $confirmation != "y" ]] && [[ $confirmation != "" ]] - do - echo "" - read -p "Do you really want to update the MetroAE container? (y/N): " confirmation - done - else - confirmation=$2 - fi - - if [[ $confirmation != "y" ]]; then - echo "" - write_to_screen_and_script_log "Update of metroae container was canceled" - echo "" - print_version_and_exit 0 - fi - - set +e - destroy y - if [[ $? -ne 0 ]]; then - return 1 - fi - set -e - if [[ ! -z $1 ]]; then - pull $1 - else - pull - fi - run_container - update_container_files - - if [[ -z $NOT_INTERACTIVE ]]; then - update_metroae - fi - - read_setup_files - if [ -d $METROAE_MOUNT_POINT/metro_plugins/ ]; then - for plugin_name in `ls $METROAE_MOUNT_POINT/metro_plugins/`; do - if [ -d "$METROAE_MOUNT_POINT/metro_plugins/${plugin_name}" ]; then - docker_metro_ae_exec plugin install "/metroae_data/metro_plugins/${plugin_name}" - fi - done - fi -} - -function check_if_mount_point_configured { - debug ${FUNCNAME[0]} - if [[ -z "$METROAE_MOUNT_POINT" ]]; then - echo "" - write_to_screen_and_script_log "We couldn't find the data directory configuration for the container in the configuration file." - echo "Please run 'metroae container setup' to correct this and try again." - echo "" - return 1 - fi -} - -function update_metroae { - debug ${FUNCNAME[0]} - declare -l confirmation - if [[ $RUN_MODE == "CONTAINER" ]]; then - if [[ -z $2 ]]; then - confirmation="init" - while [[ $confirmation != "n" ]] && [[ $confirmation != "y" ]] && [[ $confirmation != "" ]] - do - echo "" - echo "We recommend that you update your metroae script to the same version as the container you are loading." - read -p "Would you like us to update the metroae script for you? (Y/n): " confirmation - done - else - confirmation=$2 - fi - - if [[ $confirmation == "n" ]]; then - echo "" - write_to_screen_and_script_log "Update of metroae Script was canceled" - echo "" - return 0 - fi - echo "" - write_to_screen_and_script_log ">>> Updating your metroae script" - echo "" - - if [[ -f $SETUP_FILE ]]; then - while read -r line; do declare $line; done < $SETUP_FILE - fi - - check_if_mount_point_configured - - echo "" - write_to_screen_and_script_log ">>> Wait for the container to copy a new metroae file to the data directory" - echo "" - set +e - timeout 10 bash -c -- "while [[ ! -f $METROAE_MOUNT_POINT/metroae ]]; do sleep 1; done" - TEMP_RC=$? - debug $TEMP_RC - if [[ $TEMP_RC -ne 0 ]]; then - write_to_screen_and_script_log "We didn't find $METROAE_MOUNT_POINT/metroae." 
- echo "It could be that the version of metroae script you are running is the latest." - echo "Your copy of the metroae script will not be updated. You can continue to use your copy." - echo "" - set -e - return 0 - fi - set +e - cp $METROAE_MOUNT_POINT/metroae "${BASH_SOURCE[0]}" - - if [[ $? -ne 0 ]]; then - write_to_screen_and_script_log "We encountered an error copying $METROAE_MOUNT_POINT/metroae." - echo "Your copy of the metroae script will not be updated. You can continue to use your copy," - echo " but we suggest that you update your metroae script from the Github repo before continuing." - echo "" - set -e - return 0 - fi - set -e - write_to_screen_and_script_log "Successfully updated your metroae script." - echo "" - fi -} - -function setup_tab_completion { - debug ${FUNCNAME[0]} - set +e - echo "" - write_to_screen_and_script_log ">>> Setting up optional tab completion for your metroae script." - echo "" - - check_if_mount_point_configured - - if [[ ! -d $COMPLETION_DIR ]]; then - debug "${FUNCNAME[0]}: Creating tab-completion dir: $COMPLETION_DIR" - set +e - mkdir $COMPLETION_DIR - if [[ $? -ne 0 ]]; then - write_to_screen_and_script_log "We encountered an error creating the tab-completion directory on your host." - echo "We will not be configuring optional metroae script tab completion." - echo "You may continue to use the metroae script without tab completion." - echo "" - else - debug "${FUNCNAME[0]}: Copying tab-completion to standard dir: $METROAE_MOUNT_POINT/$TAB_COMPLETION_SCRIPT to $COMPLETION_DIR" - sudo cp $METROAE_MOUNT_POINT/$TAB_COMPLETION_SCRIPT $COMPLETION_DIR - if [[ $? -ne 0 ]]; then - write_to_screen_and_script_log "We encountered an error copying files to the tab-completion directory on your host." - echo "We will not be configuring optional metroae script tab completion." - echo "You may continue to use the metroae script without tab completion." - echo "" - else - write_to_screen_and_script_log "Successfully configured optional metroae script tab completion." - echo "" - fi - fi - fi - - set -e -} - -function run_container_if_not_running { - debug ${FUNCNAME[0]} - get_running_container_id - - if [[ -z $RUNNING_CONTAINER_ID ]]; then - run_container - get_running_container_id - fi -} - -function docker_exec { - debug ${FUNCNAME[0]} - run_container_if_not_running - environment="" - working_dir="" - - if [[ $1 == "env" ]]; then - shift - environment='-e ANSIBLE_FORCE_COLOR=true -e UUID=$UUID' - for env in `env` - do - filtered=0 - for filter in ${ENVIRONMENT_FILTERS[@]} - do - if [[ "$env" =~ ^$filter ]] - then - filtered=1 - fi - done - - if [[ $filtered -eq 0 ]]; then - environment="$environment -e $env" - fi - done - - if [[ ! -z $USER_DATA_PATH ]]; then - environment="$environment -e USER_DATA_PATH=$USER_DATA_PATH" - fi - fi - - if [[ $1 == "workdir" ]]; then - working_dir="-w $2" - shift - shift - fi - debug "${FUNCNAME[0]}: Env = $environment" - debug "${FUNCNAME[0]}: Pwd = $working_dir" - debug "${FUNCNAME[0]}: Container = $RUNNING_CONTAINER_ID" - if [[ ! -z $METROAE_DEBUG ]]; then - echo "$@" - fi - docker exec $environment $working_dir $RUNNING_CONTAINER_ID "$@" -} - -function docker_exec_interactive { - debug ${FUNCNAME[0]} - environment="" - if [[ ! -z $METROAE_PASSWORD ]]; then - environment=" -e METROAE_PASSWORD=$METROAE_PASSWORD" - fi - if [[ ! -z $SSHPASS_PASSWORD ]]; then - sshpass="sshpass -p$SSHPASS_PASSWORD" - fi - - echo "" - write_to_screen_and_script_log ">>> Starting interactive session with the container" - echo "" - - if [[ ! 
-z $NOT_INTERACTIVE ]]; then - environment="$environment -e NON_INTERACTIVE=true" - docker exec $environment $RUNNING_CONTAINER_ID $sshpass "$@" - else - docker exec -it $environment $RUNNING_CONTAINER_ID $sshpass "$@" - fi - - echo "" - write_to_screen_and_script_log ">>> Exiting interactive session with the container" - echo "" -} - -function interactive { - debug ${FUNCNAME[0]} - run_container_if_not_running - docker_exec_interactive /bin/bash -} - -function check_for_user_group { - debug ${FUNCNAME[0]} - uid=`id -u` - debug "${FUNCNAME[0]}: uid=$uid" - get_host_operating_system - if [[ $uid -eq 0 ]] || [[ $OS_RELEASE -eq $NON_LINUX ]]; then - return 0 - fi - - set +e - docker_group=`getent group docker` - debug "${FUNCNAME[0]}: docker_group=$docker_group" - set -e - if [[ -z $docker_group ]]; then - echo "" - write_to_screen_and_script_log ">>> Adding docker group" - sudo groupadd docker - echo "" - write_to_screen_and_script_log ">>> Restarting Docker" - sudo systemctl restart docker - if [[ -d $INSTALL_FOLDER ]]; then - sudo chown -R root:docker $INSTALL_FOLDER - sudo chmod -R 775 $INSTALL_FOLDER - fi - fi - - uid=`id -u` - debug "${FUNCNAME[0]}: uid=$uid" - get_host_operating_system - - set +e - current_user=`whoami` - debug "${FUNCNAME[0]}: current_user=$current_user" - docker_part_of_groups=`id -nG $current_user | grep docker` - debug "${FUNCNAME[0]}: docker_part_of_groups=$docker_part_of_groups" - set -e - if [[ -z "$docker_part_of_groups" ]]; then - echo "" - write_to_screen_and_script_log "It looks like the user `whoami` isn't a member of the 'docker' group. `whoami` must" - write_to_screen_and_script_log "be a member of the 'docker' group to proceed." - declare -l confirmation - if [[ -z $1 ]]; then - confirmation="init" - while [[ $confirmation != "n" ]] && [[ $confirmation != "y" ]] && [[ $confirmation != "" ]] - do - echo "" - read -p "Do you want the current user to be added to the docker group: (y/N) " confirmation - done - else - confirmation=$1 - fi - - if [[ $confirmation != "y" ]]; then - echo "" - write_to_screen_and_script_log "Please add the current user to the docker group and retry. Quitting" - print_version_and_exit 1 - fi - - echo "" - write_to_screen_and_script_log ">>> Adding `whoami` to the docker group" - sudo gpasswd -a `whoami` docker >> /dev/null - if [[ -z $NOT_INTERACTIVE ]]; then - echo "" - write_to_screen_and_script_log "We added user `whoami` to the docker group. For this one-time change to take effect," - write_to_screen_and_script_log "you must logout and back in before trying again." - exit 0 - fi - fi - -} - -function container_load { - debug ${FUNCNAME[0]} - - if [[ -z $1 ]]; then - write_to_screen_and_script_log "Please specify the tar file downloaded from Docker hub" - print_version_and_exit 1 - fi - - if ! [[ -f $1 ]]; then - write_to_screen_and_script_log "Tar file $1 does not exist" - print_version_and_exit 1 - fi - - docker load < $1 - - write_to_screen_and_script_log ">>> MetroAE container sucessfully loaded from tar file" - write_to_screen_and_script_log "You can run 'metroae container setup' to setup" -} - -function container_load_templates { - debug ${FUNCNAME[0]} - - if [[ -z $1 ]]; then - write_to_screen_and_script_log "Please specify the path to the tar file" - print_version_and_exit 1 - fi - - if ! [[ -f $1 ]]; then - write_to_screen_and_script_log "Tar file $1 does not exist" - print_version_and_exit 1 - fi - - write_to_screen_and_script_log "Extracting template and specifications tarballs..." - tar -xvf "$1" - if [[ $? 
-ne 0 ]]; then - write_to_screen_and_script_log "There was an issue extracting tarballs from $1" - exit 1 - fi - - templates_filename=$(basename $TEMPLATE_TAR_LOCATION) - specs_filename=$(basename $VSD_SPECIFICATIONS_LOCATION) - - template_path=$METROAE_MOUNT_POINT/$TEMPLATE_DIR - mkdir -p "$template_path" - write_to_screen_and_script_log "Extracting template files to $template_path..." - tar -xvf "$templates_filename" -C $template_path - if [[ $? -ne 0 ]]; then - write_to_screen_and_script_log "There was an issue extracting template files from $templates_filename" - exit 1 - fi - - specs_path=$METROAE_MOUNT_POINT/$SPECIFICATION_DIR - mkdir -p "$specs_path" - write_to_screen_and_script_log "Extracting specification files to $specs_path..." - tar -xvf "$specs_filename" -C $specs_path - if [[ $? -ne 0 ]]; then - write_to_screen_and_script_log "There was an issue extracting VSD API Specifications from $specs_filename" - exit 1 - fi - - write_to_screen_and_script_log "Templates and specifications have been extracted" - write_to_screen_and_script_log "They can be found in $template_path and $specs_path" - - rm -f $templates_filename - rm -f $specs_filename - - if [[ -e $templates_filename ]] || [[ -e $specs_filename ]]; then - write_to_screen_and_script_log "Could not remove one or both of the intermediate tar files" - write_to_screen_and_script_log "They can be found here: $templates_filename and $specs_filename" - exit 1 - fi - - exit 0 - -} - -################################################################################# -# CONTAINER DOWNLOAD # -################################################################################# - -# The following code was adapted from the Moby project (under Apache 2 license) -# https://github.com/moby/moby/ -# Specifically from the Docker hub download script -# https://raw.githubusercontent.com/moby/moby/master/contrib/download-frozen-image-v2.sh - -function download_container { - debug ${FUNCNAME[0]} - - if [[ -z $1 ]]; then - imageTag=current - else - imageTag=$1 - fi - - set +e - docker --version >> /dev/null 2>> /dev/null - exitcode=$? - set -e - - if [[ $exitcode -eq 0 ]]; then - download_using_docker "$METRO_AE_IMAGE:$imageTag" - else - download_from_docker_hub "$METRO_AE_IMAGE:$imageTag" - fi -} - -function download_container_templates { - debug ${FUNCNAME[0]} - if ! command -v curl &> /dev/null; then - write_to_screen_and_script_log "The command 'curl' was not found in PATH" - write_to_screen_and_script_log "This is a small utility for downloading files from the Internet" - write_to_screen_and_script_log "It is needed for downloading templates from Amazon S3" - write_to_screen_and_script_log "Please install this utility with:" - write_to_screen_and_script_log "" - write_to_screen_and_script_log "sudo yum install curl" - write_to_screen_and_script_log "" - exit 1 - fi - { - curl -O $TEMPLATE_TAR_LOCATION &> /dev/null && - curl -O $VSD_SPECIFICATIONS_LOCATION &> /dev/null - } || { - echo "Warning: Internet connection not detected from the container." - echo "MetroAE Config Templates and VSD Specification files were not downloaded." - echo "Please exit the container first and download the tar files from the provided URLs below." 
- echo "Download the MetroAE Config Template files from the following URL:" - echo "$TEMPLATE_TAR_LOCATION" - echo "" - echo "Download the VSD Specification files from the following URL:" - echo "$VSD_SPECIFICATIONS_LOCATION" - echo "Upon successful download of the two tar files" - echo "untar the files to: $METROAE_MOUNT_POINT" - echo "" - exit 0 - } - template_tar_file=$(basename $TEMPLATE_TAR_LOCATION) - specs_tar_file=$(basename $VSD_SPECIFICATIONS_LOCATION) - template_and_specs_tar_file="metroae_templates_specs.tar" - - set +e - tar -rf $template_and_specs_tar_file $template_tar_file $specs_tar_file - tar_exit_code=$? - set -e - - if [[ $tar_exit_code -eq 0 ]]; then - write_to_screen_and_script_log "The latest MetroAE templates and VSD API Specifications have been downloaded" - write_to_screen_and_script_log "They can be found in $template_and_specs_tar_file" - else - write_to_screen_and_script_log "There was an issue when archiving the tarballs, please try again." - exit 1 - fi - - rm -f $template_tar_file - rm -f $specs_tar_file - - if [[ -e $template_tar_file ]] || [[ -e $specs_tar_file ]]; then - write_to_screen_and_script_log "Could not remove one or both of the intermediate tar files" - write_to_screen_and_script_log "They can be found here: $template_tar_file and $specs_tar_file" - exit 1 - fi - - exit 0 -} - -function download_using_docker { - debug ${FUNCNAME[0]} - - write_to_screen_and_script_log ">>> Downloading $1 using docker" - - set +e - docker save $1 > $CONTAINER_TAR_FILE - exitcode=$? - set -e - - if [[ $exitcode -eq 0 ]]; then - write_to_screen_and_script_log "Download of images into '$CONTAINER_TAR_FILE' complete." - echo "Copy this file to MetroAE Docker host and issue:" - echo " metroae container load $CONTAINER_TAR_FILE" - else - write_to_screen_and_script_log "Error using docker to download, trying directly from Docker hub..." - download_from_docker_hub $1 - fi -} - -function download_from_docker_hub { - debug ${FUNCNAME[0]} - - write_to_screen_and_script_log ">>> Downloading $1 from Docker hub" - - check_download_requirements - - mkdir -p "$DOCKER_DOWNLOAD_WORK_DIR" - - # hacky workarounds for Bash 3 support (no associative arrays) - images=() - rm -f "$DOCKER_DOWNLOAD_WORK_DIR"/tags-*.tmp - manifestJsonEntries=() - doNotGenerateManifestJson= - - # bash v4 on Windows CI requires CRLF separator... and linux doesn't seem to care either way - newlineIFS=$'\n' - major=$(echo "${BASH_VERSION%%[^0.9]}" | cut -d. 
-f1) - if [ "$major" -ge 4 ]; then - newlineIFS=$'\r\n' - fi - - imageTag="$1" - - image="${imageTag%%[:@]*}" - imageTag="${imageTag#*:}" - digest="${imageTag##*@}" - tag="${imageTag%%@*}" - - # add prefix library if passed official image - if [[ "$image" != *"/"* ]]; then - image="library/$image" - fi - - imageFile="${image//\//_}" # "/" can't be in filenames :) - - token="$(curl -fsSL "$DOCKER_AUTH_BASE/token?service=$DOCKER_AUTH_SERVICE&scope=repository:$image:pull" | jq --raw-output '.token')" - - manifestJson="$( - curl -fsSL \ - -H "Authorization: Bearer $token" \ - -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \ - -H 'Accept: application/vnd.docker.distribution.manifest.list.v2+json' \ - -H 'Accept: application/vnd.docker.distribution.manifest.v1+json' \ - "$DOCKER_REGISTRY_BASE/v2/$image/manifests/$digest" - )" - if [ "${manifestJson:0:1}" != '{' ]; then - write_to_screen_and_script_log "error: /v2/$image/manifests/$digest returned something unexpected:" - write_to_screen_and_script_log " $manifestJson" - exit 1 - fi - - imageIdentifier="$image:$tag@$digest" - - schemaVersion="$(echo "$manifestJson" | jq --raw-output '.schemaVersion')" - case "$schemaVersion" in - 2) - mediaType="$(echo "$manifestJson" | jq --raw-output '.mediaType')" - - case "$mediaType" in - application/vnd.docker.distribution.manifest.v2+json) - handle_single_manifest_v2 "$manifestJson" - ;; - application/vnd.docker.distribution.manifest.list.v2+json) - layersFs="$(echo "$manifestJson" | jq --raw-output --compact-output '.manifests[]')" - IFS="$newlineIFS" - while IFS= read -r line; do - layers+=("$line") - done <<< "$layersFs" - unset IFS - - found="" - targetArch="$(get_target_arch)" - # parse first level multi-arch manifest - for i in "${!layers[@]}"; do - layerMeta="${layers[$i]}" - maniArch="$(echo "$layerMeta" | jq --raw-output '.platform.architecture')" - if [ "$maniArch" = "${targetArch}" ]; then - digest="$(echo "$layerMeta" | jq --raw-output '.digest')" - # get second level single manifest - submanifestJson="$( - curl -fsSL \ - -H "Authorization: Bearer $token" \ - -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \ - -H 'Accept: application/vnd.docker.distribution.manifest.list.v2+json' \ - -H 'Accept: application/vnd.docker.distribution.manifest.v1+json' \ - "$DOCKER_REGISTRY_BASE/v2/$image/manifests/$digest" - )" - handle_single_manifest_v2 "$submanifestJson" - found="found" - break - fi - done - if [ -z "$found" ]; then - write_to_screen_and_script_log "error: manifest for $maniArch is not found" - exit 1 - fi - ;; - *) - write_to_screen_and_script_log "error: unknown manifest mediaType ($imageIdentifier): '$mediaType'" - exit 1 - ;; - esac - ;; - - 1) - if [ -z "$doNotGenerateManifestJson" ]; then - write_to_screen_and_script_log "warning: '$imageIdentifier' uses schemaVersion '$schemaVersion'" - write_to_screen_and_script_log " this script cannot (currently) recreate the 'image config' to put in a 'manifest.json' (thus any schemaVersion 2+ images will be imported in the old way, and their 'docker history' will suffer)" - write_to_screen_and_script_log - doNotGenerateManifestJson=1 - fi - - layersFs="$(echo "$manifestJson" | jq --raw-output '.fsLayers | .[] | .blobSum')" - IFS="$newlineIFS" - while IFS= read -r line; do - layers+=("$line") - done <<< "$layersFs" - unset IFS - - history="$(echo "$manifestJson" | jq '.history | [.[] | .v1Compatibility]')" - imageId="$(echo "$history" | jq --raw-output '.[0]' | jq --raw-output '.id')" - - echo "Downloading 
'$imageIdentifier' (${#layers[@]} layers)..." - for i in "${!layers[@]}"; do - imageJson="$(echo "$history" | jq --raw-output ".[${i}]")" - layerId="$(echo "$imageJson" | jq --raw-output '.id')" - imageLayer="${layers[$i]}" - - mkdir -p "$DOCKER_DOWNLOAD_WORK_DIR/$layerId" - echo '1.0' > "$DOCKER_DOWNLOAD_WORK_DIR/$layerId/VERSION" - - echo "$imageJson" > "$DOCKER_DOWNLOAD_WORK_DIR/$layerId/json" - - # TODO figure out why "-C -" doesn't work here - # "curl: (33) HTTP server doesn't seem to support byte ranges. Cannot resume." - # "HTTP/1.1 416 Requested Range Not Satisfiable" - if [ -f "$DOCKER_DOWNLOAD_WORK_DIR/$layerId/layer.tar" ]; then - # TODO hackpatch for no -C support :'( - echo "skipping existing ${layerId:0:12}" - continue - fi - token="$(curl -fsSL "$DOCKER_AUTH_BASE/token?service=$DOCKER_AUTH_SERVICE&scope=repository:$image:pull" | jq --raw-output '.token')" - fetch_blob "$token" "$image" "$imageLayer" "$DOCKER_DOWNLOAD_WORK_DIR/$layerId/layer.tar" --progress-bar - done - ;; - - *) - write_to_screen_and_script_log "error: unknown manifest schemaVersion ($imageIdentifier): '$schemaVersion'" - exit 1 - ;; - esac - - echo - - if [ -s "$DOCKER_DOWNLOAD_WORK_DIR/tags-$imageFile.tmp" ]; then - echo -n ', ' >> "$DOCKER_DOWNLOAD_WORK_DIR/tags-$imageFile.tmp" - else - images=("${images[@]}" "$image") - fi - echo -n '"'"$tag"'": "'"$imageId"'"' >> "$DOCKER_DOWNLOAD_WORK_DIR/tags-$imageFile.tmp" - - echo -n '{' > "$DOCKER_DOWNLOAD_WORK_DIR/repositories" - firstImage=1 - for image in "${images[@]}"; do - imageFile="${image//\//_}" # "/" can't be in filenames :) - image="${image#library\/}" - - [ "$firstImage" ] || echo -n ',' >> "$DOCKER_DOWNLOAD_WORK_DIR/repositories" - firstImage= - echo -n $'\n\t' >> "$DOCKER_DOWNLOAD_WORK_DIR/repositories" - echo -n '"'"$image"'": { '"$(cat "$DOCKER_DOWNLOAD_WORK_DIR/tags-$imageFile.tmp")"' }' >> "$DOCKER_DOWNLOAD_WORK_DIR/repositories" - done - echo -n $'\n}\n' >> "$DOCKER_DOWNLOAD_WORK_DIR/repositories" - - rm -f "$DOCKER_DOWNLOAD_WORK_DIR"/tags-*.tmp - - if [ -z "$doNotGenerateManifestJson" ] && [ "${#manifestJsonEntries[@]}" -gt 0 ]; then - echo '[]' | jq --raw-output ".$(for entry in "${manifestJsonEntries[@]}"; do echo " + [ $entry ]"; done)" > "$DOCKER_DOWNLOAD_WORK_DIR/manifest.json" - else - rm -f "$DOCKER_DOWNLOAD_WORK_DIR/manifest.json" - fi - - - tar -cC $DOCKER_DOWNLOAD_WORK_DIR . > $CONTAINER_TAR_FILE - rm -rf $DOCKER_DOWNLOAD_WORK_DIR - write_to_screen_and_script_log "Download of images into '$CONTAINER_TAR_FILE' complete." - echo "Copy this file to MetroAE Docker host and issue:" - echo " metroae container load $CONTAINER_TAR_FILE" - -} - -function check_download_requirements { - debug ${FUNCNAME[0]} - - # check if essential commands are in our PATH - all_commands_present=1 - - if ! command -v jq &> /dev/null; then - write_to_screen_and_script_log "The command 'jq' was not found in PATH" - write_to_screen_and_script_log "This is a small utility for parsing JSON output" - write_to_screen_and_script_log "It is needed for parsing Docker hub manifest output" - write_to_screen_and_script_log "Please install this utility with:" - write_to_screen_and_script_log "" - write_to_screen_and_script_log "sudo yum install jq" - write_to_screen_and_script_log "" - all_commands_present=0 - fi - - if ! 
command -v curl &> /dev/null; then - write_to_screen_and_script_log "The command 'curl' was not found in PATH" - write_to_screen_and_script_log "This is a small utility for downloading files from the Internet" - write_to_screen_and_script_log "It is needed for downloading files from Docker hub" - write_to_screen_and_script_log "Please install this utility with:" - write_to_screen_and_script_log "" - write_to_screen_and_script_log "sudo yum install curl" - write_to_screen_and_script_log "" - all_commands_present=0 - fi - - if ! command -v sha256sum &> /dev/null; then - write_to_screen_and_script_log "The command 'sha256sum' was not found in PATH" - write_to_screen_and_script_log "This is a small utility for calculating SHA256 checksums" - write_to_screen_and_script_log "It is needed for validating files downloaded from Docker hub" - write_to_screen_and_script_log "Please install this utility with:" - write_to_screen_and_script_log "" - write_to_screen_and_script_log "sudo yum install coreutils" - write_to_screen_and_script_log "" - all_commands_present=0 - fi - - if [[ $all_commands_present -ne 1 ]]; then - print_version_and_exit 1 - fi -} - -fetch_blob() { - debug ${FUNCNAME[0]} - - local token="$1" - shift - local image="$1" - shift - local digest="$1" - shift - local targetFile="$1" - shift - local curlArgs=("$@") - - local curlHeaders - curlHeaders="$( - curl -S "${curlArgs[@]}" \ - -H "Authorization: Bearer $token" \ - "$DOCKER_REGISTRY_BASE/v2/$image/blobs/$digest" \ - -o "$targetFile" \ - -D- - )" - curlHeaders="$(echo "$curlHeaders" | tr -d '\r')" - if grep -qE "^HTTP/[0-9].[0-9] 3" <<< "$curlHeaders"; then - rm -f "$targetFile" - - local blobRedirect - blobRedirect="$(echo "$curlHeaders" | awk -F ': ' 'tolower($1) == "location" { print $2; exit }')" - if [ -z "$blobRedirect" ]; then - write_to_screen_and_script_log "error: failed fetching '$image' blob '$digest'" - echo "$curlHeaders" | head -1 >&2 - return 1 - fi - - curl -fSL "${curlArgs[@]}" \ - "$blobRedirect" \ - -o "$targetFile" - fi -} - -# handle 'application/vnd.docker.distribution.manifest.v2+json' manifest -handle_single_manifest_v2() { - debug ${FUNCNAME[0]} - - local manifestJson="$1" - shift - - local configDigest - configDigest="$(echo "$manifestJson" | jq --raw-output '.config.digest')" - local imageId="${configDigest#*:}" # strip off "sha256:" - - local configFile="$imageId.json" - fetch_blob "$token" "$image" "$configDigest" "$DOCKER_DOWNLOAD_WORK_DIR/$configFile" -s - - local layersFs - layersFs="$(echo "$manifestJson" | jq --raw-output --compact-output '.layers[]')" - local IFS="$newlineIFS" - local layers - while IFS= read -r line; do - layers+=("$line") - done <<< "$layersFs" - unset IFS - - echo "Downloading '$imageIdentifier' (${#layers[@]} layers)..." 
- local layerId= - local layerFiles=() - for i in "${!layers[@]}"; do - local layerMeta="${layers[$i]}" - - if [[ -z $layerMeta ]]; then - continue - fi - - local layerMediaType - layerMediaType="$(echo "$layerMeta" | jq --raw-output '.mediaType')" - local layerDigest - layerDigest="$(echo "$layerMeta" | jq --raw-output '.digest')" - - # save the previous layer's ID - local parentId="$layerId" - # create a new fake layer ID based on this layer's digest and the previous layer's fake ID - layerId="$(echo "$parentId"$'\n'"$layerDigest" | sha256sum | cut -d' ' -f1)" - # this accounts for the possibility that an image contains the same layer twice (and thus has a duplicate digest value) - - mkdir -p "$DOCKER_DOWNLOAD_WORK_DIR/$layerId" - echo '1.0' > "$DOCKER_DOWNLOAD_WORK_DIR/$layerId/VERSION" - - if [ ! -s "$DOCKER_DOWNLOAD_WORK_DIR/$layerId/json" ]; then - local parentJson - parentJson="$(printf ', parent: "%s"' "$parentId")" - local addJson - addJson="$(printf '{ id: "%s"%s }' "$layerId" "${parentId:+$parentJson}")" - # this starter JSON is taken directly from Docker's own "docker save" output for unimportant layers - jq "$addJson + ." > "$DOCKER_DOWNLOAD_WORK_DIR/$layerId/json" << EOJSON - { - "created": "0001-01-01T00:00:00Z", - "container_config": { - "Hostname": "", - "Domainname": "", - "User": "", - "AttachStdin": false, - "AttachStdout": false, - "AttachStderr": false, - "Tty": false, - "OpenStdin": false, - "StdinOnce": false, - "Env": null, - "Cmd": null, - "Image": "", - "Volumes": null, - "WorkingDir": "", - "Entrypoint": null, - "OnBuild": null, - "Labels": null - } - } -EOJSON -# ^^ Do not indent or put anything else on previous line (multiline block string) - fi - - case "$layerMediaType" in - application/vnd.docker.image.rootfs.diff.tar.gzip) - local layerTar="$layerId/layer.tar" - layerFiles=("${layerFiles[@]}" "$layerTar") - # TODO figure out why "-C -" doesn't work here - # "curl: (33) HTTP server doesn't seem to support byte ranges. Cannot resume." - # "HTTP/1.1 416 Requested Range Not Satisfiable" - if [ -f "$DOCKER_DOWNLOAD_WORK_DIR/$layerTar" ]; then - # TODO hackpatch for no -C support :'( - echo "skipping existing ${layerId:0:12}" - continue - fi - local token - token="$(curl -fsSL "$DOCKER_AUTH_BASE/token?service=$DOCKER_AUTH_SERVICE&scope=repository:$image:pull" | jq --raw-output '.token')" - fetch_blob "$token" "$image" "$layerDigest" "$DOCKER_DOWNLOAD_WORK_DIR/$layerTar" --progress-bar - ;; - - *) - write_to_screen_and_script_log "error: unknown layer mediaType ($imageIdentifier, $layerDigest): '$layerMediaType'" - exit 1 - ;; - esac - done - - # change "$imageId" to be the ID of the last layer we added (needed for old-style "repositories" file which is created later -- specifically for older Docker daemons) - imageId="$layerId" - - # munge the top layer image manifest to have the appropriate image configuration for older daemons - local imageOldConfig - imageOldConfig="$(jq --raw-output --compact-output '{ id: .id } + if .parent then { parent: .parent } else {} end' "$DOCKER_DOWNLOAD_WORK_DIR/$imageId/json")" - jq --raw-output "$imageOldConfig + del(.history, .rootfs)" "$DOCKER_DOWNLOAD_WORK_DIR/$configFile" > "$DOCKER_DOWNLOAD_WORK_DIR/$imageId/json" - - local manifestJsonEntry - manifestJsonEntry="$( - echo '{}' | jq --raw-output '. 
+ { - Config: "'"$configFile"'", - RepoTags: ["'"${image#library\/}:$tag"'"], - Layers: '"$(echo '[]' | jq --raw-output ".$(for layerFile in "${layerFiles[@]}"; do echo " + [ \"$layerFile\" ]"; done)")"' - }' - )" - manifestJsonEntries=("${manifestJsonEntries[@]}" "$manifestJsonEntry") -} - -get_target_arch() { - debug ${FUNCNAME[0]} - - if [ -n "${TARGETARCH:-}" ]; then - echo "${TARGETARCH}" - return 0 - fi - - if type go > /dev/null; then - go env GOARCH - return 0 - fi - - if type dpkg > /dev/null; then - debArch="$(dpkg --print-architecture)" - case "${debArch}" in - armel | armhf) - echo "arm" - return 0 - ;; - *64el) - echo "${debArch%el}le" - return 0 - ;; - *) - echo "${debArch}" - return 0 - ;; - esac - fi - - if type uname > /dev/null; then - uArch="$(uname -m)" - case "${uArch}" in - x86_64) - echo amd64 - return 0 - ;; - arm | armv[0-9]*) - echo arm - return 0 - ;; - aarch64) - echo arm64 - return 0 - ;; - mips*) - write_to_screen_and_script_log "I see you are running on mips but I don't know how to determine endianness yet, so I cannot select a correct arch to fetch." - write_to_screen_and_script_log "Consider installing \"go\" on the system which I can use to determine the correct arch or specify it explictly by setting TARGETARCH" - exit 1 - ;; - *) - echo "${uArch}" - return 0 - ;; - esac - - fi - - # default value - write_to_screen_and_script_log "Unable to determine CPU arch, falling back to amd64. You can specify a target arch by setting TARGETARCH" - echo amd64 -} - - -################################################################################# -# DEPLOYMENT # -################################################################################# - -function list_workflows { - debug ${FUNCNAME[0]} - for file in $PLAYBOOK_DIR/*.yml - do - if [[ -f $file ]]; then - filename=$(basename "$file") - filename="${filename%.*}" - echo $filename - fi - done - for file in $PLAYBOOK_WITH_BUILD_DIR/*.yml - do - if [[ -f $file ]]; then - filename=$(basename "$file") - filename="${filename%.*}" - echo $filename - fi - done -} - -function check_password_needed { - debug ${FUNCNAME[0]} - deployment_dir="$1" - - encrypted_files=$(grep -Ril $ENCRYPTED_TOKEN $deployment_dir) - - if [[ -z $METROAE_PASSWORD ]]; then - if [[ -z "$encrypted_files" ]]; then - SKIP_PASSWORD=1 - else - SKIP_PASSWORD=0 - fi - else - SKIP_PASSWORD=1 - fi -} - -function ask_password { - debug ${FUNCNAME[0]} - if [[ $SKIP_PASSWORD -ne 1 ]]; then - write_to_screen_and_script_log "The deployment contains encrypted content which requires a password to access." - echo "Enter the password, below, or add the environment variable METROAE_PASSWORD and retry." 
- echo "" - read -s -p "Enter password: " METROAE_PASSWORD - export METROAE_PASSWORD - fi -} - -function write_audit_log_entry { - debug ${FUNCNAME[0]} - echo "`date` MetroAE $METROAE_VERSION $ORIGINAL_ARGS" >> $AUDIT_LOG -} - -function audit_log_and_exit { - debug ${FUNCNAME[0]} - echo "`date` MetroAE $METROAE_VERSION exit code $1" >> $AUDIT_LOG - print_version_and_exit $1 -} - -function deployment_main { - generate_and_save_UUID - - debug ${FUNCNAME[0]} - set +e - - ORIGINAL_ARGS="$*" - - # - # Parse arguments - # - SKIP_BUILD=0 - SKIP_PASSWORD=0 - POSITIONAL=() - while [[ $# -gt 0 ]] - do - key="$1" - - case $key in - -h|--help) - main_help - print_version_and_exit 0 - ;; - -v|--version) - help_header - print_version_and_exit 0 - ;; - --ansible-help) - $(which ansible-playbook) --help - print_version_and_exit 0 - ;; - --list) - list_workflows - print_version_and_exit 0 - ;; - --set-group) - GROUP="$2" - touch ansible.log - chgrp $GROUP ansible.log - touch $AUDIT_LOG - chgrp $GROUP $AUDIT_LOG - shift # past argument - shift # past value - ;; - --skip-build) - SKIP_BUILD=1 - shift # past argument - ;; - --skip-password) - SKIP_PASSWORD=1 - shift # past argument - ;; - *) # unknown option - POSITIONAL+=("$1") # save it in an array for later - shift # past argument - ;; - esac - done - set -- "${POSITIONAL[@]}" # restore positional parameters - - # Missing workflow, show usage - if [[ $# -eq 0 ]] || [[ $1 == -* ]]; then - main_help - print_version_and_exit 1 - fi - - # argument - WORKFLOW=$1 - debug "${FUNCNAME[0]}: WORKFLOW = @WORKFLOW" - - # Add .yml extension if needed - EXTENSION="${WORKFLOW##*.}" - if [[ "$EXTENSION" != "yml" ]]; then - WORKFLOW=${WORKFLOW}.yml - fi - - if [[ ! -a $PLAYBOOK_DIR/$WORKFLOW ]] && [[ ! -a $PLAYBOOK_WITH_BUILD_DIR/$WORKFLOW ]]; then - write_to_screen_and_script_log "Requested MetroAE workflow ($1) could not be found" - print_version_and_exit 1 - fi - shift - - # [deployment] argument - if [[ $# -gt 0 ]]; then - if [[ $1 != -* ]]; then - DEPLOYMENT="$1" - shift - if [[ $DEPLOYMENT == *.csv ]]; then - filename=$(basename -- "$DEPLOYMENT") - deployment_name="${filename%.*}" - DEPLOYMENT_DIR="$DEPLOYMENTS_BASE_DIR/$deployment_name" - rm -f $DEPLOYMENT_DIR/*.yml - ./convert_csv_to_deployment.py $DEPLOYMENT $deployment_name - elif [[ $DEPLOYMENT == *.xlsx ]]; then - filename=$(basename -- "$DEPLOYMENT") - deployment_name="${filename%.*}" - DEPLOYMENT_DIR="$DEPLOYMENTS_BASE_DIR/$deployment_name" - rm -f $DEPLOYMENT_DIR/*.yml - ./convert_excel_to_deployment.py $DEPLOYMENT $deployment_name - elif [[ -d $DEPLOYMENT ]]; then - DEPLOYMENT_DIR=$DEPLOYMENT - elif [[ -d $DEPLOYMENTS_BASE_DIR/$DEPLOYMENT ]]; then - DEPLOYMENT_DIR=$DEPLOYMENTS_BASE_DIR/$DEPLOYMENT - else - write_to_screen_and_script_log "Could not find deployment '$DEPLOYMENT' under $DEPLOYMENTS_BASE_DIR" - print_version_and_exit 1 - fi - fi - fi - - # Get password if needed - check_password_needed "$DEPLOYMENT_DIR" - ask_password - if [[ ! -z $METROAE_PASSWORD ]]; then - export ANSIBLE_VAULT_PASSWORD_FILE=$VAULT_ENV_FILE - fi - - # Run playbooks - if [[ -a $PLAYBOOK_DIR/$WORKFLOW ]]; then - write_audit_log_entry - $(which ansible-playbook) -e deployment_dir=\'"$DEPLOYMENT_DIR"\' -e schema_dir=$SCHEMA_DIR $PLAYBOOK_DIR/$WORKFLOW "$@" || audit_log_and_exit $? 
- elif [[ -a $PLAYBOOK_WITH_BUILD_DIR/$WORKFLOW ]]; then - write_audit_log_entry - phone_home_args=("\"UUID\":\"${UUID}\"" "\"Actions\":\"${WORKFLOW}\"" "\"Mode\":\"${RUN_MODE}\"" "\"timestamp\":\"$(date)\"" "\"Version\":\"${METROAE_VERSION}\"") - phone_home "${phone_home_args[@]}" - if [[ $SKIP_BUILD -ne 1 ]]; then - $(which ansible-playbook) -e deployment_dir=\'"$DEPLOYMENT_DIR"\' -e schema_dir=$SCHEMA_DIR $PLAYBOOK_DIR/build.yml "$@" || audit_log_and_exit $? - if [[ $GROUP ]]; then chgrp -R $GROUP $INVENTORY_DIR; fi - fi - $(which ansible-playbook) $PLAYBOOK_WITH_BUILD_DIR/$WORKFLOW "$@" || audit_log_and_exit $? - else - write_to_screen_and_script_log "Requested MetroAE workflow ($WORKFLOW) could not be found" - print_version_and_exit 1 - fi - - audit_log_and_exit 0 - - set -e - -} - -################################################################################# -# Config # -################################################################################# - -function docker_exec_config { - debug ${FUNCNAME[0]} - docker_exec env /usr/bin/python $CONTAINER_HOME/nuage-metroae-config/metroae_config.py "$@" -} - -function docker_exec_generate_and_copy_keys { - debug ${FUNCNAME[0]} - docker_exec env $CONTAINER_HOME/generateAndCopySshKeys.sh -} - -function docker_exec_copy_common_defaults { - debug ${FUNCNAME[0]} - docker_exec env $CONTAINER_HOME/copyCommonDefaults.sh -} - -function docker_exec_copy_deploy_defaults { - debug ${FUNCNAME[0]} - docker_exec env $CONTAINER_HOME/copyDeployDefaults.sh -} - -function docker_exec_copy_config_defaults { - debug ${FUNCNAME[0]} - docker_exec env $CONTAINER_HOME/copyConfigDefaults.sh -} - -function config_status { - debug ${FUNCNAME[0]} - container_status -} - -function config_main { - debug ${FUNCNAME[0]} - - check_docker - - check_for_user_group - - if [[ $# -eq 0 ]]; then - config_help ",config" - exit 0 - fi - - shopt -s extglob - POSITIONAL=() - exec=false - usage_last=false - no_arguments=$# - while [[ $# -gt 0 ]] - do - key=$1 - case "$key" in - help|--h|-h|--help|-help) - - get_running_container_id - if [[ -z $RUNNING_CONTAINER_ID ]]; then - config_help ",config" - else - if [[ $no_arguments -lt 2 ]]; then - usage_last=true - fi - fi - POSITIONAL+=("$1") - exec=true - shift - ;; - pull) - if [[ -z $2 ]]; then - pull - else - pull $2 - shift - fi - shift - ;; - setup) - if [[ -z $2 ]]; then - setup - else - setup $2 - shift - fi - shift - ;; - start) - run_container - shift - ;; - stop) - stop_running_container - shift - ;; - destroy) - if [[ -z $2 ]]; then - destroy - else - destroy $2 - shift - fi - shift - ;; - upgrade-engine) - if [[ -z $2 ]]; then - update_container - else - shift - update_container $* - fi - exit 0 - ;; - interactive) - interactive - shift - ;; - status) - config_status - shift - ;; - *) - POSITIONAL+=("$1") - exec=true - shift - ;; - esac - done - - generate_and_save_UUID - - if [[ $exec == true ]]; then - phone_home_args=("\"UUID\":\"${UUID}\"" "\"Actions\":\"${POSITIONAL[1]}\"" "\"Mode\":\"${RUN_MODE}\"" "\"timestamp\":\"$(date)\"" "\"Version\":\"${METROAE_VERSION}\"") - phone_home "${phone_home_args[@]}" - docker_exec_config "${POSITIONAL[@]}" - fi - - if [[ $usage_last == true ]]; then - config_help ",config" - fi - -} - -################################################################################# -# Container # -################################################################################# - -function copy_ssh_id { - debug ${FUNCNAME[0]} - if [[ -z $1 ]]; then - echo "Usage: metroae tools ssh copyid " - 
print_version_and_exit 1 - fi - run_container_if_not_running - - sshpass=" " - if [[ ! -z $SSHPASS_PASSWORD ]]; then - sshpass="sshpass -p$SSHPASS_PASSWORD" - fi - if [[ ! -z $NOT_INTERACTIVE ]]; then - docker exec $RUNNING_CONTAINER_ID $sshpass ssh-copy-id -i $CONTAINER_HOME/id_rsa.pub -o StrictHostKeyChecking=no $1 - else - docker exec -it $RUNNING_CONTAINER_ID $sshpass ssh-copy-id -i $CONTAINER_HOME/id_rsa.pub -o StrictHostKeyChecking=no $1 - fi -} - -function docker_metro_ae_exec { - debug ${FUNCNAME[0]} - docker_exec env $CONTAINER_HOME/nuage-metroae/metroae "$@" -} - -function unzip_files { - debug ${FUNCNAME[0]} - docker_exec $CONTAINER_HOME/nuage-metroae/nuage-unzip.sh "$@" -} - -function gen_example_from_schema { - debug ${FUNCNAME[0]} - docker_exec /usr/bin/python $CONTAINER_HOME/nuage-metroae/generate_example_from_schema.py "$@" -} - -function container_status { - debug ${FUNCNAME[0]} - get_running_container_id - if [[ ! -z $RUNNING_CONTAINER_ID ]]; then - write_to_screen_and_script_log "MetroAE Container Status:" - echo "" - debug "${FUNCNAME[0]}: Getting the versions in the container itself" - docker exec $RUNNING_CONTAINER_ID cat $CONTAINER_HOME/version - else - echo "" - write_to_screen_and_script_log "The MetroAE container is not running." - echo "Please run 'metroae container start' to restart a stopped container." - echo "Please run 'metroae container setup' to setup a container. Quitting." - echo "" - exit 0 - fi - if [[ -f $SETUP_FILE ]]; then - debug "${FUNCNAME[0]}: Using container configuration from host" - if [[ -z $METROAE_SETUP_TYPE ]]; then - echo "" - write_to_screen_and_script_log "We couldn't find the MetroAE setup type in the configuration file on disk." - echo "Please run 'metroae container setup' and try again." - echo "" - print_version_and_exit 1 - elif [[ -z $METROAE_MOUNT_POINT ]]; then - echo "" - write_to_screen_and_script_log "We couldn't find the MetroAE data path in the configuration file on disk." - echo "Please run 'metroae container setup' and try again." - echo "" - print_version_and_exit 1 - fi - if [[ $METROAE_SETUP_TYPE == $CONFIG_SETUP_TYPE ]]; then - current_setup_type="Config" - elif [[ $METROAE_SETUP_TYPE == $DEPLOY_SETUP_TYPE ]]; then - current_setup_type="Deploy" - elif [[ $METROAE_SETUP_TYPE == $BOTH_SETUP_TYPE ]]; then - current_setup_type="Both Config and Deploy" - else - echo "" - write_to_screen_and_script_log "The MetroAE setup type in the configuration file on disk is not valid." - echo "Please run 'metroae container setup' and try again." - echo "" - fi - write_to_screen_and_script_log "MetroAE setup type: $current_setup_type" - write_to_screen_and_script_log "MetroAE container data directory: $METROAE_MOUNT_POINT" - write_to_screen_and_script_log "MetroAE run mode: $RUN_MODE" - else - echo "" - write_to_screen_and_script_log "MetroAE container setup file not found. Please run 'metroae container setup'" - echo "and try again. Quitting." - echo "" - print_version_and_exit 1 - fi - if [[ ! -z $RUNNING_CONTAINER_ID ]]; then - write_to_screen_and_script_log "Docker information:" + if [[ $SKIP_PASSWORD -ne 1 ]]; then + write_to_screen_and_script_log "The deployment contains encrypted content which requires a password to access." + echo "Enter the password, below, or add the environment variable METROAE_PASSWORD and retry." 
echo "" - debug "${FUNCNAME[0]}: Getting the output of 'docker ps'" - header=`docker ps -a | grep "IMAGE"` - write_to_screen_and_script_log "$header" - container_status=`docker ps -a | grep "$METRO_AE_IMAGE"` - write_to_screen_and_script_log "$container_status" + read -s -p "Enter password: " METROAE_PASSWORD + export METROAE_PASSWORD fi } -function vault_password { +function write_audit_log_entry { debug ${FUNCNAME[0]} - run_container_if_not_running - - docker_exec_interactive /usr/bin/python $CONTAINER_HOME/nuage-metroae/encrypt_credentials.py $1 + echo "`date` MetroAE $METROAE_VERSION $ORIGINAL_ARGS" >> $AUDIT_LOG } -function disable_encryption { +function audit_log_and_exit { debug ${FUNCNAME[0]} - run_container_if_not_running - docker exec $RUNNING_CONTAINER_ID $CONTAINER_HOME/UI.sh disable-encryption + echo "`date` MetroAE $METROAE_VERSION exit code $1" >> $AUDIT_LOG + print_version_and_exit $1 } -function container_main { +function deployment_main { + generate_and_save_UUID + debug ${FUNCNAME[0]} - if [[ ! -z $GROUP_CHECK ]]; then - debug "${FUNCNAME[0]} GROUP_CHECK is defined" - check_for_user_group "$@" - shift + set +e + + ORIGINAL_ARGS="$*" + + # + # Parse arguments + # + SKIP_BUILD=0 + SKIP_PASSWORD=0 + POSITIONAL=() + while [[ $# -gt 0 ]] + do + key="$1" + + case $key in + -h|--help) + main_help print_version_and_exit 0 - else - debug "${FUNCNAME[0]} GROUP_CHECK is not defined" - check_for_user_group "$@" + ;; + -v|--version) + help_header + print_version_and_exit 0 + ;; + --ansible-help) + $(which ansible-playbook) --help + print_version_and_exit 0 + ;; + --list) + list_workflows + print_version_and_exit 0 + ;; + --set-group) + GROUP="$2" + touch ansible.log + chgrp $GROUP ansible.log + touch $AUDIT_LOG + chgrp $GROUP $AUDIT_LOG + shift # past argument + shift # past value + ;; + --skip-build) + SKIP_BUILD=1 + shift # past argument + ;; + --skip-password) + SKIP_PASSWORD=1 + shift # past argument + ;; + destroy) + shift + if [[ -z $2 ]]; then + destroy + else + destroy $2 + shift + fi + shift + ;; + *) # unknown option + POSITIONAL+=("$1") # save it in an array for later + shift # past argument + ;; + esac + done + set -- "${POSITIONAL[@]}" # restore positional parameters + + # Missing workflow, show usage + if [[ $# -eq 0 ]] || [[ $1 == -* ]]; then + main_help + print_version_and_exit 1 fi - check_docker + # argument + WORKFLOW=$1 + debug "${FUNCNAME[0]}: WORKFLOW = @WORKFLOW" - if [[ $# -eq 0 ]]; then - container_help ",container" - exit 0 + # Add .yml extension if needed + EXTENSION="${WORKFLOW##*.}" + if [[ "$EXTENSION" != "yml" ]]; then + WORKFLOW=${WORKFLOW}.yml fi - shopt -s extglob + if [[ ! -a $PLAYBOOK_DIR/$WORKFLOW ]] && [[ ! 
-a $PLAYBOOK_WITH_BUILD_DIR/$WORKFLOW ]]; then + write_to_screen_and_script_log "Requested MetroAE workflow ($1) could not be found" + print_version_and_exit 1 + fi + shift - POSITIONAL=() - exec=false - debug "${FUNCNAME[0]}: Checking command options" - while [ $# -gt 0 ] - do - key=$1 - debug "POSITIONAL = ${POSITIONAL[*]}" - case $key in - help|--h|-h|--help|-help) - container_help ",container" - POSITIONAL+=("$1") - exec=true - shift - ;; - pull) - if [[ -z $2 ]]; then - pull - else - pull $2 - shift - fi - shift - ;; - start) - run_container - shift - ;; - setup) - num_params=$# - if [[ num_params -gt 1 ]]; then - shift - setup_container "$@" - for (( i=1; i<$num_params; i+=1 )); do - shift - done - else - shift - setup_container - fi - ;; - stop) - stop_running_container - shift - ;; - destroy) - if [[ -z $2 ]]; then - destroy - else - destroy $2 - shift - fi - shift - ;; - upgrade-engine) - if [[ -z $2 ]]; then - update_container - else - shift - update_container $* - fi - exit 0 - ;; - status) - container_status + # [deployment] argument + if [[ $# -gt 0 ]]; then + if [[ $1 != -* ]]; then + DEPLOYMENT="$1" shift - ;; - encrypt-credentials) - if [[ -z $2 ]]; then - vault_password + if [[ $DEPLOYMENT == *.csv ]]; then + filename=$(basename -- "$DEPLOYMENT") + deployment_name="${filename%.*}" + DEPLOYMENT_DIR="$DEPLOYMENTS_BASE_DIR/$deployment_name" + rm -f $DEPLOYMENT_DIR/*.yml + ./convert_csv_to_deployment.py $DEPLOYMENT $deployment_name + elif [[ $DEPLOYMENT == *.xlsx ]]; then + filename=$(basename -- "$DEPLOYMENT") + deployment_name="${filename%.*}" + DEPLOYMENT_DIR="$DEPLOYMENTS_BASE_DIR/$deployment_name" + rm -f $DEPLOYMENT_DIR/*.yml + ./convert_excel_to_deployment.py $DEPLOYMENT $deployment_name + elif [[ -d $DEPLOYMENT ]]; then + DEPLOYMENT_DIR=$DEPLOYMENT + elif [[ -d $DEPLOYMENTS_BASE_DIR/$DEPLOYMENT ]]; then + DEPLOYMENT_DIR=$DEPLOYMENTS_BASE_DIR/$DEPLOYMENT else - vault_password $2 - shift + write_to_screen_and_script_log "Could not find deployment '$DEPLOYMENT' under $DEPLOYMENTS_BASE_DIR" + print_version_and_exit 1 fi - shift - ;; - interactive) - interactive - shift - ;; - unzip-files) - shift - unzip_files "$@" - print_version_and_exit 0 - ;; - generate-example-from-schema) - shift - gen_example_from_schema "$@" - print_version_and_exit 0 - ;; - copy-ssh-id) - copy_ssh_id "$2" - shift - shift - ;; - load) - shift - container_load "$@" - print_version_and_exit 0 - ;; - load-templates) - shift - container_load_templates "$@" - print_version_and_exit 0 - ;; - *) - POSITIONAL+=("$1") - exec=true - shift - ;; - esac - done + fi + fi - if [[ $exec == true ]]; then - if [[ $METROAE_SETUP_TYPE == $CONFIG_SETUP_TYPE ]]; then - echo "" - write_to_screen_and_script_log "Unrecognized command for a config-only setup: ${POSITIONAL[@]}." - echo "Execute 'metroae container setup' to change your setup type." - echo "" - echo "" - main_help - else - docker_metro_ae_exec "${POSITIONAL[@]}" + # Get password if needed + check_password_needed "$DEPLOYMENT_DIR" + ask_password + if [[ ! -z $METROAE_PASSWORD ]]; then + export ANSIBLE_VAULT_PASSWORD_FILE=$VAULT_ENV_FILE + fi + + # Run playbooks + if [[ -a $PLAYBOOK_DIR/$WORKFLOW ]]; then + write_audit_log_entry + $(which ansible-playbook) -e deployment_dir=\'"$DEPLOYMENT_DIR"\' -e schema_dir=$SCHEMA_DIR $PLAYBOOK_DIR/$WORKFLOW "$@" || audit_log_and_exit $? 
+ elif [[ -a $PLAYBOOK_WITH_BUILD_DIR/$WORKFLOW ]]; then + write_audit_log_entry + phone_home_args=("\"UUID\":\"${UUID}\"" "\"Actions\":\"${WORKFLOW}\"" "\"Mode\":\"${RUN_MODE}\"" "\"timestamp\":\"$(date)\"" "\"Version\":\"${METROAE_VERSION}\"") + phone_home "${phone_home_args[@]}" + if [[ $SKIP_BUILD -ne 1 ]]; then + $(which ansible-playbook) -e deployment_dir=\'"$DEPLOYMENT_DIR"\' -e schema_dir=$SCHEMA_DIR $PLAYBOOK_DIR/build.yml "$@" || audit_log_and_exit $? + if [[ $GROUP ]]; then chgrp -R $GROUP $INVENTORY_DIR; fi fi + $(which ansible-playbook) $PLAYBOOK_WITH_BUILD_DIR/$WORKFLOW "$@" || audit_log_and_exit $? + else + write_to_screen_and_script_log "Requested MetroAE workflow ($WORKFLOW) could not be found" + print_version_and_exit 1 fi + + audit_log_and_exit 0 + + set -e + } ################################################################################# # Plugins # ################################################################################# -function copy_plugin_file_into_mount_point_if_necessary { - debug ${FUNCNAME[0]} - read_setup_files - local plugin_path=${PLUGIN_INSTALL_ARGS[0]} - mkdir -p $METROAE_MOUNT_POINT/metro_plugins - if [[ -d $plugin_path ]]; then - debug "${FUNCNAME[0]}: recursive copy from $plugin_path to $METROAE_MOUNT_POINT/metro_plugins/" - cp -r $plugin_path $METROAE_MOUNT_POINT/metro_plugins/ - elif [[ -f $plugin_path ]]; then - debug "${FUNCNAME[0]}: non-recursive copy from $plugin_path to $METROAE_MOUNT_POINT/metro_plugins/" - cp $plugin_path $METROAE_MOUNT_POINT/metro_plugins/ - fi - local container_plugin_path=/metroae_data/metro_plugins/$(basename "$plugin_path") - debug "${FUNCNAME[0]}: Modified plugin path is $container_plugin_path" - PLUGIN_INSTALL_ARGS[0]=$container_plugin_path -} - function version_compare { debug ${FUNCNAME[0]} if [[ $1 == $2 ]] @@ -3085,25 +1075,6 @@ function validate_plugin_dir { function plugin_main { debug ${FUNCNAME[0]} - if [[ $RUN_MODE == "CONTAINER" ]]; then - debug "${FUNCNAME[0]}: RUN_MODE=$RUN_MODE" - local action=$1 - if [[ ! 
-z $1 ]] - then - debug "${FUNCNAME[0]} args = $@" - shift - debug "${FUNCNAME[0]} args = $@" - PLUGIN_INSTALL_ARGS=("$@") - debug "${FUNCNAME[0]} PLUGIN_INSTALL_ARGS = $PLUGIN_INSTALL_ARGS" - if [[ $action == "install" ]] || [[ $action == "validate" ]]; then - copy_plugin_file_into_mount_point_if_necessary - fi - debug "${FUNCNAME[0]} action = $action" - debug "${FUNCNAME[0]} PLUGIN_INSTALL_ARGS = $PLUGIN_INSTALL_ARGS" - docker_metro_ae_exec plugin $action "${PLUGIN_INSTALL_ARGS[@]}" - exit 0 - fi - fi POSITIONAL=() case $1 in @@ -3136,59 +1107,27 @@ function tools_main { local command_to_run="" case ${MATCH_MENU[3]} in unzip-files) - if [[ $RUN_MODE == $CONTAINER_RUN_MODE ]]; then - docker_exec env $CONTAINER_HOME/nuage-metroae/nuage-unzip.sh ${EXTRA_ARGS[@]} - else - ./nuage-unzip.sh ${EXTRA_ARGS[@]} - fi + ./nuage-unzip.sh ${EXTRA_ARGS[@]} print_version_and_exit 0 ;; get_debug) - if [[ $RUN_MODE == $CONTAINER_RUN_MODE ]]; then - docker_exec env /usr/bin/python $CONTAINER_HOME/nuage-metroae/get_debug.py ${EXTRA_ARGS[@]} - else - /usr/bin/python get_debug.py ${EXTRA_ARGS[@]} - fi + /usr/bin/python3 get_debug.py ${EXTRA_ARGS[@]} print_version_and_exit 0 ;; convert_csv_to_deployment) - if [[ $RUN_MODE == $CONTAINER_RUN_MODE ]]; then - docker_exec env workdir $CONTAINER_HOME/nuage-metroae /usr/bin/python convert_csv_to_deployment.py ${EXTRA_ARGS[@]} - else - /usr/bin/python convert_csv_to_deployment.py ${EXTRA_ARGS[@]} - fi + /usr/bin/python3 convert_csv_to_deployment.py ${EXTRA_ARGS[@]} print_version_and_exit 0 ;; convert_excel_to_deployment) - if [[ $RUN_MODE == $CONTAINER_RUN_MODE ]]; then - docker_exec env workdir $CONTAINER_HOME/nuage-metroae /usr/bin/python convert_excel_to_deployment.py ${EXTRA_ARGS[@]} - else - /usr/bin/python convert_excel_to_deployment.py ${EXTRA_ARGS[@]} - fi + /usr/bin/python3 convert_excel_to_deployment.py ${EXTRA_ARGS[@]} print_version_and_exit 0 ;; generate-example-from-schema) - if [[ $RUN_MODE == $CONTAINER_RUN_MODE ]]; then - docker_exec env /usr/bin/python $CONTAINER_HOME/nuage-metroae/generate_example_from_schema.py ${EXTRA_ARGS[@]} - else - /usr/bin/python generate_example_from_schema.py ${EXTRA_ARGS[@]} - fi + /usr/bin/python3 generate_example_from_schema.py ${EXTRA_ARGS[@]} print_version_and_exit 0 ;; encrypt-credentials) - if [[ $RUN_MODE == $CONTAINER_RUN_MODE ]]; then - docker_exec env /usr/bin/python $CONTAINER_HOME/nuage-metroae/encrypt_credentials.py ${EXTRA_ARGS[@]} - else - /usr/bin/python encrypt_credentials.py ${EXTRA_ARGS[@]} - fi - print_version_and_exit 0 - ;; - download) - download_container ${EXTRA_ARGS[@]} - print_version_and_exit 0 - ;; - download-templates) - download_container_templates ${EXTRA_ARGS[@]} + /usr/bin/python3 encrypt_credentials.py ${EXTRA_ARGS[@]} print_version_and_exit 0 ;; *) @@ -3263,58 +1202,6 @@ function container_help { echo "" } -function config_help { - debug ${FUNCNAME[0]} - if [[ $RUN_MODE == $REPO_RUN_MODE ]]; then - echo "" - echo "It looks like you are trying to get help for MetroAE config. The metroae command" - echo "appears to have been executed from a cloned workspace of the nuage-metroae github" - echo "repo. MetroAE config is only operated from the MetroAE container. Please change" - echo "your working directory and try again." - echo "" - elif [[ -z $METROAE_SETUP_TYPE ]]; then - echo "" - echo "It looks like you are trying to get help for MetroAE config using the container." - echo "The MetroAE container setup type (config, deploy, or both) does not appear to be" - echo "configured. 
Please run 'metroae container setup' and try again." - echo "" - elif [[ $METROAE_SETUP_TYPE == $DEPLOY_SETUP_TYPE ]]; then - echo "" - echo "It looks like you are trying to get help for MetroAE config using the container." - echo "The MetroAE container appears to have been setup for DEPLOY only. Please change" - echo "your setup type using 'metroae container setup' and try again." - echo "" - else - help_header - echo "MetroAE config is a tool that you can use to apply and manage day-zero configurations" - echo "for Nuage Networks VSD. MetroAE config is only available via the MetroAE container." - echo "You can execute 'metroae config', 'metroae config -h' or 'metroae config --help' to" - echo "print this help and usage message." - echo "" - echo "The output, below, contains a list of 'positional arguments' that are supported by" - echo "metroae config. You can execute 'metroae config -h' for help" - echo "with each positional argument, e.g. 'metroae config create -h'." - echo "" - echo "MetroAE config usage:" - echo "" - print_menu_help "metroae%-50s %-1s\n" $1 - echo "" - - get_running_container_id - if [[ -z $RUNNING_CONTAINER_ID ]]; then - echo "" - echo "The MetroAE container is not running. Full help text can only be accessed" - echo "when the container is running. Please execute 'metroae container start'" - echo "and try again." - elif [[ $NUM_CLI_ARGS == 1 ]]; then - docker_exec_config -h - else - # Remove the string "config" - docker_exec_config "${CLI_ARGS[@]:1}" - fi - fi -} - function tools_help { debug ${FUNCNAME[0]} if [[ ($RUN_MODE == $CONTAINER_RUN_MODE && $METROAE_SETUP_TYPE != $CONFIG_SETUP_TYPE) || $RUN_MODE == $REPO_RUN_MODE ]]; then @@ -3363,7 +1250,7 @@ function plugin_help { function help_header { debug ${FUNCNAME[0]} - + echo "" echo "Nuage Networks Metro Automation Engine (MetroAE) Version:" $METROAE_VERSION echo "Run mode is" $RUN_MODE @@ -3375,7 +1262,7 @@ function help_header { else current_setup_type="Both Config and Deploy" fi - echo "Setup type is" $current_setup_type + echo "Setup type is" $current_setup_type if [[ $current_setup_type == "Both Config and Deploy" || $current_setup_type == "Config" ]]; then metroae_config_engine_version=`docker_exec_config version` echo "Nuage Networks" $metroae_config_engine_version @@ -3391,10 +1278,6 @@ function help_header { function switch_help { debug ${FUNCNAME[0]} case $1 in - ,config*) - config_help "$1" - exit 0 - ;; ,container*) container_help "$1" exit 0 @@ -3526,43 +1409,14 @@ case ${MATCH_MENU[2]} in plugin_main ${MATCH_MENU[3]} "${EXTRA_ARGS[@]}" print_version_and_exit 0 ;; - config) - debug "main: config" - if [[ ! $RUN_MODE == $CONTAINER_RUN_MODE ]]; then - echo "" - write_to_screen_and_script_log "'config' command is not supported without the MetroAE container." - echo "You can setup the container with 'metroae container setup'. Quitting." - echo "" - echo "" - main_help - elif [[ METROAE_SETUP_TYPE == $DEPLOY_SETUP_TYPE ]]; then - echo "" - write_to_screen_and_script_log "'config' command is not supported in a deploy-only setup." - echo "Execute 'metroae container setup' to change your setup type." 
- echo "" - echo "" - main_help - else - config_main ${MATCH_MENU[3]} "${EXTRA_ARGS[@]}" - fi - exit 0 - ;; playbook) debug "main: playbook" - if [[ $RUN_MODE == $CONTAINER_RUN_MODE ]]; then - container_main ${MATCH_MENU[3]} "${EXTRA_ARGS[@]}" - else - deployment_main ${MATCH_MENU[3]} "${EXTRA_ARGS[@]}" - fi + deployment_main ${MATCH_MENU[3]} "${EXTRA_ARGS[@]}" print_version_and_exit 0 ;; wizard) debug "main: wizard" - if [[ $RUN_MODE == $CONTAINER_RUN_MODE ]]; then - docker_exec_interactive /usr/bin/python $CONTAINER_HOME/nuage-metroae/run_wizard.py "${EXTRA_ARGS[@]}" - else - ./run_wizard.py "${EXTRA_ARGS[@]}" - fi + ./run_wizard.py "${EXTRA_ARGS[@]}" print_version_and_exit 0 ;; setup) @@ -3577,16 +1431,12 @@ case ${MATCH_MENU[2]} in ;; container) debug "main: container" - container_main ${MATCH_MENU[3]} "${EXTRA_ARGS[@]}" + deployment_main ${MATCH_MENU[3]} "${EXTRA_ARGS[@]}" print_version_and_exit 0 ;; *) debug "main: other" - if [[ $RUN_MODE == $CONTAINER_RUN_MODE ]]; then - container_main "$@" - else - deployment_main "$@" - fi + deployment_main "$@" print_version_and_exit 0 ;; esac diff --git a/metroae-container b/metroae-container new file mode 100755 index 0000000000..c111a95f16 --- /dev/null +++ b/metroae-container @@ -0,0 +1,78 @@ +#!/bin/bash +# WARNING: Any paths defined in the deployment files must be relative to this folder. No absolute paths, and no paths outside of this folder + +TAG=$(git rev-parse --short HEAD) +TI="" +ENCRYPTED_TOKEN=\$ANSIBLE_VAULT +CURRENT_DIR=`pwd` +DEPLOYMENTS_BASE_DIR=$CURRENT_DIR/deployments +DEPLOYMENT_DIR=$DEPLOYMENTS_BASE_DIR/default +[ -t 0 ] && TI="-ti" + +function check_password_needed { + deployment_dir="$1" + + encrypted_files=$(grep -Ril $ENCRYPTED_TOKEN $deployment_dir) + + if [[ -z $METROAE_PASSWORD ]]; then + if [[ -z "$encrypted_files" ]]; then + SKIP_PASSWORD=1 + else + SKIP_PASSWORD=0 + fi + else + SKIP_PASSWORD=1 + fi +} + +function ask_password { + if [[ $SKIP_PASSWORD -ne 1 ]]; then + write_to_screen_and_script_log "The deployment contains encrypted content which requires a password to access." + echo "Enter the password, below, or add the environment variable METROAE_PASSWORD and retry." + echo "" + read -s -p "Enter password: " METROAE_PASSWORD + export METROAE_PASSWORD + fi +} + +docker image inspect metroaecontainer:$TAG > /dev/null 2>&1 +if [[ "$?" != "0" ]] +then + if [[ -n "$TI" ]] + then + echo "The docker image for this repo head does not exist. We will build it. Press any key to continue" + read -n 1 + fi + set -e + ./docker/build-container raw + set +e +fi + +SSHMOUNT="" +[[ -e .ssh ]] && SSHMOUNT="-v `pwd`/.ssh:/root/.ssh" + +check_password_needed "$DEPLOYMENT_DIR" +ask_password + +VAULTPASS="" +if [[ ! 
-z $METROAE_PASSWORD ]]; then + VAULTPASS=" -e METROAE_PASSWORD=$METROAE_PASSWORD" +fi + + +# Run the command inside docker +args=( "$@" ) + +# we use host networking because we want the host's dns server and host file +docker run \ + --network=host \ + --rm \ + $TI \ + $SSHMOUNT \ + $VAULTPASS \ + --env-file <(env | grep -Ev '^(PWD|PATH|HOME|USER|SHELL|MAIL|SSH_CONNECTION|LOGNAME|OLDPWD|LESSOPEN|_XDG_RUNTIME_DIR|HISTCONTROL)') \ + -v \ + "`pwd`:/metroae" \ + -e METROAE_DIR=`pwd` \ + metroaecontainer:$TAG \ + ./metroae "${args[@]}" diff --git a/nuage-unzip.sh b/nuage-unzip.sh index 2881940848..9a182953d6 100755 --- a/nuage-unzip.sh +++ b/nuage-unzip.sh @@ -65,11 +65,11 @@ if [[ $# -lt 2 ]]; then fi # argument -ZIPPED_DIR=$1 +ZIPPED_DIR=$CURRENT_DIR/$1 shift # argument -UNZIP_DIR=$1 +UNZIP_DIR=$CURRENT_DIR/$1 shift $(which ansible-playbook) $PLAYBOOK_DIR/nuage_unzip.yml -e nuage_zipped_files_dir=$ZIPPED_DIR -e nuage_unzipped_files_dir=$UNZIP_DIR "${EXTRA_VARS[@]}" diff --git a/pip_requirements.txt b/pip_requirements.txt index 0792af6409..03d6ad7572 100644 --- a/pip_requirements.txt +++ b/pip_requirements.txt @@ -7,7 +7,7 @@ setuptools-scm==5.0.2 setuptools-rust==0.11.4 cryptography==3.3.2 jinja2==2.11.3 -ansible==2.9.7 +ansible==3.4.0 ipaddr==2.2.0 jsonschema==2.6.0 jmespath==0.9.0 diff --git a/run_wizard.py b/run_wizard.py index 485c2cbab7..2c0da39a4e 100755 --- a/run_wizard.py +++ b/run_wizard.py @@ -8,6 +8,8 @@ import subprocess import sys import traceback +from builtins import input +from io import open METROAE_CONTACT = "devops@nuagenetworks.net" @@ -33,10 +35,11 @@ The following steps will be performed: -- step: Verify proper MetroAE installation +- step: Verify proper MetroAE installation (DEPRECATED) description: | This step will verify that the MetroAE tool has been properly installed - with all required libraries. + with all required libraries. This step is now deprecated as users are encouraged + to use MetroAE with the new container, which ensures a proper MetroAE installation. verify_install: missing_msg: | @@ -81,7 +84,7 @@ - step: Auto-discover existing components description: | - This optional step can beb used to discover components that are already + This optional step can be used to discover components that are already running in your network. By specifying the connection information of the hypervisors, any VMs running on these systems will be analyzed. You will be given the option to identify VMs as VSP components and to have @@ -192,6 +195,12 @@ item_name: NUH upgrade_vmname: false +- step: NUH External Interfaces deployment file + description: | + This step will create or modify the nuh_external_interfaces.yml file in your deployment. + This file provides parameters for the external interfaces on the NUHs in your deployment. + create_interfaces: {} + - step: VNSUtil deployment file description: | This step will create or modify the vnsutils.yml file in your deployment. 
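Note: the new "NUH External Interfaces deployment file" wizard step above writes nuh_external_interfaces.yml using the same fields that _setup_external_interfaces prompts for later in this patch (external_ip, external_ip_prefix, external_gateway, vlan, external_bridge, name, external_fqdn). A generated file might look roughly like the sketch below; the interface name, addresses, VLAN and FQDN are illustrative placeholders only, not actual wizard output.

    # hypothetical nuh_external_interfaces.yml entry; all values are examples
    - name: eth2
      external_ip: 192.0.2.10
      external_ip_prefix: 24
      external_gateway: 192.0.2.1
      vlan: 100
      external_bridge: br-ext
      external_fqdn: nuh1-ext.example.com
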
@@ -286,8 +295,10 @@ def __init__(self, script=WIZARD_SCRIPT): self.current_action_idx = 0 self.progress_display_count = 0 self.progress_display_rate = 1 + self.non_interactive = False if "NON_INTERACTIVE" in os.environ: + self.non_interactive = True self.args = list(sys.argv) self.args.pop(0) else: @@ -308,30 +319,30 @@ def __call__(self): def message(self, action, data): raw_msg = self._get_field(data, "text") format_msg = raw_msg.format(contact=METROAE_CONTACT) - self._print(format_msg) + print(format_msg) def list_steps(self, action, data): for step in self.script: if "step" in step: - self._print(" - " + step["step"]) + print(" - " + step["step"]) def verify_install(self, action, data): - self._print(u"\nVerifying MetroAE installation") + print(u"\nVerifying MetroAE installation") if self.in_container: - self._print("\nWizard is being run inside a Docker container. " - "No need to verify installation. Skipping step...") + print("\nWizard is being run inside a Docker container. " + "No need to verify installation. Skipping step...") return if not os.path.isfile("/etc/os-release"): self._record_problem("wrong_os", "Unsupported operating system") - self._print(self._get_field(data, "wrong_os_msg")) + print(self._get_field(data, "wrong_os_msg")) choice = self._input("Do you wish to continue anyway?", 0, ["(Y)es", "(n)o"]) if choice == 1: - self._print("Quitting wizard...") + print("Quitting wizard...") exit(0) missing = self._verify_pip() @@ -341,14 +352,14 @@ def verify_install(self, action, data): if len(missing) == 0: self._unrecord_problem("install_libraries") - self._print(u"\nMetroAE Installation OK!") + print(u"\nMetroAE Installation OK!") else: self._record_problem( "install_libraries", u"Your MetroAE installation is missing libraries") - self._print(u"\nYour MetroAE installation is missing libraries:\n") - self._print("\n".join(missing)) - self._print(self._get_field(data, "missing_msg")) + print(u"\nYour MetroAE installation is missing libraries:\n") + print("\n".join(missing)) + print(self._get_field(data, "missing_msg")) choice = self._input("Do you want to run setup now?", 0, ["(Y)es", "(n)o"]) @@ -359,13 +370,13 @@ def unzip_images(self, action, data): valid = False while not valid: if self.in_container: - self._print(self._get_field(data, "container_msg")) + print(self._get_field(data, "container_msg")) zip_dir = self._input("Please enter the directory relative to " "the metroae_data mount point that " "contains your zip files", "") if zip_dir.startswith("/"): - self._print("\nDirectory must be a relative path.") + print("\nDirectory must be a relative path.") continue full_zip_dir = os.path.join("/metroae_data", zip_dir) @@ -380,11 +391,11 @@ def unzip_images(self, action, data): "Directory not found. 
Would you like to skip unzipping", 0, ["(Y)es", "(n)o"]) if choice != 1: - self._print("Skipping unzip step...") + print("Skipping unzip step...") return elif not os.path.isdir(full_zip_dir): - self._print("%s is not a directory, please enter the directory" - " containing the zipped files" % zip_dir) + print("%s is not a directory, please enter the directory" + " containing the zipped files" % zip_dir) else: valid = True @@ -395,7 +406,7 @@ def unzip_images(self, action, data): "to the metroae_data mount point to " "unzip to") if unzip_dir.startswith("/"): - self._print("\nDirectory must be a relative path.") + print("\nDirectory must be a relative path.") else: valid = True @@ -412,7 +423,7 @@ def unzip_images(self, action, data): if choice == 0: self._run_unzip(full_zip_dir, full_unzip_dir) else: - self._print("Skipping unzip step...") + print("Skipping unzip step...") def create_deployment(self, action, data): valid = False @@ -421,11 +432,11 @@ def create_deployment(self, action, data): "Please enter the name of the deployment (will be the " "directory name)", "default") if "/" in deployment_name: - self._print("\nA deployment name cannot contain a slash " - "because it will be a directory name") + print("\nA deployment name cannot contain a slash " + "because it will be a directory name") elif " " in deployment_name: - self._print("\nA deployment name can contain a space, but it " - "will always have to be specified with quotes") + print("\nA deployment name can contain a space, but it " + "will always have to be specified with quotes") choice = self._input("Do you want use it?", 0, ["(Y)es", "(n)o"]) if choice != 1: @@ -437,18 +448,18 @@ def create_deployment(self, action, data): deployment_dir = os.path.join(self.base_deployment_path, deployment_name) if os.path.isdir(deployment_dir): - self._print("\nThe deployment directory was found") + print("\nThe deployment directory was found") found = True else: - self._print("") + print("") choice = self._input('Create deployment directory: "%s"?' % deployment_name, 0, ["(Y)es", "(n)o"]) if choice == 1: - self._print("Skipping deployment creation.") + print("Skipping deployment creation.") return - self._print("Deployment directory: " + deployment_dir) + print("Deployment directory: " + deployment_dir) self.state["deployment_name"] = deployment_name self.state["deployment_dir"] = deployment_dir @@ -494,7 +505,7 @@ def create_common(self, action, data): deployment = self._read_deployment_file(deployment_file, is_list=False) else: - self._print(deployment_file + " not found. It will be created.") + print(deployment_file + " not found. It will be created.") deployment = dict() self._setup_unzip_dir(deployment) @@ -524,13 +535,13 @@ def create_upgrade(self, action, data): if self._check_skip_for_csv("upgrade"): return - self._print("") + print("") choice = self._input("Will you be performing an upgrade?", 0, ["(Y)es", "(N)o"]) if choice == 1: if "upgrade" in self.state: del self.state["upgrade"] - self._print("Skipping step...") + print("Skipping step...") return self.state["upgrade"] = True @@ -544,7 +555,7 @@ def create_upgrade(self, action, data): deployment = self._read_deployment_file(deployment_file, is_list=False) else: - self._print(deployment_file + " not found. It will be created.") + print(deployment_file + " not found. 
It will be created.") deployment = dict() self._setup_upgrade(deployment, data) @@ -571,22 +582,22 @@ def create_component(self, action, data): deployment = self._read_deployment_file(deployment_file, is_list=True) else: - self._print(deployment_file + " not found. It will be created.") + print(deployment_file + " not found. It will be created.") deployment = list() self._setup_target_server_type() - self._print("\nPlease enter your %s deployment type\n" % item_name) + print("\nPlease enter your %s deployment type\n" % item_name) amount = self._get_number_components(deployment, data) deployment = deployment[0:amount] if not is_nsgv: - self._print("\nIf DNS is configured properly, IP addresses can be " - "auto-discovered.") + print("\nIf DNS is configured properly, IP addresses can be " + "auto-discovered.") for i in range(amount): - self._print("\n%s %d\n" % (item_name, i + 1)) + print("\n%s %d\n" % (item_name, i + 1)) if len(deployment) == i: deployment.append(dict()) @@ -608,7 +619,7 @@ def create_component(self, action, data): self._setup_vmname(deployment, i, hostname, with_upgrade) if not is_nsgv: - self._setup_ip_addresses(deployment, i, hostname, is_vsc) + self._setup_ip_addresses(deployment, i, hostname, is_vsc, is_nuh) else: component = deployment[i] self._setup_target_server(component) @@ -624,16 +635,46 @@ def create_component(self, action, data): else: self._generate_deployment_file(schema, deployment_file, deployment) + def create_interfaces(self, action, data): + if self._check_skip_for_csv("nuh_external_interfaces"): + return + + print("") + deployment_dir = self._get_deployment_dir() + if deployment_dir is None: + return + + deployment_file = os.path.join(deployment_dir, "nuh_external_interfaces.yml") + if os.path.isfile(deployment_file): + deployment = self._read_deployment_file(deployment_file, + is_list=False) + else: + print(deployment_file + " not found. It will be created.") + deployment = list() + + default = 0 + ext_interface_amount = self._input("Number of external interfaces to setup", default, datatype="int") + + for i in range(ext_interface_amount): + print("\n%s %d\n" % ("NUH External Interface", i + 1)) + deployment.append(dict()) + self._setup_external_interfaces(deployment, data, i) + if ext_interface_amount == 0: + if os.path.isfile(deployment_file): + os.remove(deployment_file) + else: + self._generate_deployment_file("nuh_external_interfaces", deployment_file, deployment) + def create_bootstrap(self, action, data): if self._check_skip_for_csv("nsgv_bootstrap"): return - self._print("") + print("") if "metro_bootstrap" not in self.state: choice = self._input("Will you be using metro to bootstrap NSGvs?", 1, ["(y)es", "(N)o"]) if choice == 1: - self._print("Skipping step...") + print("Skipping step...") return deployment_dir = self._get_deployment_dir() @@ -645,7 +686,7 @@ def create_bootstrap(self, action, data): deployment = self._read_deployment_file(deployment_file, is_list=False) else: - self._print(deployment_file + " not found. It will be created.") + print(deployment_file + " not found. It will be created.") deployment = dict() self._setup_bootstrap(deployment, data) @@ -677,17 +718,17 @@ def setup_target_servers(self, action, data): "Enter the username for the target servers (hypervisors)", default) self.state["target_server_username"] = username - self._print("\nWe will now configure SSH access to the target servers" - " (hypervisors). 
This will likely require the SSH" - " password for each system would need to be entered.") + print("\nWe will now configure SSH access to the target servers" + " (hypervisors). This will likely require the SSH" + " password for each system would need to be entered.") if self.in_container: - self._print(self._get_field(data, "container_msg")) + print(self._get_field(data, "container_msg")) choice = self._input("Setup SSH now?", 0, ["(Y)es", "(n)o"]) if choice == 1: - self._print("Skipping step...") + print("Skipping step...") return for server in servers: @@ -713,17 +754,17 @@ def setup_target_servers(self, action, data): def complete_wizard(self, action, data): if self._has_problems(): - self._print(self._get_field(data, "problem_msg")) + print(self._get_field(data, "problem_msg")) self._list_problems() - self._print(self._get_field(data, "finish_msg")) + print(self._get_field(data, "finish_msg")) choice = self._input(None, None, ["(q)uit", "(b)ack"]) if choice == 1: self.current_action_idx -= 2 return else: - self._print(self._get_field(data, "complete_msg")) + print(self._get_field(data, "complete_msg")) metro = "./metroae" if self.in_container: metro = "metroae" @@ -733,12 +774,12 @@ def complete_wizard(self, action, data): self.state["deployment_name"] != "default"): deployment = self.state["deployment_name"] if "upgrade" in self.state: - self._print( + print( self._get_field(data, "upgrade_msg").format( metro=metro, deployment=deployment)) else: - self._print( + print( self._get_field(data, "install_msg").format( metro=metro, deployment=deployment)) @@ -748,9 +789,6 @@ def complete_wizard(self, action, data): # Private class internals # - def _print(self, msg): - print msg.encode("utf-8") - def _print_progress(self): if self.progress_display_count % self.progress_display_rate == 0: sys.stdout.write(".") @@ -761,15 +799,15 @@ def _input(self, prompt=None, default=None, choices=None, datatype=""): input_prompt = self._get_input_prompt(prompt, default, choices) value = None - if "NON_INTERACTIVE" in os.environ: + if self.non_interactive: if len(self.args) < 1: raise Exception( "Out of args for non-interactive input for %s" % input_prompt) user_value = self.args.pop(0) if prompt is not None: - self._print(prompt) - self._print("From args: " + user_value) + print(prompt) + print("From args: " + user_value) value = self._validate_input(user_value, default, choices, datatype) if value is None: @@ -781,7 +819,7 @@ def _input(self, prompt=None, default=None, choices=None, datatype=""): if datatype == "password": user_value = getpass.getpass(input_prompt) else: - user_value = raw_input(input_prompt) + user_value = input(input_prompt) value = self._validate_input(user_value, default, choices, datatype) @@ -822,32 +860,32 @@ def _validate_input(self, user_value, default=None, choices=None, if default is not None: return default else: - self._print("\nRequired field, please enter a value\n") + print("\nRequired field, please enter a value\n") return None if choices is not None: value = self._match_choice(user_value, choices) if value is None: - self._print( + print( "\nValue is not a valid choice, please reenter\n") elif datatype == "ipaddr": value = self._validate_ipaddr(user_value) if value is None: - self._print("\nValue is not a valid ipaddress\n") + print("\nValue is not a valid ipaddress\n") elif datatype == "int": try: value = int(user_value) except ValueError: - self._print("\nValue is not a valid integer\n") + print("\nValue is not a valid integer\n") return None elif datatype == 
"hostname": value = self._validate_hostname(user_value) if value is None: - self._print("\nValue is not a valid hostname\n") + print("\nValue is not a valid hostname\n") elif datatype == "version": allowed = re.compile("^[\d]+[.][\d]+[.]([A-Z\d]+)$", re.IGNORECASE) if not allowed.match(user_value): - self._print("\nValue is not a valid version\n") + print("\nValue is not a valid version\n") return None value = user_value else: @@ -864,9 +902,9 @@ def _validate_ipaddr(self, user_value): except netaddr.core.AddrFormatError: return None except ImportError: - self._print("\nWarning: Python netaddr library not installed. " - "Cannot validate IP address. This library is also " - "required for MetroAE to run properly.") + print("\nWarning: Python netaddr library not installed. " + "Cannot validate IP address. This library is also " + "required for MetroAE to run properly.") return user_value def _validate_hostname(self, hostname): @@ -915,20 +953,20 @@ def _import_yaml(self): self._install_yaml() def _install_yaml(self): - self._print("This wizard requires PyYAML library to be installed." - " Running this command requires sudo access. You may be " - "asked for the sudo password.\n") + print("This wizard requires PyYAML library to be installed." + " Running this command requires sudo access. You may be " + "asked for the sudo password.\n") choice = self._input("Install it now?", 0, ["(Y)es", "(n)o"]) if choice == 1: - self._print("Please install PyYAML and run the wizard again.") + print("Please install PyYAML and run the wizard again.") exit(1) rc, output_lines = self._run_shell("sudo pip install " + YAML_LIBRARY) if rc != 0: - self._print("\n".join(output_lines)) - self._print("Could not install PyYAML, exit code: %d" % rc) - self._print("Please install PyYAML and run the wizard again.") + print("\n".join(output_lines)) + print("Could not install PyYAML, exit code: %d" % rc) + print("Please install PyYAML and run the wizard again.") exit(1) def _get_value(self, deployment, field): @@ -990,15 +1028,15 @@ def _run_script(self): self.current_action_idx += 1 continue if choice == 3: - self._print("Exiting MetroAE wizard. All progress made " - "has been saved.") + print("Exiting MetroAE wizard. All progress made " + "has been saved.") exit(0) try: self._run_action(current_action) except KeyboardInterrupt: - self._print("\n\nInterrupt signal received. All progress made " - "before current step has been saved.\n") + print("\n\nInterrupt signal received. 
All progress made " + "before current step has been saved.\n") choice = self._input( "Would you like to quit?", 1, ["(y)es", "(N)o"]) @@ -1009,13 +1047,13 @@ def _run_script(self): self.current_action_idx += 1 def _display_step(self, action): - self._print("") + print("") if "step" in action: - self._print("**** " + action["step"] + " ****\n") + print("**** " + action["step"] + " ****\n") if "description" in action: - self._print(action["description"]) + print(action["description"]) def _run_action(self, action): action_name = self._get_action_name(action) @@ -1024,7 +1062,7 @@ def _run_action(self, action): action_func(action, data) def _get_action_name(self, action): - keys = action.keys() + keys = list(action.keys()) for standard_field in STANDARD_FIELDS: if standard_field in keys: keys.remove(standard_field) @@ -1049,7 +1087,8 @@ def _run_shell(self, cmd_str): shell=True, cwd=self.metro_path, stdout=subprocess.PIPE, - stderr=subprocess.STDOUT) + stderr=subprocess.STDOUT, + encoding='utf8') output_lines = list() rc = self._capture_output(process, output_lines) @@ -1063,7 +1102,7 @@ def _capture_output(self, process, output_lines): line = process.stdout.readline().rstrip("\n") output_lines.append(line) if "DEBUG_WIZARD" in os.environ: - self._print(line) + print(line) else: self._print_progress() else: @@ -1072,7 +1111,7 @@ def _capture_output(self, process, output_lines): for line in lines.split("\n"): output_lines.append(line) if "DEBUG_WIZARD" in os.environ: - self._print(line) + print(line) else: self._print_progress() return retcode @@ -1096,22 +1135,22 @@ def _has_problems(self): def _list_problems(self): if self._has_problems(): for descr in self.state["problems"].values(): - self._print(" - " + descr) + print(" - " + descr) def _verify_pip(self): try: rc, output_lines = self._run_shell("pip freeze") if rc != 0: - self._print("\n".join(output_lines)) + print("\n".join(output_lines)) raise Exception("pip freeze exit-code: %d" % rc) with open("pip_requirements.txt", "r") as f: required_libraries = f.read().split("\n") except Exception as e: - self._print("\nAn error occurred while reading pip libraries: " + - str(e)) - self._print("Please contact: " + METROAE_CONTACT) + print("\nAn error occurred while reading pip libraries: " + + str(e)) + print("Please contact: " + METROAE_CONTACT) return ["Could not deterimine pip libraries"] return self._compare_libraries(required_libraries, output_lines) @@ -1121,36 +1160,36 @@ def _verify_yum(self): self.progress_display_rate = 30 rc, output_lines = self._run_shell("yum list") if rc != 0: - self._print("\n".join(output_lines)) + print("\n".join(output_lines)) raise Exception("yum list exit-code: %d" % rc) with open("yum_requirements.txt", "r") as f: required_libraries = f.read().split("\n") except Exception as e: - self._print("\nAn error occurred while reading yum libraries: " + - str(e)) - self._print("Please contact: " + METROAE_CONTACT) + print("\nAn error occurred while reading yum libraries: " + + str(e)) + print("Please contact: " + METROAE_CONTACT) return ["Could not deterimine yum libraries"] return self._compare_libraries(required_libraries, output_lines) def _run_setup(self): cmd = "sudo ./setup.sh" - self._print("Command: " + cmd) - self._print("Running setup (may ask for sudo password)") + print("Command: " + cmd) + print("Running setup (may ask for sudo password)") try: rc, output_lines = self._run_shell(cmd) if rc != 0: - self._print("\n".join(output_lines)) + print("\n".join(output_lines)) raise Exception("setup.sh exit-code: 
%d" % rc) self._unrecord_problem("install_libraries") - self._print(u"\nMetroAE setup completed successfully!") + print(u"\nMetroAE setup completed successfully!") except Exception as e: - self._print("\nAn error occurred while running setup: " + - str(e)) - self._print("Please contact: " + METROAE_CONTACT) + print("\nAn error occurred while running setup: " + + str(e)) + print("Please contact: " + METROAE_CONTACT) def _compare_libraries(self, required_libraries, installed_libraries): missing = list() @@ -1176,47 +1215,47 @@ def _compare_libraries(self, required_libraries, installed_libraries): def _run_unzip(self, zip_dir, unzip_dir): cmd = "./nuage-unzip.sh %s %s" % (zip_dir, unzip_dir) - self._print("Command: " + cmd) - self._print("Unzipping files from %s to %s" % (zip_dir, unzip_dir)) + print("Command: " + cmd) + print("Unzipping files from %s to %s" % (zip_dir, unzip_dir)) for f in glob.glob(os.path.join(zip_dir, "*.gz")): - self._print(f) + print(f) try: rc, output_lines = self._run_shell(cmd) if rc != 0: self._record_problem( "unzip_files", "Unable to unzip files") - self._print("\n".join(output_lines)) + print("\n".join(output_lines)) raise Exception("nuage-unzip.sh exit-code: %d" % rc) self._unrecord_problem("unzip_files") - self._print("\nFiles unzipped successfully!") + print("\nFiles unzipped successfully!") except Exception as e: self._record_problem( "unzip_files", "Error occurred while unzipping files") - self._print("\nAn error occurred while unzipping files: " + - str(e)) - self._print("Please contact: " + METROAE_CONTACT) + print("\nAn error occurred while unzipping files: " + + str(e)) + print("Please contact: " + METROAE_CONTACT) def _get_deployment_dir(self): if "deployment_dir" not in self.state: - self._print("Creating a deployment file requires a deployment to" - " be specified. This step will be skipped if not" - " provided.") + print("Creating a deployment file requires a deployment to" + " be specified. This step will be skipped if not" + " provided.") choice = self._input("Do you want to specify a deployment now?", 0, ["(Y)es", "(n)o"]) if choice != 1: self.create_deployment(None, None) if "deployment_dir" not in self.state: - self._print("No deployment specified, skipping step") + print("No deployment specified, skipping step") return None return self.state["deployment_dir"] def _read_deployment_file(self, deployment_file, is_list): - with open(deployment_file, "r") as f: + with open(deployment_file, "r", encoding='utf-8') as f: import yaml - deployment = yaml.safe_load(f.read().decode("utf-8")) + deployment = yaml.safe_load(f.read()) if is_list and type(deployment) != list: deployment = list() if not is_list and type(deployment) != dict: @@ -1245,14 +1284,14 @@ def _discover_components(self): valid_ssh = self._setup_discovery_ssh(username, hostname) if not valid_ssh: - self._print("Auto-discovery requires password-less SSH access") + print("Auto-discovery requires password-less SSH access") return if self._verify_kvm_hypervisor(username, hostname): self._discover_kvm_components(username, hostname) else: - self._print("Unsupported hypervisor type for auto-discovery " - "(only KVM and vCenter supported)") + print("Unsupported hypervisor type for auto-discovery " + "(only KVM and vCenter supported)") return def _setup_discovery_ssh(self, username, hostname): @@ -1268,9 +1307,9 @@ def _setup_discovery_ssh(self, username, hostname): if choice != 0: return False - self._print("\nWe will now configure SSH access to the target " - "server (hypervisors). 
This will likely require the " - "SSH password to be entered.\n") + print("\nWe will now configure SSH access to the target " + "server (hypervisors). This will likely require the " + "SSH password to be entered.\n") valid_ssh = self._setup_ssh(username, hostname) @@ -1280,8 +1319,8 @@ def _verify_vcenter_hypervisor(self, hostname): try: import requests except ImportError: - self._print("Could not import libraries required to communicate " - "with vCenter. Was setup completed successfully?") + print("Could not import libraries required to communicate " + "with vCenter. Was setup completed successfully?") return False try: @@ -1306,7 +1345,7 @@ def _verify_kvm_hypervisor(self, username, hostname): self.state["target_server_type"] = "kvm" return True except Exception: - self._print("Could not connect to hypervisor") + print("Could not connect to hypervisor") pass return False @@ -1317,7 +1356,7 @@ def _discover_vcenter_components(self, username, password, hostname): if vms is None: return False - self._print("\nFound VMs: " + ", ".join(sorted(vms.keys()))) + print("\nFound VMs: " + ", ".join(sorted(vms.keys()))) for vm_name, vm in vms.iteritems(): vm_info = self._discover_vcenter_vm_info(vm, hostname) @@ -1326,13 +1365,13 @@ def _discover_vcenter_components(self, username, password, hostname): try: self._add_discovered_vm(vm_info, choice) except Exception: - self._print("\nCould not add VM to deployment.") + print("\nCould not add VM to deployment.") return True except Exception as e: - self._print("\nAn error occurred while attempting to auto-discover" - " components: " + str(e)) - self._print("Please contact: " + METROAE_CONTACT) + print("\nAn error occurred while attempting to auto-discover" + " components: " + str(e)) + print("Please contact: " + METROAE_CONTACT) return False @@ -1343,8 +1382,8 @@ def _get_vcenter_vms(self, username, password, hostname): import ssl import atexit except ImportError: - self._print("Could not import libraries required to communicate " - "with vCenter. Was setup completed successfully?") + print("Could not import libraries required to communicate " + "with vCenter. 
Was setup completed successfully?") return None try: @@ -1356,7 +1395,7 @@ def _get_vcenter_vms(self, username, password, hostname): atexit.register(Disconnect, si) except Exception: - self._print("Could not connect to vCenter") + print("Could not connect to vCenter") return None return self._get_vcenter_vms_dict(si.content, [vim.VirtualMachine]) @@ -1454,24 +1493,24 @@ def _discover_vcenter_interfaces(self, interfaces, vm): pass def _verify_vcenter_discovery(self, vm_info): - self._print("\n\nDiscovered VM") - self._print("-------------") - self._print("VM name: " + vm_info["vm_name"]) - self._print("Product: " + vm_info["product"]) - self._print("Datacenter: " + vm_info["datacenter"]) - self._print("Cluster: " + vm_info["cluster"]) - self._print("Datastore: " + vm_info["datastore"]) - self._print("Interfaces:") + print("\n\nDiscovered VM") + print("-------------") + print("VM name: " + vm_info["vm_name"]) + print("Product: " + vm_info["product"]) + print("Datacenter: " + vm_info["datacenter"]) + print("Cluster: " + vm_info["cluster"]) + print("Datastore: " + vm_info["datastore"]) + print("Interfaces:") for interface in vm_info["interfaces"]: - self._print(" - address: " + interface["address"]) - self._print(" hostname: " + interface["hostname"]) - self._print(" gateway: " + interface["gateway"]) - self._print(" prefix length: " + str(interface["prefix"])) - - self._print("\nThis VM can be added to your deployment. There will be" - " an opportunity to modify it in later steps of the wizard" - " or those steps can be skipped if the discovered VMs are " - "correct.\n") + print(" - address: " + interface["address"]) + print(" hostname: " + interface["hostname"]) + print(" gateway: " + interface["gateway"]) + print(" prefix length: " + str(interface["prefix"])) + + print("\nThis VM can be added to your deployment. There will be" + " an opportunity to modify it in later steps of the wizard" + " or those steps can be skipped if the discovered VMs are " + "correct.\n") return self._vcenter_component_choice(vm_info["product"]) def _vcenter_component_choice(self, product): @@ -1498,12 +1537,12 @@ def _discover_kvm_components(self, username, hostname): try: self._add_discovered_vm(vm_info, choice) except Exception: - self._print("\nCould not add VM to deployment.") + print("\nCould not add VM to deployment.") return True except Exception as e: - self._print("\nAn error occurred while attempting to auto-discover" - " components: " + str(e)) - self._print("Please contact: " + METROAE_CONTACT) + print("\nAn error occurred while attempting to auto-discover" + " components: " + str(e)) + print("Please contact: " + METROAE_CONTACT) return False @@ -1512,10 +1551,10 @@ def _discover_kvm_vm_names(self, username, hostname): "sudo virsh list --name") if rc == 0: names = sorted([x for x in output_lines if x.strip() != ""]) - self._print("\nFound VMs: " + ", ".join(names)) + print("\nFound VMs: " + ", ".join(names)) return names else: - self._print("\nError while discovering VMs on %s\n%s" % ( + print("\nError while discovering VMs on %s\n%s" % ( hostname, "\n".join(output_lines))) return list() @@ -1544,7 +1583,7 @@ def _parse_kvm_info(self, kvm_xml): try: import xml.etree.ElementTree as ET except ImportError: - self._print( + print( "Could not import the libraries to parse KVM XML. Discovery " "not possible.") return None @@ -1568,7 +1607,7 @@ def _parse_kvm_info(self, kvm_xml): return vm_info except Exception: - self._print( + print( "Could not parse KVM XML. 
Skipping VM.") return None @@ -1610,22 +1649,22 @@ def _discover_kvm_interface(self, username, hostname, interface): pass def _verify_kvm_discovery(self, vm_info): - self._print("\n\nDiscovered VM") - self._print("-------------") - self._print("VM name: " + vm_info["vm_name"]) - self._print("Image: " + vm_info["image_name"]) - self._print("Interfaces:") + print("\n\nDiscovered VM") + print("-------------") + print("VM name: " + vm_info["vm_name"]) + print("Image: " + vm_info["image_name"]) + print("Interfaces:") for interface in vm_info["interfaces"]: - self._print(" - bridge: " + interface["bridge"]) - self._print(" address: " + interface["address"]) - self._print(" hostname: " + interface["hostname"]) - self._print(" gateway: " + interface["gateway"]) - self._print(" prefix length: " + str(interface["prefix"])) - - self._print("\nThis VM can be added to your deployment. There will be" - " an opportunity to modify it in later steps of the wizard" - " or those steps can be skipped if the discovered VMs are " - "correct.\n") + print(" - bridge: " + interface["bridge"]) + print(" address: " + interface["address"]) + print(" hostname: " + interface["hostname"]) + print(" gateway: " + interface["gateway"]) + print(" prefix length: " + str(interface["prefix"])) + + print("\nThis VM can be added to your deployment. There will be" + " an opportunity to modify it in later steps of the wizard" + " or those steps can be skipped if the discovered VMs are " + "correct.\n") return self._kvm_component_choice(vm_info["image_name"]) def _kvm_component_choice(self, image_name): @@ -1687,7 +1726,7 @@ def _add_discovered_vm(self, vm_info, choice): self._add_discovered_vm_nsgv(vm_info, component) else: # UNKNOWN - self._print("\nError: Unknown choice for discovered VM\n") + print("\nError: Unknown choice for discovered VM\n") return deployment_dir = self._get_deployment_dir() @@ -1772,7 +1811,7 @@ def _import_csv(self): from convert_csv_to_deployment import CsvDeploymentConverter converter = CsvDeploymentConverter() except ImportError: - self._print( + print( "Could not import the libraries to parse CSV files." " Please make sure setup.sh has been run.") return @@ -1786,7 +1825,7 @@ def _import_csv(self): "File not found. 
Would you like to skip the import?", 0, ["(Y)es", "(n)o"]) if choice != 1: - self._print("Skipping import step...") + print("Skipping import step...") return else: valid = True @@ -1794,16 +1833,16 @@ def _import_csv(self): try: converter.convert(csv_file, deployment_dir) except Exception as e: - self._print( + print( "The following errors occurred while parsing the CSV file") - self._print(str(e)) - self._print("Please correct these and rerun this step") + print(str(e)) + print("Please correct these and rerun this step") return data = converter.get_data() self.state["csv_data"] = data - self._print("Imported the following schemas from CSV: " + - ", ".join(data.keys())) + print("Imported the following schemas from CSV: " + + ", ".join(data.keys())) self._read_csv_data(data) @@ -1825,18 +1864,18 @@ def _read_csv_data(self, data): def _check_skip_for_csv(self, schema_name): if "csv_data" in self.state and schema_name in self.state["csv_data"]: - self._print("This step has been handled via import from CSV.") + print("This step has been handled via import from CSV.") choice = self._input("Do you still wish to continue?", 1, ["(y)es", "(N)o"]) if choice == 1: - self._print("Skipping step...") + print("Skipping step...") return True return False def _setup_dns(self, deployment, data): - self._print(self._get_field(data, "dns_setup_msg")) + print(self._get_field(data, "dns_setup_msg")) dns_domain_default = None if "dns_domain" in deployment: @@ -1852,7 +1891,7 @@ def _setup_dns(self, deployment, data): else: vsd_fqdn_default = "xmpp" - self._print(self._get_field(data, "vsd_fqdn_msg")) + print(self._get_field(data, "vsd_fqdn_msg")) vsd_fqdn = self._input("VSD FQDN (we'll add .%s)" % dns_domain, vsd_fqdn_default, datatype="hostname") @@ -1881,7 +1920,7 @@ def _setup_dns(self, deployment, data): deployment["dns_server_list"] = dns_server_list def _setup_ntp(self, deployment, data): - self._print(self._get_field(data, "ntp_setup_msg")) + print(self._get_field(data, "ntp_setup_msg")) if "ntp_server_list" in deployment: ntp_servers_default = ", ".join(deployment["ntp_server_list"]) @@ -1892,7 +1931,7 @@ def _setup_ntp(self, deployment, data): while ntp_server_list is None: ntp_server_list = self._input( - "Enter NTP server IPs in dotted decmial format (separate " + "Enter NTP server IPs in dotted decimal format (separate " "multiple using commas)", ntp_servers_default) ntp_server_list = self._format_ip_list(ntp_server_list) @@ -1906,7 +1945,7 @@ def _format_ip_list(self, ip_str): def _validate_ip_list(self, ip_list): for ip in ip_list: if self._validate_ipaddr(ip) is None: - self._print("\n%s is not a valid IP address\n" % ip) + print("\n%s is not a valid IP address\n" % ip) return None return ip_list @@ -1914,7 +1953,7 @@ def _validate_ip_list(self, ip_list): def _validate_hostname_list(self, hostname_list): for hostname in hostname_list: if self._validate_hostname(hostname) is None: - self._print("\n%s is not a valid hostname\n" % hostname) + print("\n%s is not a valid hostname\n" % hostname) return None return hostname_list @@ -1936,7 +1975,7 @@ def _setup_unzip_dir(self, deployment): deployment["nuage_unzipped_files_dir"] = unzip_dir def _setup_bridges(self, deployment, data): - self._print(self._get_field(data, "bridge_setup_msg")) + print(self._get_field(data, "bridge_setup_msg")) if "mgmt_bridge" in deployment: mgmt_bridge_default = deployment["mgmt_bridge"] @@ -1978,8 +2017,8 @@ def _setup_target_server_type(self): if "target_server_type" in self.state: return - self._print("\nPlease choose a 
target server (hypervisor) type for " - "your deployment.\n") + print("\nPlease choose a target server (hypervisor) type for " + "your deployment.\n") server_type = self._input("Target server type", 0, TARGET_SERVER_TYPE_LABELS) @@ -2010,7 +2049,7 @@ def _generate_deployment_file(self, schema, output_file, deployment): except ImportError: self._record_problem( "deployment_create", "Could not create a deployment file") - self._print( + print( "Cannot write deployment files because libraries are missing." " Please make sure setup.sh has been run.") return @@ -2025,14 +2064,14 @@ def _generate_deployment_file(self, schema, output_file, deployment): os.path.join("schemas", schema + ".json")) template = jinja2.Template(example_lines) rendered = template.render(**deployment) - with open(output_file, 'w') as file: - file.write(rendered.encode("utf-8")) + with open(output_file, 'w', encoding='utf-8') as file: + file.write(rendered) self._unrecord_problem("deployment_create") - self._print("\nWrote deployment file: " + output_file) + print("\nWrote deployment file: " + output_file) def _setup_upgrade(self, deployment, data): - self._print(self._get_field(data, "upgrade_msg")) + print(self._get_field(data, "upgrade_msg")) if "upgrade_from_version" in deployment: upgrade_from_version_default = deployment["upgrade_from_version"] @@ -2108,7 +2147,7 @@ def _setup_hostname(self, deployment, i, item_name): component["hostname"] = hostname return hostname - def _setup_ip_addresses(self, deployment, i, hostname, is_vsc): + def _setup_ip_addresses(self, deployment, i, hostname, is_vsc, is_nuh): component = deployment[i] mgmt_ip = self._setup_mgmt_address(component, hostname) @@ -2133,9 +2172,47 @@ def _setup_ip_addresses(self, deployment, i, hostname, is_vsc): system_ip = self._input("System IP address for routing", default, datatype="ipaddr") component["system_ip"] = system_ip + if is_nuh: + + default = self._get_value(component, "internal_ip") + internal_ip = self._input("IP address on Internal interface", + default, datatype="ipaddr") + component["internal_ip"] = internal_ip + + default = "24" + if "internal_ip_prefix" in component: + default = str(component["internal_ip_prefix"]) + + internal_ip_prefix = self._input("Internal IP address prefix length", + default, datatype="int") + component["internal_ip_prefix"] = internal_ip_prefix + + if "internal_gateway" in component and component["internal_gateway"] != "": + default = self._get_value(component, "internal_gateway") + else: + default = None + internal_gateway = self._input("Gateway for Internal interface", + default, datatype="ipaddr") + component["internal_gateway"] = internal_gateway + + if "external_interface_list" in component and component["external_interface_list"] != []: + default = self._get_value(component, "external_interface_list") + else: + default = None + external_interface_list = None + while external_interface_list is None: + external_interface_list = self._input( + "Enter External interface names in string format (separate " + "multiple using commas)", default) + external_interface_list = self._format_interface_list(external_interface_list) + + component["external_interface_list"] = external_interface_list self._setup_target_server(component) + def _format_interface_list(self, inter_str): + return [x.strip() for x in inter_str.split(",")] + def _setup_mgmt_address(self, component, hostname): default = self._resolve_hostname(hostname) @@ -2208,22 +2285,22 @@ def _resolve_hostname(self, hostname): "getent hosts %s" % hostname) if rc == 
0: self._unrecord_problem("dns_resolve") - self._print("") + print("") return output_lines[0].split(" ")[0] else: self._record_problem( "dns_resolve", "Could not resolve hostnames with DNS") - self._print( + print( u"\nCould not resolve %s to an IP address, this is " u"required for MetroAE to operate. Is the hostname " u"defined in DNS?" % hostname) except Exception as e: self._record_problem( "dns_resolve", "Error while resolving hostnames with DNS") - self._print("\nAn error occurred while resolving hostname: " + - str(e)) - self._print("Please contact: " + METROAE_CONTACT) + print("\nAn error occurred while resolving hostname: " + + str(e)) + print("Please contact: " + METROAE_CONTACT) return None @@ -2457,8 +2534,57 @@ def _setup_bootstrap(self, deployment, data): default, datatype="ipaddr") deployment["second_controller_address"] = address + def _setup_external_interfaces(self, deployment, data, i): + component = deployment[i] + if "external_ip" in component and component["external_ip"] != "": + default = component["external_ip"] + else: + default = None + external_ip = self._input("IP address of the external interface", + default, datatype="ipaddr") + component["external_ip"] = external_ip + + if "external_ip_prefix" in component and component["external_ip_prefix"] != "": + default = component["external_ip_prefix"] + else: + default = None + external_ip_prefix = self._input("IP prefix length for the external network", + default, datatype="int") + component["external_ip_prefix"] = external_ip_prefix + + if "external_gateway" in component and component["external_gateway"] != "": + default = component["external_gateway"] + else: + default = None + external_gateway = self._input("IP address of the external interface gateway", + default, datatype="ipaddr") + component["external_gateway"] = external_gateway + + if "vlan" in component and component["vlan"] != "": + default = component["vlan"] + else: + default = None + vlan = self._input("VLAN ID of the external interface", default, + datatype="int") + component["vlan"] = vlan + + if "external_bridge" in component and component["external_bridge"] != "": + default = component["external_bridge"] + else: + default = None + external_bridge = self._input("Network bridge used for external interface", default) + component["external_bridge"] = external_bridge + + default = None + name = self._input("Name of the external interface", default) + component["name"] = name + + default = None + external_fqdn = self._input("FQDN for the external interface", default) + component["external_fqdn"] = external_fqdn + def _setup_ssh(self, username, hostname): - self._print("Adding SSH keys for %s@%s, may ask for password" % ( + print("Adding SSH keys for %s@%s, may ask for password" % ( username, hostname)) try: options = "" @@ -2470,29 +2596,29 @@ def _setup_ssh(self, username, hostname): "ssh-copy-id %s%s@%s" % (options, username, hostname)) if rc == 0: self._unrecord_problem("ssh_keys") - self._print("\nSuccessfully setup SSH on host %s" % hostname) + print("\nSuccessfully setup SSH on host %s" % hostname) return True else: self._record_problem( "ssh_keys", "Could not setup password-less SSH") - self._print("\n".join(output_lines)) - self._print( + print("\n".join(output_lines)) + print( u"\nCould not add SSH keys for %s@%s, this is required" u" for MetroAE to operate." 
% (username, hostname)) except Exception as e: self._record_problem( "ssh_keys", "Error while setting up password-less SSH") - self._print("\nAn error occurred while setting up SSH: " + - str(e)) - self._print("Please contact: " + METROAE_CONTACT) + print("\nAn error occurred while setting up SSH: " + + str(e)) + print("Please contact: " + METROAE_CONTACT) return False def _setup_ssh_key(self): rc, output_lines = self._run_shell("stat ~/.ssh/id_rsa.pub") if rc != 0: - self._print("\nCould not find your SSH public key " - "~/.ssh/id_rsa.pub\n") + print("\nCould not find your SSH public key " + "~/.ssh/id_rsa.pub\n") choice = self._input("Do you wish to generate a new SSH keypair?", 0, ["(Y)es", "(n)o"]) @@ -2502,13 +2628,13 @@ def _setup_ssh_key(self): '-f ~/.ssh/id_rsa') if rc == 0: - self._print("\nSuccessfully generated an SSH keypair") + print("\nSuccessfully generated an SSH keypair") return True else: self._record_problem( "ssh_keys", "Could not generate SSH keypair") - self._print("\n".join(output_lines)) - self._print( + print("\n".join(output_lines)) + print( "\nCould not generate an SSH keypair, this is " "required for MetroAE to operate.") return False @@ -2522,22 +2648,22 @@ def _verify_ssh(self, username, hostname): username, hostname)) if rc == 0: self._unrecord_problem("ssh_access") - self._print("\nSuccessfully connected via SSH to host %s" % - hostname) + print("\nSuccessfully connected via SSH to host %s" % + hostname) return True else: self._record_problem( "ssh_access", "Could not connect via password-less SSH") - self._print("\n".join(output_lines)) - self._print( + print("\n".join(output_lines)) + print( u"\nCould not connect via SSH to %s@%s, this is required" u" for MetroAE to operate." % (username, hostname)) except Exception as e: self._record_problem( "ssh_access", "Error while connecting to host via SSH") - self._print("\nAn error occurred while connecting to host via " - " SSH: " + str(e)) - self._print("Please contact: " + METROAE_CONTACT) + print("\nAn error occurred while connecting to host via " + " SSH: " + str(e)) + print("Please contact: " + METROAE_CONTACT) return False @@ -2549,22 +2675,22 @@ def _verify_bridge(self, username, hostname, bridge): username, hostname, bridge)) if rc == 0: self._unrecord_problem("bridge") - self._print("\nSuccessfully verified bridge %s on host %s" % ( - bridge, hostname)) + print("\nSuccessfully verified bridge %s on host %s" % ( + bridge, hostname)) return True else: self._record_problem( "bridge", "Bridge not present on target server host") - self._print("\n".join(output_lines)) - self._print( + print("\n".join(output_lines)) + print( u"\nBridge %s not present on %s, this is required" u" for components to communicate." 
% (bridge, hostname)) except Exception as e: self._record_problem( "bridge", "Error while verifying bridge interfaces") - self._print("\nAn error occurred while verifying bridge interface " - " via SSH: " + str(e)) - self._print("Please contact: " + METROAE_CONTACT) + print("\nAn error occurred while verifying bridge interface " + " via SSH: " + str(e)) + print("Please contact: " + METROAE_CONTACT) return False @@ -2575,8 +2701,8 @@ def main(): wizard() except Exception: traceback.print_exc() - print "\nThere was an unexpected error running the wizard" - print "Please contact: " + METROAE_CONTACT + print("\nThere was an unexpected error running the wizard") + print("Please contact: " + METROAE_CONTACT) exit(1) diff --git a/sample_deployment.xlsx b/sample_deployment.xlsx index 6d8641d4a1..d6010a0b9e 100644 Binary files a/sample_deployment.xlsx and b/sample_deployment.xlsx differ diff --git a/schemas/common.json b/schemas/common.json index 4a696d5de7..4d0e97736e 100644 --- a/schemas/common.json +++ b/schemas/common.json @@ -125,7 +125,6 @@ "title": "DNS server IP(s)", "description": "List of one or more DNS server addresses for resolving component domain names. Must be in dotted-decimal (IPv4) or hexidecimal (IPv6) format.", "sectionEnd": "Network Services", - "minItems": 1, "propertyOrder": 150, "items": { "type": "string", @@ -180,7 +179,7 @@ "type": "boolean", "title": "Ignore RTT Test Errors", "default": false, - "description": "When true, do not validate the RTT between VSDs in a cluster is less than max RTT", + "description": "When true, continue MetroAE execution upon error and do not validate the RTT between VSDs in a cluster is less than max RTT, else stop MetroAE execution upon error", "propertyOrder": 210, "advanced": true }, @@ -236,7 +235,7 @@ "type": "boolean", "title": "Ignore Disk Performance Test Errors", "default": false, - "description": "When true, ignore the results of the VSD disk performance test", + "description": "When true, continue MetroAE execution upon error and ignore the results of the VSD disk performance test, else stop MetroAE execution upon error", "propertyOrder": 280, "advanced": true }, diff --git a/schemas/credentials.json b/schemas/credentials.json index 61be1bfc52..f105663002 100644 --- a/schemas/credentials.json +++ b/schemas/credentials.json @@ -50,7 +50,7 @@ "vsd_custom_username": { "type": "string", "title": "VSD System Username", - "description": "Username to be used while logging into VSD command line.", + "description": "VSD Username will be used for logging into VSD command line. Used for both Install and Upgrade procedures.", "default": "root", "workflow": "Upgrade", "component_type": "vsd", @@ -61,7 +61,7 @@ "vsd_custom_password": { "type": "string", "title": "VSD System Password", - "description": "Password for the VSD user used to login to the command line", + "description": "VSD password will be used for logging into the command line. Used for both Install and Upgrade procedures.", "default": "Alcateldc", "workflow": "Upgrade", "component_type": "vsd", @@ -71,7 +71,7 @@ "vsc_custom_username": { "type": "string", "title": "VSC System Username", - "description": "VSC username to login into command line. Should have admin privileges.", + "description": "VSC Username will be used for logging into command line (should have admin privileges). 
Used for upgrade procedure only", "default": "", "workflow": "Upgrade", "component_type": "vsc", @@ -81,7 +81,7 @@ "vsc_custom_password": { "type": "string", "title": "VSC System Password", - "description": "VSC password to login into command line", + "description": "VSC password will be used for logging into the command line. Used for upgrade procedure only", "default": "", "workflow": "Upgrade", "component_type": "vsc", @@ -93,7 +93,7 @@ "type": "string", "title": "ElasticSearch (Stats) System Username", "default": "", - "description": "ElasticSearch (Stats) username to login into command line", + "description": "ElasticSearch (Stats) Username will be used for logging into command line. Used for both Install and Upgrade procedures.", "workflow": "Upgrade", "component_type": "vstat", "encrypt": true, @@ -103,7 +103,7 @@ "vstat_custom_password": { "type": "string", "title": "ElasticSearch (Stats) System Password", - "description": "ElasticSearch (Stats) password to login into command line", + "description": "ElasticSearch (Stats) password will be used for logging into the command line. Used for both Install and Upgrade procedures.", "default": "", "workflow": "Upgrade", "component_type": "vstat", @@ -113,7 +113,7 @@ "vstat_custom_root_password":{ "type": "string", "title": "ElasticSearch (Stats) Custom System Password For Root User", - "description": "ElasticSearch (Stats) root password required for Stats Upgrade", + "description": "ElasticSearch (Stats) root password required for VSTAT Upgrade only", "default": "", "workflow": "Upgrade", "component_type": "vstat", @@ -124,7 +124,7 @@ "vsd_auth_username": { "type": "string", "title": "VSD API/Architect username", - "description": "Username for API authentication. Must have csproot privileges. Also known as csproot user", + "description": "This VSD Username(also known as csproot user). Used for both Install and Upgrade procedures. Must have csproot privileges.", "default": "csproot", "workflow": "Upgrade", "encrypt": true, @@ -134,7 +134,7 @@ "vsd_auth_password": { "type": "string", "title": "VSD API/Architect Password", - "description": "Password for API authentication. Must have csproot privileges. Also known as csproot password", + "description": "This VSD password(also known as csproot password) will be used for API authentication. Used for both Install and Upgrade procedures. Must have csproot privileges.", "default": "csproot", "workflow": "Upgrade", "encrypt": true, @@ -143,7 +143,7 @@ "vsd_mysql_password": { "type": "string", "title": "mysql Password", - "description": "Mysql password for vsd", + "description": "This VSD Mysql password. 
Used for both Install and Upgrade procedures.", "default": "", "component_type": "vsd", "encrypt": true, @@ -268,7 +268,7 @@ "openstack_username": { "type": "string", "title": "OpenStack Username", - "description": "Username for OpenStack", + "description": "Username for OpenStack.", "default": "", "target_server_type": "openstack", "sectionBegin": "OpenStack Credentials", @@ -288,7 +288,7 @@ "vcenter_username": { "type": "string", "title": "vCenter Username", - "description": "vCenter Username", + "description": "vCenter Username.", "default": "", "encrypt": true, "target_server_type": "vcenter", @@ -308,7 +308,7 @@ "compute_username": { "type": "string", "title": "Compute Username", - "description": "Username for Compute node to install VRS", + "description": "Username for Compute node to install VRS.", "default": "root", "component_type": "vrs", "sectionBegin": "Compute and Proxy", @@ -318,7 +318,7 @@ "compute_password": { "type": "string", "title": "Compute Password", - "description": "Password for Compute node", + "description": "Password for Compute node, and will be used for installation of VRS", "default": "", "component_type": "vrs", "encrypt": true, @@ -374,7 +374,7 @@ "description": "Username for SMTP authentication", "default": "", "encrypt": true, - "sectionBegin": "SMTP", + "sectionBegin": "SMTP and NFS", "propertyOrder": 380 }, "smtp_auth_password": { @@ -383,66 +383,74 @@ "description": "Password for SMTP authentication", "default": "", "encrypt": true, - "sectionEnd": "SMTP", "propertyOrder": 390 }, + "nfs_custom_username": { + "type": "string", + "title": "NFS System Username", + "description": "NFS username to login into command line, and will be used for NFS configuration. Default user is root.", + "default": "root", + "encrypt": true, + "propertyOrder": 400, + "sectionEnd": "SMTP and NFS" + }, "netconf_vm_username": { "type": "string", "title": "NETCONF Manager VM username", "default": "root", "encrypt": true, - "description": "Username for NETCONF Manager VM. Default user is root", + "description": "Username for NETCONF Manager VM, and will be used for the installation of NETCONF Manager. 
Default user is root.", "sectionBegin": "NETCONF Manager", - "propertyOrder": 400 + "propertyOrder": 410 }, "netconf_vm_password": { "type": "string", - "title": "NETCONF Manager VM password for running sudo commands", + "title": "NETCONF Manager VM password for running sudo commands, and will be used for the installation of NETCONF Manager.", "description": "Password for NETCONF manager VM", "encrypt": true, "default": "", - "propertyOrder": 410 + "propertyOrder": 420 }, "netconf_username": { "type": "string", "title": "NETCONF Manager username", "default": "netconfmgr", "encrypt": true, - "description": "Username for NETCONF Manager user", - "propertyOrder": 420 + "description": "Username for NETCONF Manager user, and will be used for the installation of NETCONF Manager.", + "propertyOrder": 430 }, "netconf_password": { "type": "string", "title": "NETCONF Manager password", - "description": "Password for NETCONF manager user", + "description": "Password for NETCONF manager user, and will be used for the installation of NETCONF Manager.", "default": "password", "encrypt": true, "sectionEnd": "NETCONF Manager", - "propertyOrder": 430 + "propertyOrder": 440 }, "health_report_email_server_username": { "type": "string", "title": "Health Report SMTP Server Username", - "description": "Username for SMTP Server", + "description": "Username for SMTP Server, and will be used for Email health report.", "encrypt": true, "sectionBegin": "Health Report Email Server", - "propertyOrder": 440 + "propertyOrder": 450 }, "health_report_email_server_password": { "type": "string", "title": "Health Report SMTP Server Password", - "description": "Password for SMTP Server", + "description": "Password for SMTP Server, and will be used for Email health report.", "encrypt": true, "sectionEnd": "Health Report Email Server", - "propertyOrder": 450 + "propertyOrder": 460 }, "vsd_monit_mail_server_username": { "type": "string", "title": "VSD Monit Mail Server Username", - "description": "Username for the monit mail server", + "description": "Username for the monit mail server.", "encrypt": true, "sectionBegin": "Monit Alerts Email Server", - "propertyOrder": 460 + "propertyOrder": 470 }, "vsd_monit_mail_server_password": { "type": "string", @@ -450,37 +458,37 @@ "description": "Password for the monit mail server", "encrypt": true, "sectionEnd": "Monit Alerts Email Server", - "propertyOrder": 470 + "propertyOrder": 480 }, "nuh_notification_app_1_username": { "type": "string", "title": "NUH notification application 1 username", - "description": "Username for NUH notification application", + "description": "Username for NUH notification application, and will be used for installation of NUH.", "encrypt": true, "sectionBegin": "NUH notification application", - "propertyOrder": 480 + "propertyOrder": 490 }, "nuh_notification_app_1_password": { "type": "string", "title": "NUH notification application 1 password", - "description": "Password for NUH notification application", + "description": "Password for NUH notification application, and will be used for installation of NUH.", "encrypt": true, - "propertyOrder": 490 + "propertyOrder": 500 }, "nuh_notification_app_2_username": { "type": "string", "title": "NUH notification application 2 username", - "description": "Username for NUH notification application", + "description": "Username for NUH notification application, and will be used for installation of NUH.", "encrypt": true, - "propertyOrder": 500 + "propertyOrder": 510 }, "nuh_notification_app_2_password": { "type": 
"string", "title": "NUH notification application 2 password", - "description": "Password for NUH notification application", + "description": "Password for NUH notification application, and will be used for installation of NUH.", "encrypt": true, "sectionEnd": "NUH notification application", - "propertyOrder": 510 + "propertyOrder": 520 } }, "required": ["name"] diff --git a/schemas/nfss.json b/schemas/nfss.json new file mode 100644 index 0000000000..ba51bec456 --- /dev/null +++ b/schemas/nfss.json @@ -0,0 +1,44 @@ +{ + "$schema": "http://json-schema.org/draft-06/schema#", + "$id": "urn:nuage-metroae:nfss", + "title": "NFS Server VM", + "description": "Configure NFS Server VM using MetroAE. Note: Metroae will not bring up the NFS server, it will configure it for Elasticsearch mounting.", + "type": "array", + "widget": "form", + "items": { + "widget": "item", + "type": "object", + "title": "NFS", + "additionalProperties": false, + "properties": { + "hostname": { + "type": "string", + "format": "hostname", + "title": "Hostname", + "description": "Hostname of NFS Server", + "sectionBegin": "NFS parameters", + "propertyOrder": 10 + }, + "nfs_ip": { + "type": "string", + "anyOf": [ + {"format": "ipv4"}, + {"format": "ipv6"} + ], + "title": "NFS server IP address", + "description": "IP address of the NFS server.", + "propertyOrder": 20 + }, + "mount_directory_location": { + "type": "string", + "title": "NFS mount directory location", + "description": "Optional user specified location of the mount directory to export for the NFS. Defaults to /nfs.", + "default": "/nfs", + "propertyOrder": 30, + "sectionEnd": "NFS parameters", + "advanced": true + } + }, + "required": ["hostname", "nfs_ip"] + } +} diff --git a/schemas/nsgv_network_port_vlans.json b/schemas/nsgv_network_port_vlans.json new file mode 100644 index 0000000000..6c7f51cada --- /dev/null +++ b/schemas/nsgv_network_port_vlans.json @@ -0,0 +1,74 @@ +{ + "$schema": "http://json-schema.org/draft-06/schema#", + "$id": "urn:nuage-metroae:nsgv-network-port-vlans", + "title": "NSGv Network Port VLANs", + "description": "Specify NSGvs network port VLAN configuration.", + "type": "array", + "widget": "form", + "listName":"nsgv_network_port_vlans", + "items": { + "widget": "item", + "type": "object", + "title": "NSGv", + "additionalProperties": false, + "properties": { + "name": { + "type": "string", + "title": "NSGv Network Port VLAN Name", + "description": "VLAN name of the NSGv network port", + "default": "", + "propertyOrder": 10, + "sectionBegin": "Network ports" + }, + "vlan_number": { + "type": "integer", + "title": "NSGv Network Port VLAN Number", + "description": "VLAN number of the NSGv network port", + "default": 0, + "propertyOrder": 20 + }, + "vsc_infra_profile_name": { + "type": "string", + "title": "VSC Infra Profile Name", + "description": "Name of the VSC infra profile for the NSG on the VSD", + "default": "", + "advanced": true, + "propertyOrder": 30 + }, + "first_controller_address": { + "type": "string", + "title": "VSC Infra Profile First Controller", + "description": "Host name or IP address of the VSC infra profile first controller for the NSG", + "default": "", + "format": "hostname", + "advanced": true, + "propertyOrder": 40 + }, + "second_controller_address": { + "type": "string", + "title": "VSC Infra Profile Second Controller", + "description": "Host name or IP address of the VSC infra profile second controller for the NSG", + "default": "", + "format": "hostname", + "advanced": true, + "propertyOrder": 50 + }, + 
"uplink": { + "type": "boolean", + "title": "Create uplink connection on this Vlan", + "description": "If vlan 0 has an uplink, then other vlans can't. If multiple uplinks are defined, then network acceleration will be enabled", + "default": "No", + "advanced": true, + "sectionEnd": "Network ports", + "propertyOrder": 60 + } + }, + "required": [ + "name", + "vlan_number", + "vsc_infra_profile_name", + "first_controller_address", + "uplink" + ] + } +} diff --git a/schemas/nsgvs.json b/schemas/nsgvs.json index 06a8fbb994..f80b9d4276 100644 --- a/schemas/nsgvs.json +++ b/schemas/nsgvs.json @@ -385,13 +385,23 @@ "sectionBegin": "Network and Access Ports", "propertyOrder": 450 }, + "network_port_vlans": { + "type": "array", + "title": "NSG Network Port VLAN list", + "description": "VLAN name list of the network port for the NSG", + "advanced": true, + "propertyOrder": 460, + "items": { + "type": "string" + } + }, "access_port_name": { "type": "string", "title": "NSG Access Port Name", "description": "Name of the access port for the NSG. Deprecated in favor of access_ports", "default": "", "advanced": true, - "propertyOrder": 460 + "propertyOrder": 470 }, "access_port_physical_name": { "type": "string", @@ -399,7 +409,7 @@ "description": "Physical name of the access port for the NSG. Deprecated in favor of access_ports", "default": "port2", "advanced": true, - "propertyOrder": 470 + "propertyOrder": 480 }, "access_port_vlan_range": { "type": "string", @@ -407,7 +417,7 @@ "description": "VLAN range of the access port for the NSG. Deprecated in favor of access_ports", "default": "", "advanced": true, - "propertyOrder": 480 + "propertyOrder": 490 }, "access_port_vlan_number": { "type": "integer", @@ -415,13 +425,13 @@ "description": "VLAN number of the NSG access port for the NSG. Deprecated in favor of access_ports", "default": 0, "advanced": true, - "propertyOrder": 490 + "propertyOrder": 500 }, "access_ports": { "type": "array", "title": "NSGv Access ports list name", "description": "Name of access ports list.", - "propertyOrder": 500, + "propertyOrder": 510, "items": { "type": "string" }, @@ -434,14 +444,14 @@ "default": 2300, "advanced": true, "sectionBegin": "Telnet and Credentials", - "propertyOrder": 510 + "propertyOrder": 520 }, "credentials_set": { "type": "string", "title": "Credentials set name", "description": "Name of the credentials set for the vsd", "sectionEnd": "Telnet and Credentials", - "propertyOrder": 520, + "propertyOrder": 530, "advanced": true } }, diff --git a/schemas/upgrade.json b/schemas/upgrade.json index 0ee527af3e..83a0c15295 100644 --- a/schemas/upgrade.json +++ b/schemas/upgrade.json @@ -48,13 +48,13 @@ "propertyOrder": 40 }, "skip_disable_stats_collection": { - "type": "boolean", - "title": "Skip stats collection disable before upgrade", - "description": "Stats collection should be disabled during VSD upgrade. If for some reason, you would like to disable stats collection outside of MetroAE, change this to true.", - "default": false, - "advanced": true, - "workflow": "Upgrade", - "propertyOrder": 50 + "type": "boolean", + "title": "Skip stats collection disable before upgrade", + "description": "Stats collection should be disabled during VSD upgrade. 
If for some reason, you would like to disable stats collection outside of MetroAE, change this to true.", + "default": false, + "advanced": true, + "workflow": "Upgrade", + "propertyOrder": 50 }, "force_vsc_standalone_upgrade": { "type": "boolean", @@ -71,8 +71,16 @@ "description": "Upgrade the SD-WAN Portal or Cluster", "default": false, "workflow": "Upgrade", - "sectionEnd": "Upgrade", "propertyOrder": 70 + }, + "vsd_preupgrade_db_check_script_path": { + "type": "string", + "title": "VSD Pre-upgrade Database Check Script Path", + "description": "Path on the MetroAE host to the VSD pre-upgrade database check script", + "default": "", + "workflow": "Upgrade", + "sectionEnd": "Upgrade", + "propertyOrder": 80 } } } diff --git a/schemas/vsds.json b/schemas/vsds.json index 808760d9a0..4cfdf43275 100644 --- a/schemas/vsds.json +++ b/schemas/vsds.json @@ -362,6 +362,36 @@ "propertyOrder": 410, "sectionEnd": "Security and Certificates", "advanced": true + }, + "vsd_ram": { + "type": "integer", + "title": "VSD RAM", + "description": "Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default.", + "warning": "Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default.", + "default": "(global VSD RAM)", + "minimum": 0, + "sectionBegin": "VSD RAM, CPU and Disk Parameters", + "propertyOrder": 420, + "advanced": true + }, + "vsd_cpu_cores": { + "type": "integer", + "title": "VSD CPU cores", + "description": "Number of CPU's for VSD.", + "advanced": true, + "propertyOrder": 430, + "default": "(global VSD CPU Cores)" + }, + "vsd_fallocate_size_gb": { + "type": "integer", + "title": "VSD Disk Size", + "description": "Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default.", + "warning": "Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default.", + "default": "(global VSD CPU Cores)", + "minimum": 0, + "propertyOrder": 440, + "sectionEnd": "VSD RAM, CPU and Disk Parameters", + "advanced": true } }, "required": [ diff --git a/schemas/vstats.json b/schemas/vstats.json index 2e4be2d7b0..86d6ba7ede 100644 --- a/schemas/vstats.json +++ b/schemas/vstats.json @@ -382,6 +382,35 @@ "propertyOrder": 430, "sectionEnd": "Other configuration", "advanced": true + }, + "vstat_ram": { + "type": "integer", + "title": "VSTAT RAM", + "description": "Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default.", + "warning": "Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default.", + "default": "(global VSTAT RAM)", + "minimum": 0, + "propertyOrder": 440, + "sectionBegin": "VSTAT RAM, CPU and Disk Parameters", + "advanced": true + }, + "vstat_cpu_cores": { + "type": "integer", + "title": "VSTAT CPU cores", + "description": "Valid for only KVM and VCenter deployments. 
Number of CPU's for VSTAT.", + "advanced": true, + "propertyOrder": 450, + "default": "(global VSTAT CPU)" + }, + "vstat_allocate_size_gb": { + "type": "integer", + "title": "VSTAT Disk Size", + "description": "Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value.", + "default": "(global VSTAT DISK)", + "minimum": 0, + "sectionEnd": "VSTAT RAM, CPU and Disk Parameters", + "propertyOrder": 460, + "advanced": true } }, "required": [ diff --git a/schemas/webfilters.json b/schemas/webfilters.json index 46801a70b8..1b0d0fa1b3 100644 --- a/schemas/webfilters.json +++ b/schemas/webfilters.json @@ -93,11 +93,35 @@ "propertyOrder": 100, "advanced": true }, + "web_http_proxy": { + "type": "string", + "title": "Http proxy", + "default": "", + "description": "Optional HTTP Proxy for webfilter VM", + "propertyOrder": 110, + "advanced": true + }, + "web_proxy_host": { + "type": "string", + "title": "Http proxy host", + "default": "", + "description": "HTTP Proxy host for webfilter proxy", + "propertyOrder": 120, + "advanced": true + }, + "web_proxy_port": { + "type": "string", + "title": "Http proxy port", + "default": "", + "description": "HTTP Proxy port for webfilter proxy", + "propertyOrder": 130, + "advanced": true + }, "run_incompass_operation": { "type": "boolean", "title": "Run incompass operation command", "description": "Run incompass operation command. This is enabled by default and may take up to 20 minutes and requires internet connection.", - "propertyOrder": 110, + "propertyOrder": 140, "advanced": true, "default": true, "sectionEnd": "Webfilter details" diff --git a/src/callback_plugins/metroae_stdout.py b/src/callback_plugins/metroae_stdout.py index 892c366a33..94d0a05bc0 100644 --- a/src/callback_plugins/metroae_stdout.py +++ b/src/callback_plugins/metroae_stdout.py @@ -44,7 +44,7 @@ def my_represent_scalar(self, tag, value, style=None): if style is None: if should_use_block(value): style = '|' - # we care more about readable than accuracy, so... + # we care more about readability than accuracy, so... 
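Editor's note: for the new optional webfilter proxy fields above (web_http_proxy, web_proxy_host, web_proxy_port), a sketch of how they might appear in a webfilters deployment file, mirroring the keyed layout of the raw example data elsewhere in this change. The hostname, proxy address and port are purely illustrative, and the split between the single proxy URL and the separate host/port pair follows the field descriptions rather than a verified example.

    webfilters:
      -
        hostname: "webfilter1.example.com"                 # illustrative hostname
        web_http_proxy: "http://proxy.example.com:8080"    # optional HTTP proxy for the webfilter VM
        web_proxy_host: "proxy.example.com"                # proxy host used by the webfilter proxy
        web_proxy_port: "8080"                             # proxy port (a string in the schema)
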
# ...no trailing space value = value.rstrip() # ...and non-printable characters diff --git a/src/callback_plugins/report_failures.py b/src/callback_plugins/report_failures.py index 5eef4f8eae..92f3596a26 100644 --- a/src/callback_plugins/report_failures.py +++ b/src/callback_plugins/report_failures.py @@ -45,10 +45,10 @@ def log_failed(self, host, category, res): now = time.strftime(self.TIME_FORMAT, time.localtime()) if type(res) == dict and 'msg' in list(res.keys()): with open(self.DESTINATION_FILE, "ab") as fd: - fd.write(u'{0}: {1}: {2}: {3}\n'.format(now, category, host, res['msg'])) + fd.write('{0}: {1}: {2}: {3}\n'.format(now, category, host, res['msg']).encode()) else: with open(self.DESTINATION_FILE, "ab") as fd: - fd.write(u'{0}: {1}: {2}: {3}\n'.format(now, category, host, "Unknown failure")) + fd.write('{0}: {1}: {2}: {3}\n'.format(now, category, host, "Unknown failure").encode()) def runner_on_failed(self, host, res, ignore_errors=False): self.log_failed(host, 'FAILED', res) diff --git a/src/deployment_templates/common.j2 b/src/deployment_templates/common.j2 index 4f699400fc..17a700fe47 100644 --- a/src/deployment_templates/common.j2 +++ b/src/deployment_templates/common.j2 @@ -192,7 +192,7 @@ vsd_run_cluster_rtt_test: {{ vsd_run_cluster_rtt_test | lower }} {%- endif %} # < Ignore RTT Test Errors > -# When true, do not validate the RTT between VSDs in a cluster is less than max RTT +# When true, continue MetroAE execution upon error and do not validate the RTT between VSDs in a cluster is less than max RTT, else stop MetroAE execution upon error # {%- if vsd_ignore_errors_rtt_test is defined %} vsd_ignore_errors_rtt_test: {{ vsd_ignore_errors_rtt_test | lower }} @@ -255,7 +255,7 @@ vsd_disk_performance_test_max_time: "{{ vsd_disk_performance_test_max_time }}" {%- endif %} # < Ignore Disk Performance Test Errors > -# When true, ignore the results of the VSD disk performance test +# When true, continue MetroAE execution upon error and ignore the results of the VSD disk performance test, else stop MetroAE execution upon error # {%- if vsd_ignore_disk_performance_test_errors is defined %} vsd_ignore_disk_performance_test_errors: {{ vsd_ignore_disk_performance_test_errors | lower }} diff --git a/src/deployment_templates/credentials.j2 b/src/deployment_templates/credentials.j2 index f7e5cd2731..01b84544a9 100644 --- a/src/deployment_templates/credentials.j2 +++ b/src/deployment_templates/credentials.j2 @@ -54,7 +54,7 @@ ##### VSD/VSC credentials # < VSD System Username > - # Username to be used while logging into VSD command line. + # VSD Username will be used for logging into VSD command line. Used for both Install and Upgrade procedures. # {%- if item.vsd_custom_username is defined %} vsd_custom_username: {{ item.vsd_custom_username | indent(8, False) }} @@ -63,7 +63,7 @@ {%- endif %} # < VSD System Password > - # Password for the VSD user used to login to the command line + # VSD password will be used for logging into the command line. Used for both Install and Upgrade procedures. # {%- if item.vsd_custom_password is defined %} vsd_custom_password: {{ item.vsd_custom_password | indent(8, False) }} @@ -72,7 +72,7 @@ {%- endif %} # < VSC System Username > - # VSC username to login into command line. Should have admin privileges. + # VSC Username will be used for logging into command line (should have admin privileges). 
Used for upgrade procedure only # {%- if item.vsc_custom_username is defined %} vsc_custom_username: {{ item.vsc_custom_username | indent(8, False) }} @@ -81,7 +81,7 @@ {%- endif %} # < VSC System Password > - # VSC password to login into command line + # VSC password will be used for logging into the command line. Used for upgrade procedure only # {%- if item.vsc_custom_password is defined %} vsc_custom_password: {{ item.vsc_custom_password | indent(8, False) }} @@ -94,7 +94,7 @@ ##### VSTAT Credentials # < ElasticSearch (Stats) System Username > - # ElasticSearch (Stats) username to login into command line + # ElasticSearch (Stats) Username will be used for logging into command line. Used for both Install and Upgrade procedures. # {%- if item.vstat_custom_username is defined %} vstat_custom_username: {{ item.vstat_custom_username | indent(8, False) }} @@ -103,7 +103,7 @@ {%- endif %} # < ElasticSearch (Stats) System Password > - # ElasticSearch (Stats) password to login into command line + # ElasticSearch (Stats) password will be used for logging into the command line. Used for both Install and Upgrade procedures. # {%- if item.vstat_custom_password is defined %} vstat_custom_password: {{ item.vstat_custom_password | indent(8, False) }} @@ -112,7 +112,7 @@ {%- endif %} # < ElasticSearch (Stats) Custom System Password For Root User > - # ElasticSearch (Stats) root password required for Stats Upgrade + # ElasticSearch (Stats) root password required for VSTAT Upgrade only # {%- if item.vstat_custom_root_password is defined %} vstat_custom_root_password: {{ item.vstat_custom_root_password | indent(8, False) }} @@ -125,7 +125,7 @@ ##### VSD core services # < VSD API/Architect username > - # Username for API authentication. Must have csproot privileges. Also known as csproot user + # This VSD Username(also known as csproot user). Used for both Install and Upgrade procedures. Must have csproot privileges. # {%- if item.vsd_auth_username is defined %} vsd_auth_username: {{ item.vsd_auth_username | indent(8, False) }} @@ -134,7 +134,7 @@ {%- endif %} # < VSD API/Architect Password > - # Password for API authentication. Must have csproot privileges. Also known as csproot password + # This VSD password(also known as csproot password) will be used for API authentication. Used for both Install and Upgrade procedures. Must have csproot privileges. # {%- if item.vsd_auth_password is defined %} vsd_auth_password: {{ item.vsd_auth_password | indent(8, False) }} @@ -143,7 +143,7 @@ {%- endif %} # < mysql Password > - # Mysql password for vsd + # This VSD Mysql password. Used for both Install and Upgrade procedures. # {%- if item.vsd_mysql_password is defined %} vsd_mysql_password: {{ item.vsd_mysql_password | indent(8, False) }} @@ -280,7 +280,7 @@ ##### OpenStack Credentials # < OpenStack Username > - # Username for OpenStack + # Username for OpenStack. # {%- if item.openstack_username is defined %} openstack_username: {{ item.openstack_username | indent(8, False) }} @@ -308,7 +308,7 @@ ##### vcenter # < vCenter Username > - # vCenter Username + # vCenter Username. # {%- if item.vcenter_username is defined %} vcenter_username: {{ item.vcenter_username | indent(8, False) }} @@ -334,7 +334,7 @@ ##### Compute and Proxy # < Compute Username > - # Username for Compute node to install VRS + # Username for Compute node to install VRS. 
# {%- if item.compute_username is defined %} compute_username: {{ item.compute_username | indent(8, False) }} @@ -343,7 +343,7 @@ {%- endif %} # < Compute Password > - # Password for Compute node + # Password for Compute node, and will be used for installation of VRS # {%- if item.compute_password is defined %} compute_password: {{ item.compute_password | indent(8, False) }} @@ -402,7 +402,7 @@ ############ - ##### SMTP + ##### SMTP and NFS # < SMTP username > # Username for SMTP authentication @@ -422,12 +422,21 @@ # smtp_auth_password: "" {%- endif %} - ########## + # < NFS System Username > + # NFS username to login into command line, and will be used for NFS configuration. Default user is root. + # + {%- if item.nfs_custom_username is defined %} + nfs_custom_username: {{ item.nfs_custom_username | indent(8, False) }} + {%- else %} + # nfs_custom_username: root + {%- endif %} + + ################## ##### NETCONF Manager # < NETCONF Manager VM username > - # Username for NETCONF Manager VM. Default user is root + # Username for NETCONF Manager VM, and will be used for the installation of NETCONF Manager. Default user is root. # {%- if item.netconf_vm_username is defined %} netconf_vm_username: {{ item.netconf_vm_username | indent(8, False) }} @@ -435,7 +444,7 @@ # netconf_vm_username: root {%- endif %} - # < NETCONF Manager VM password for running sudo commands > + # < NETCONF Manager VM password for running sudo commands, and will be used for the installation of NETCONF Manager. > # Password for NETCONF manager VM # {%- if item.netconf_vm_password is defined %} @@ -445,7 +454,7 @@ {%- endif %} # < NETCONF Manager username > - # Username for NETCONF Manager user + # Username for NETCONF Manager user, and will be used for the installation of NETCONF Manager. # {%- if item.netconf_username is defined %} netconf_username: {{ item.netconf_username | indent(8, False) }} @@ -454,7 +463,7 @@ {%- endif %} # < NETCONF Manager password > - # Password for NETCONF manager user + # Password for NETCONF manager user, and will be used for the installation of NETCONF Manager. # {%- if item.netconf_password is defined %} netconf_password: {{ item.netconf_password | indent(8, False) }} @@ -467,7 +476,7 @@ ##### Health Report Email Server # < Health Report SMTP Server Username > - # Username for SMTP Server + # Username for SMTP Server, and will be used for Email health report. # {%- if item.health_report_email_server_username is defined %} health_report_email_server_username: {{ item.health_report_email_server_username | indent(8, False) }} @@ -476,7 +485,7 @@ {%- endif %} # < Health Report SMTP Server Password > - # Password for SMTP Server + # Password for SMTP Server, and will be used for Email health report. # {%- if item.health_report_email_server_password is defined %} health_report_email_server_password: {{ item.health_report_email_server_password | indent(8, False) }} @@ -489,7 +498,7 @@ ##### Monit Alerts Email Server # < VSD Monit Mail Server Username > - # Username for the monit mail server + # Username for the monit mail server. # {%- if item.vsd_monit_mail_server_username is defined %} vsd_monit_mail_server_username: {{ item.vsd_monit_mail_server_username | indent(8, False) }} @@ -511,7 +520,7 @@ ##### NUH notification application # < NUH notification application 1 username > - # Username for NUH notification application + # Username for NUH notification application, and will be used for installation of NUH. 
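Editor's note: since the SMTP section of the credentials deployment file now also carries the NFS login, a minimal credentials entry using the new field might look like the sketch below. The top-level key and the set name are assumptions following the keyed layout of the raw example data; only the new nfs_custom_username key is shown, and it defaults to root when omitted.

    credentials:
      -
        name: "default-credentials"      # hypothetical credentials set name (name is the only required field)
        nfs_custom_username: root        # login used for NFS configuration; defaults to root
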
# {%- if item.nuh_notification_app_1_username is defined %} nuh_notification_app_1_username: {{ item.nuh_notification_app_1_username | indent(8, False) }} @@ -520,7 +529,7 @@ {%- endif %} # < NUH notification application 1 password > - # Password for NUH notification application + # Password for NUH notification application, and will be used for installation of NUH. # {%- if item.nuh_notification_app_1_password is defined %} nuh_notification_app_1_password: {{ item.nuh_notification_app_1_password | indent(8, False) }} @@ -529,7 +538,7 @@ {%- endif %} # < NUH notification application 2 username > - # Username for NUH notification application + # Username for NUH notification application, and will be used for installation of NUH. # {%- if item.nuh_notification_app_2_username is defined %} nuh_notification_app_2_username: {{ item.nuh_notification_app_2_username | indent(8, False) }} @@ -538,7 +547,7 @@ {%- endif %} # < NUH notification application 2 password > - # Password for NUH notification application + # Password for NUH notification application, and will be used for installation of NUH. # {%- if item.nuh_notification_app_2_password is defined %} nuh_notification_app_2_password: {{ item.nuh_notification_app_2_password | indent(8, False) }} diff --git a/src/deployment_templates/nfss.j2 b/src/deployment_templates/nfss.j2 new file mode 100644 index 0000000000..b9263ac659 --- /dev/null +++ b/src/deployment_templates/nfss.j2 @@ -0,0 +1,41 @@ +############################################################################### +# NFS Server VM +# +# Configure NFS Server VM using MetroAE. Note: Metroae will not bring up the NFS server, it will configure it for Elasticsearch mounting. +# +# Automatically generated by {{ generator_script | default("script") }}. +# + +{% if nfss is defined and nfss %} +{% for item in nfss %} +# +# NFS {{ loop.index }} +# +- + ##### NFS parameters + + # < Hostname > + # Hostname of NFS Server + # + hostname: "{{ item.hostname }}" + + # < NFS server IP address > + # IP address of the NFS server. + # + nfs_ip: "{{ item.nfs_ip }}" + + # < NFS mount directory location > + # Optional user specified location of the mount directory to export for the NFS. Defaults to /nfs. + # + {%- if item.mount_directory_location is defined %} + mount_directory_location: "{{ item.mount_directory_location }}" + {%- else %} + # mount_directory_location: /nfs + {%- endif %} + + #################### + +{% endfor %} +{% else %} +[ ] +{% endif %} diff --git a/src/deployment_templates/nsgv_network_port_vlans.j2 b/src/deployment_templates/nsgv_network_port_vlans.j2 new file mode 100644 index 0000000000..2f8530bd44 --- /dev/null +++ b/src/deployment_templates/nsgv_network_port_vlans.j2 @@ -0,0 +1,56 @@ +############################################################################### +# NSGv Network Port VLANs +# +# Specify NSGvs network port VLAN configuration. +# +# Automatically generated by {{ generator_script | default("script") }}. 
+# + +{% if nsgv_network_port_vlans is defined and nsgv_network_port_vlans %} +{% for item in nsgv_network_port_vlans %} +# +# NSGv {{ loop.index }} +# +- + ##### Network ports + + # < NSGv Network Port VLAN Name > + # VLAN name of the NSGv network port + # + name: "{{ item.name }}" + + # < NSGv Network Port VLAN Number > + # VLAN number of the NSGv network port + # + vlan_number: {{ item.vlan_number }} + + # < VSC Infra Profile Name > + # Name of the VSC infra profile for the NSG on the VSD + # + vsc_infra_profile_name: "{{ item.vsc_infra_profile_name }}" + + # < VSC Infra Profile First Controller > + # Host name or IP address of the VSC infra profile first controller for the NSG + # + first_controller_address: "{{ item.first_controller_address }}" + + # < VSC Infra Profile Second Controller > + # Host name or IP address of the VSC infra profile second controller for the NSG + # + {%- if item.second_controller_address is defined %} + second_controller_address: "{{ item.second_controller_address }}" + {%- else %} + # second_controller_address: "" + {%- endif %} + + # < Create uplink connection on this Vlan > + # If vlan 0 has an uplink, then other vlans can't. If multiple uplinks are defined, then network acceleration will be enabled + # + uplink: {{ item.uplink | lower }} + + ################### + +{% endfor %} +{% else %} +[ ] +{% endif %} diff --git a/src/deployment_templates/nsgvs.j2 b/src/deployment_templates/nsgvs.j2 index 1e7507326c..3ddda35140 100644 --- a/src/deployment_templates/nsgvs.j2 +++ b/src/deployment_templates/nsgvs.j2 @@ -506,6 +506,15 @@ # network_port_physical_name: port1 {%- endif %} + # < NSG Network Port VLAN list > + # VLAN name list of the network port for the NSG + # + {%- if item.network_port_vlans is defined %} + network_port_vlans: [ {% for i in item.network_port_vlans | default([]) %}"{{ i }}", {% endfor %}] + {%- else %} + # network_port_vlans: [] + {%- endif %} + # < NSG Access Port Name > # Name of the access port for the NSG. Deprecated in favor of access_ports # diff --git a/src/deployment_templates/upgrade.j2 b/src/deployment_templates/upgrade.j2 index 1a634cd06c..59e4099437 100644 --- a/src/deployment_templates/upgrade.j2 +++ b/src/deployment_templates/upgrade.j2 @@ -62,5 +62,14 @@ upgrade_portal: {{ upgrade_portal | lower }} # upgrade_portal: False {%- endif %} +# < VSD Pre-upgrade Database Check Script Path > +# Path on the MetroAE host to the VSD pre-upgrade database check script +# +{%- if vsd_preupgrade_db_check_script_path is defined %} +vsd_preupgrade_db_check_script_path: "{{ vsd_preupgrade_db_check_script_path }}" +{%- else %} +# vsd_preupgrade_db_check_script_path: "" +{%- endif %} + ############# diff --git a/src/deployment_templates/vsds.j2 b/src/deployment_templates/vsds.j2 index 16db8a7986..1fd42759eb 100644 --- a/src/deployment_templates/vsds.j2 +++ b/src/deployment_templates/vsds.j2 @@ -458,6 +458,37 @@ ############################### + ##### VSD RAM, CPU and Disk Parameters + + # < VSD RAM > + # Amount of VSD RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + {%- if item.vsd_ram is defined %} + vsd_ram: {{ item.vsd_ram }} + {%- else %} + # vsd_ram: (global VSD RAM) + {%- endif %} + + # < VSD CPU cores > + # Number of CPU's for VSD. 
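Editor's note: to show how the new nsgv_network_port_vlans entries and the network_port_vlans list on an NSGv are meant to line up, here is a sketch based on the raw example data added in this change. The NSGv vmname and the assumption that the strings in network_port_vlans refer to the VLAN entry names are illustrative, not a verified configuration.

    nsgv_network_port_vlans:
      -
        name: "vlan1"
        vlan_number: 0
        vsc_infra_profile_name: "vsc1"
        first_controller_address: "10.0.0.1"
        uplink: True
      -
        name: "vlan2"
        vlan_number: 2
        vsc_infra_profile_name: "vsc2"
        first_controller_address: "10.0.1.1"
        uplink: False

    nsgvs:
      -
        vmname: "nsgv1"                              # hypothetical NSGv; other required fields omitted
        network_port_vlans: [ "vlan1", "vlan2" ]     # assumed to reference the VLAN names above

Per the schema description and the module changes below, if VLAN 0 carries an uplink the other VLANs do not get one, and defining more than one uplink switches the NSG to PERFORMANCE network acceleration.
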
+ # + {%- if item.vsd_cpu_cores is defined %} + vsd_cpu_cores: {{ item.vsd_cpu_cores }} + {%- else %} + # vsd_cpu_cores: (global VSD CPU Cores) + {%- endif %} + + # < VSD Disk Size > + # Amount of VSD disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + {%- if item.vsd_fallocate_size_gb is defined %} + vsd_fallocate_size_gb: {{ item.vsd_fallocate_size_gb }} + {%- else %} + # vsd_fallocate_size_gb: (global VSD CPU Cores) + {%- endif %} + + ###################################### + {% endfor %} {% else %} [ ] diff --git a/src/deployment_templates/vstats.j2 b/src/deployment_templates/vstats.j2 index df47c2ba30..f2406ccfe5 100644 --- a/src/deployment_templates/vstats.j2 +++ b/src/deployment_templates/vstats.j2 @@ -476,6 +476,37 @@ ######################### + ##### VSTAT RAM, CPU and Disk Parameters + + # < VSTAT RAM > + # Valid for only KVM and VCenter deployments. Amount of VSTAT RAM to allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments must use a value greater than or equal to the default. + # + {%- if item.vstat_ram is defined %} + vstat_ram: {{ item.vstat_ram }} + {%- else %} + # vstat_ram: (global VSTAT RAM) + {%- endif %} + + # < VSTAT CPU cores > + # Valid for only KVM and VCenter deployments. Number of CPU's for VSTAT. + # + {%- if item.vstat_cpu_cores is defined %} + vstat_cpu_cores: {{ item.vstat_cpu_cores }} + {%- else %} + # vstat_cpu_cores: (global VSTAT CPU) + {%- endif %} + + # < VSTAT Disk Size > + # Amount of VSTAT disk space to pre-allocate, in GB. Note: Values smaller than the default are for lab and PoC only. Production deployments should not modify this value. + # + {%- if item.vstat_allocate_size_gb is defined %} + vstat_allocate_size_gb: {{ item.vstat_allocate_size_gb }} + {%- else %} + # vstat_allocate_size_gb: (global VSTAT DISK) + {%- endif %} + + ######################################## + {% endfor %} {% else %} [ ] diff --git a/src/deployment_templates/webfilters.j2 b/src/deployment_templates/webfilters.j2 index 82b35d1ed9..19900fa92e 100644 --- a/src/deployment_templates/webfilters.j2 +++ b/src/deployment_templates/webfilters.j2 @@ -97,6 +97,33 @@ # cert_name: (Hostname) {%- endif %} + # < Http proxy > + # Optional HTTP Proxy for webfilter VM + # + {%- if item.web_http_proxy is defined %} + web_http_proxy: "{{ item.web_http_proxy }}" + {%- else %} + # web_http_proxy: "" + {%- endif %} + + # < Http proxy host > + # HTTP Proxy host for webfilter proxy + # + {%- if item.web_proxy_host is defined %} + web_proxy_host: "{{ item.web_proxy_host }}" + {%- else %} + # web_proxy_host: "" + {%- endif %} + + # < Http proxy port > + # HTTP Proxy port for webfilter proxy + # + {%- if item.web_proxy_port is defined %} + web_proxy_port: "{{ item.web_proxy_port }}" + {%- else %} + # web_proxy_port: "" + {%- endif %} + # < Run incompass operation command > # Run incompass operation command. This is enabled by default and may take up to 20 minutes and requires internet connection. 
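Editor's note: the per-component sizing blocks added to vsds.j2 and vstats.j2 above can be fed from the deployment files; the sketch below uses the keyed layout of the raw example data and purely illustrative lab values. The schema warning still applies: production deployments must not go below the global defaults.

    vsds:
      -
        hostname: "vsd1.example.com"     # other required VSD fields omitted
        vsd_ram: 24                      # GB, lab/PoC sizing only
        vsd_cpu_cores: 6
        vsd_fallocate_size_gb: 285       # GB of disk to pre-allocate

    vstats:
      -
        hostname: "es1.example.com"      # other required VSTAT fields omitted
        vstat_ram: 16                    # GB, KVM and vCenter deployments only
        vstat_cpu_cores: 4
        vstat_allocate_size_gb: 250      # GB of disk to pre-allocate
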
# diff --git a/src/generate_deployment_spreadsheet_template.py b/src/generate_deployment_spreadsheet_template.py index 7e7020e067..20bc10c51b 100755 --- a/src/generate_deployment_spreadsheet_template.py +++ b/src/generate_deployment_spreadsheet_template.py @@ -98,6 +98,9 @@ - schema: webfilters headers: [Webfilters, Webfilter1, "", Descriptions] +- schema: nfss + headers: [NFSs, NFS Server 1, NFS Server 2, "", Descriptions] + - "" - When using MetroAE for zero-factor bootstrap of NSGvs diff --git a/src/inventory/hosts b/src/inventory/hosts index 68a598a395..cd7d4324db 100644 --- a/src/inventory/hosts +++ b/src/inventory/hosts @@ -3,3 +3,4 @@ # Changes made to this file may be overwritten. # localhost ansible_connection=local ansible_python_interpreter=python + diff --git a/src/library/create_zfb_profile.py b/src/library/create_zfb_profile.py index 3190548424..acdd389f16 100755 --- a/src/library/create_zfb_profile.py +++ b/src/library/create_zfb_profile.py @@ -27,7 +27,7 @@ vsd_license_file: description: - Set path to VSD license file. - required:True + required:False vsd_auth: description: - Credentials for accessing VSD. Attributes: @@ -69,6 +69,11 @@ - Parameters required for an NSG ports for ZFB. Attributes: - network_port.name - network_port.physicalName + - network_port.vlans: + - vlan_value + - vsc_infra_profile_name + - firstController + - secondController - access_ports: - access_port.name - access_port.physicalName @@ -90,7 +95,7 @@ - name - firstController - secondController - required:True + required:False ''' @@ -131,6 +136,11 @@ network_port: name: port1_network physicalName: port1 + vlans: + - vlan_value: 242 + - vsc_infra_profile_name: vsc_infra + - firstController: 192.168.1.100 + - secondController: 192.168.1.101 access_ports: - name: port2_access physicalName: port2 @@ -236,8 +246,7 @@ def create_nsg_gateway_template(module, csproot, nsg_infra): return nsg_temp -def create_vsc_infra_profile(module, csproot): - vsc_params = module.params['zfb_vsc_infra'] +def create_vsc_infra_profile(module, csproot, vsc_params): vsc_infra = csproot.infrastructure_vsc_profiles.get_first( "name is '%s'" % vsc_params['name']) @@ -249,7 +258,7 @@ def create_vsc_infra_profile(module, csproot): return vsc_infra -def create_nsgv_ports(module, nsg_temp, vsc_infra): +def create_nsgv_ports(module, nsg_temp, csproot, uplinks): port_params = module.params['zfb_ports'] zfb_constants = module.params['zfb_constants'] @@ -264,16 +273,42 @@ def create_nsgv_ports(module, nsg_temp, vsc_infra): network_port['portType'] = zfb_constants['network_port_type'] port_temp = VSPK.NUNSPortTemplate(data=network_port) nsg_temp.create_child(port_temp) - # Attach vlan0 and vsc profile - vlan_temp = VSPK.NUVLANTemplate() - vlan_temp.value = '0' - vlan_temp.associated_vsc_profile_id = vsc_infra.id - port_temp.create_child(vlan_temp) - uplink = VSPK.NUUplinkConnection() - uplink.mode = "Dynamic" - uplink.role = "PRIMARY" - vlan_temp.create_child(uplink) + if 'vlans' in network_port: + for vlan in network_port['vlans']: + vsc_params = { + 'name': vlan['vsc_infra_profile_name'], + 'firstController': vlan['firstController'] + } + if 'secondController' in vlan: + vsc_params['secondController'] = vlan['secondController'] + + vlan_temp = VSPK.NUVLANTemplate() + vlan_temp.value = vlan['vlan_value'] + if vlan['uplink']: + # Attach vlan and vsc profile + vsc_infra = create_vsc_infra_profile(module, csproot, vsc_params) + vlan_temp.associated_vsc_profile_id = vsc_infra.id + port_temp.create_child(vlan_temp) + + if 
int(vlan['vlan_value']) in uplinks: + uplink = VSPK.NUUplinkConnection() + uplink.mode = "Dynamic" + uplink.role = "PRIMARY" + vlan_temp.create_child(uplink) + else: + vsc_params = module.params['zfb_vsc_infra'] + vsc_infra = create_vsc_infra_profile(module, csproot, vsc_params) + # Attach vlan0 and vsc profile + vlan_temp = VSPK.NUVLANTemplate() + vlan_temp.value = '0' + vlan_temp.associated_vsc_profile_id = vsc_infra.id + port_temp.create_child(vlan_temp) + + uplink = VSPK.NUUplinkConnection() + uplink.mode = "Dynamic" + uplink.role = "PRIMARY" + vlan_temp.create_child(uplink) for access_port in access_ports: port_temp = nsg_temp.ns_port_templates.get_first( @@ -302,7 +337,7 @@ def create_enterprise(csproot, name): return enterprise -def create_nsg_device(module, csproot, nsg_temp): +def create_nsg_device(module, csproot, nsg_temp, uplinks): nsg_params = module.params['zfb_nsg'] nsg_infra = module.params['zfb_nsg_infra'] @@ -322,6 +357,9 @@ def create_nsg_device(module, csproot, nsg_temp): if nsg_infra["instanceSSHOverride"] == "ALLOWED": nsg_data["SSHService"] = nsg_params['ssh_service'] + if len(uplinks) > 1: + nsg_data["networkAcceleration"] = "PERFORMANCE" + nsg_dev = VSPK.NUNSGateway(data=nsg_data) metro_org.create_child(nsg_dev) @@ -360,6 +398,22 @@ def create_iso_file(module, metro_org, nsg_temp): subprocess.call("gzip -f -d %s/user_image.iso.gz" % nsgv_path, shell=True) +def get_uplinks(module): + port_params = module.params['zfb_ports'] + network_port = port_params['network_port'] + + if 'vlans' not in network_port: + return [] + + uplinks = [int(x['vlan_value']) for x in network_port['vlans'] if x['uplink']] + vlan0_uplink = next((x for x in uplinks if x == 0), None) + if vlan0_uplink is not None: + # Don't create uplinks on other vlans if we need one on vlan 0 + uplinks = [0] + + return uplinks + + def main(): arg_spec = dict( nsgv_path=dict( @@ -372,7 +426,7 @@ def main(): required=False, type='str'), vsd_license_file=dict( - required=True, + required=False, type='str'), vsd_auth=dict( required=True, @@ -409,12 +463,12 @@ def main(): # Get VSD license vsd_license = "" - try: - with open(vsd_license_file, 'r') as lf: - vsd_license = lf.read() - except Exception as e: - module.fail_json(msg="ERROR: Failure reading file: %s" % e) - return + if vsd_license_file != '': + try: + with open(vsd_license_file, 'r') as lf: + vsd_license = lf.read() + except Exception as e: + module.fail_json(msg="ERROR: Failure reading file: %s" % e) # Create a session as csp user try: @@ -430,19 +484,21 @@ def main(): nsg_already_configured = False # Create nsg templates and iso file - if (not is_license_already_installed(csproot, vsd_license)): - install_license(csproot, vsd_license) + if vsd_license_file != '': + if (not is_license_already_installed(csproot, vsd_license)): + install_license(csproot, vsd_license) if has_nsg_configuration(module, csproot): nsg_already_configured = True create_proxy_user(module, session) + uplinks = get_uplinks(module) + nsg_infra = create_nsg_infra_profile(module, csproot) nsg_temp = create_nsg_gateway_template(module, csproot, nsg_infra) - vsc_infra = create_vsc_infra_profile(module, csproot) - create_nsgv_ports(module, nsg_temp, vsc_infra) - metro_org = create_nsg_device(module, csproot, nsg_temp) + create_nsgv_ports(module, nsg_temp, csproot, uplinks) + metro_org = create_nsg_device(module, csproot, nsg_temp, uplinks) if ("skip_iso_create" not in module.params or module.params["skip_iso_create"] is not True): diff --git a/src/library/tests/test_create_zfb_profile.py 
b/src/library/tests/test_create_zfb_profile.py index 3764955eac..a61e871f7d 100644 --- a/src/library/tests/test_create_zfb_profile.py +++ b/src/library/tests/test_create_zfb_profile.py @@ -283,18 +283,6 @@ def test_license__installed(self): assert is_license_already_installed(mock_root, vsd_license_str) is True - @patch(MODULE_PATCH) - def test__bad_license(self, module_patch): - params = dict(TEST_PARAMS) - params["vsd_license_file"] = "invalid_file.bad" - mock_module = setup_module(module_patch, params) - - main() - - mock_module.fail_json.assert_called_once() - args, kwargs = mock_module.fail_json.call_args_list[0] - assert "ERROR: Failure reading file: " in kwargs["msg"] - @patch(MODULE_PATCH) @patch(VSPK_PATCH) def test__cannot_connect(self, vspk_patch, module_patch): diff --git a/src/menu b/src/menu index 7e7f227d9a..94c7da7c38 100644 --- a/src/menu +++ b/src/menu @@ -78,6 +78,7 @@ if [[ ($METROAE_SETUP_TYPE != $CONFIG_SETUP_TYPE && $RUN_MODE == $CONTAINER_RUN_ MENU+=(',install,webfilters' 'Install Webfilter VMs' 'playbook' 'install_webfilters' ',install,webfilters') MENU+=(',install,webfilters,predeploy' 'Pre-deploy Webfilter VMs' 'playbook' 'webfilter_predeploy' ',install,webfilters') MENU+=(',install,webfilters,deploy' 'Deploy Webfilter VMs' 'playbook' 'webfilter_deploy' ',install,webfilters') + MENU+=(',install,nfss' 'Configure NFS Servers (NFSs)' 'playbook' 'configure_nfs_server' ',install') ################################################################################# # Upgrade COMMANDS # @@ -177,6 +178,9 @@ if [[ ($METROAE_SETUP_TYPE != $CONFIG_SETUP_TYPE && $RUN_MODE == $CONTAINER_RUN_ MENU+=(',destroy' 'Destroy Nuage components' 'playbook' 'destroy_everything' '') MENU+=(',destroy,everything' 'Destroys all running Nuage components' 'playbook' 'destroy_everything' ',destroy') MENU+=(',destroy,vsds' 'Destroy Nuage VSDs' 'playbook' 'vsd_destroy' ',destroy') + MENU+=(',destroy,vsds,new' 'Destroy New Nuage VSDs' 'playbook' 'vsd_upgrade_destroy_new' ',destroy') + MENU+=(',destroy,vsds,old' 'Destroy Old Nuage VSDs' 'playbook' 'vsd_upgrade_destroy_old' ',destroy') + MENU+=(',destroy,vsds,everything' 'Destroy Everything Nuage VSDs' 'playbook' 'vsd_upgrade_destroy_everything' ',destroy') MENU+=(',destroy,vscs' 'Destroy Nuage VSCs' 'playbook' 'vsc_destroy' ',destroy') MENU+=(',destroy,vstats' 'Destroy Nuage VSTATs (ES)' 'playbook' 'vstat_destroy' ',destroy') MENU+=(',destroy,vnsutils' 'Destroy Nuage VNSUtils' 'playbook' 'vnsutil_destroy' ',destroy') @@ -241,12 +245,14 @@ if [[ ($METROAE_SETUP_TYPE != $CONFIG_SETUP_TYPE && $RUN_MODE == $CONTAINER_RUN_ MENU+=(',vstat,failback' 'Initiates a failback to primary VSTATs' 'playbook' 'vstat_failback' ',vstat') MENU+=(',vstat,yum,update' 'Run yum update for vstat' 'playbook' 'vstat_yum_update' ',vstat') MENU+=(',vstat,copy,sshid' 'Copy SSH Public Key to VSTAT' 'playbook' 'copy_sshid_vstat' ',vstat') + MENU+=(',vstat,security,hardening' 'Harden the VSTAT for security' 'playbook' 'vstat_security_hardening' ',vstat') MENU+=(',vstat,update,license' 'Update the license on VSTAT' 'playbook' 'vstat_update_license' ',vstat') ################################################################################# # NUH COMMANDS # ################################################################################# MENU+=(',nuh,copy,sshid' 'Copy SSH Public Key to nuh' 'playbook' 'copy_sshid_nuh' ',nuh') + MENU+=(',nuh,generate,certificates' 'Generate NUH certificates after VSD installation' 'playbook' 'generate_nuh_certificates' ',nuh') 
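Editor's note: assuming the new menu paths map to command-line arguments the same way as the existing entries (each comma-separated path element becomes one argument to the wrapper), the menu additions above would be reachable with commands along these lines. The paths are taken from the menu entries themselves and the wrapper name from the rest of this change; treat these as a sketch rather than confirmed syntax.

    ./metroae-container install nfss
    ./metroae-container destroy vsds old
    ./metroae-container destroy vsds new
    ./metroae-container vstat security hardening
    ./metroae-container nuh generate certificates
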
################################################################################# # Tools COMMANDS # diff --git a/src/playbooks/build.yml b/src/playbooks/build.yml index d46660829d..624304ca2b 100644 --- a/src/playbooks/build.yml +++ b/src/playbooks/build.yml @@ -1,17 +1,31 @@ --- - hosts: localhost pre_tasks: - - name: Check for Ansible version + - name: Get Ansible version + shell: + cmd: pip3 freeze | grep -w ansible + register: ansible_info + + - name: Extract Ansible version from output + set_fact: + version_ansible: "{{ ansible_info.stdout | regex_search('[^=].\\d.+\\d') }}" + + - name: Check Ansible version + assert: + that: "version_ansible is version('3.4.0', operator='ge', strict=True)" + msg: "Ansible version must be greater than or equal to 3.4.0. Found Ansible version {{ version_ansible }}" + + - name: Check Ansible base version assert: - that: "ansible_version.full is version('2.9.2', operator='ge', strict=True)" - msg: "Ansible version must be greater than or equal to 2.9.2. Found Ansible version {{ansible_version.full}}" + that: "ansible_version.full is version('2.10.15', operator='ge', strict=True)" + msg: "Ansible base version must be greater than or equal to 2.10.15. Found Ansible base version {{ ansible_version.full }}" - name: Get Paramiko version shell: - cmd: pip freeze | grep paramiko + cmd: pip3 freeze | grep paramiko register: paramiko_info - - name: Extract version from output + - name: Extract Paramiko version from output set_fact: paramiko_version: "{{ paramiko_info.stdout | regex_search('[^=].\\d.+\\d') }}" diff --git a/src/playbooks/with_build/brand_vsd.yml b/src/playbooks/with_build/brand_vsd.yml index e78b852bf9..a55bfbf3b6 100644 --- a/src/playbooks/with_build/brand_vsd.yml +++ b/src/playbooks/with_build/brand_vsd.yml @@ -2,9 +2,11 @@ - hosts: primary_vsds gather_facts: no tasks: - - include_role: + - include_role: name: vsd-deploy tasks_from: brand_vsd - vars: - vsd_username: "{{ vsd_custom_username | default(vsd_default_username) }}" - vsd_password: "{{ vsd_custom_password | default(vsd_default_password) }}" + vars: + vsd_username: "{{ vsd_custom_username | default(vsd_default_username) }}" + vsd_password: "{{ vsd_custom_password | default(vsd_default_password) }}" + vsd_branding_host: "{{ item }}" + with_items: "{{ groups['primary_vsds'] }}" diff --git a/src/playbooks/with_build/configure_nfs_server.yml b/src/playbooks/with_build/configure_nfs_server.yml new file mode 100644 index 0000000000..2b67dfcf39 --- /dev/null +++ b/src/playbooks/with_build/configure_nfs_server.yml @@ -0,0 +1,5 @@ +--- +- hosts: nfss + gather_facts: no + roles: + - configure-nfs diff --git a/src/playbooks/with_build/generate_nuh_certificates.yml b/src/playbooks/with_build/generate_nuh_certificates.yml new file mode 100644 index 0000000000..bd34cf9a85 --- /dev/null +++ b/src/playbooks/with_build/generate_nuh_certificates.yml @@ -0,0 +1,7 @@ +--- +- hosts: nuhs + gather_facts: no + tasks: + - include_role: + name: nuh-deploy + tasks_from: nuh_copy_certificates.yml diff --git a/src/playbooks/with_build/setup_vstat_vss_ui.yml b/src/playbooks/with_build/setup_vstat_vss_ui.yml index 6975354170..7a58823740 100644 --- a/src/playbooks/with_build/setup_vstat_vss_ui.yml +++ b/src/playbooks/with_build/setup_vstat_vss_ui.yml @@ -1,5 +1,5 @@ --- -- hosts: vstats +- hosts: vstats, data_vstats gather_facts: no tasks: - include_role: diff --git a/src/playbooks/with_build/vrs_upgrade.yml b/src/playbooks/with_build/vrs_upgrade.yml index 5de908ad39..bb8a9b3e5d 100644 --- 
a/src/playbooks/with_build/vrs_upgrade.yml +++ b/src/playbooks/with_build/vrs_upgrade.yml @@ -11,4 +11,3 @@ become: yes connection: local - diff --git a/src/playbooks/with_build/vsc_ha_upgrade_backup_and_prep_1.yml b/src/playbooks/with_build/vsc_ha_upgrade_backup_and_prep_1.yml index bfe07996b0..f48eb626af 100644 --- a/src/playbooks/with_build/vsc_ha_upgrade_backup_and_prep_1.yml +++ b/src/playbooks/with_build/vsc_ha_upgrade_backup_and_prep_1.yml @@ -9,7 +9,6 @@ - hosts: vsc_ha_node1 gather_facts: no serial: 1 - connection: local pre_tasks: - name: Set upgrade flag set_fact: diff --git a/src/playbooks/with_build/vsc_ha_upgrade_backup_and_prep_2.yml b/src/playbooks/with_build/vsc_ha_upgrade_backup_and_prep_2.yml index 8ccec95d63..691ae133d3 100644 --- a/src/playbooks/with_build/vsc_ha_upgrade_backup_and_prep_2.yml +++ b/src/playbooks/with_build/vsc_ha_upgrade_backup_and_prep_2.yml @@ -9,7 +9,6 @@ - hosts: vsc_ha_node2 gather_facts: no serial: 1 - connection: local pre_tasks: - name: Set upgrade flag set_fact: diff --git a/src/playbooks/with_build/vsc_ha_upgrade_postdeploy_1.yml b/src/playbooks/with_build/vsc_ha_upgrade_postdeploy_1.yml index e28b6ee7ff..5462913370 100644 --- a/src/playbooks/with_build/vsc_ha_upgrade_postdeploy_1.yml +++ b/src/playbooks/with_build/vsc_ha_upgrade_postdeploy_1.yml @@ -1,7 +1,6 @@ --- - hosts: vsc_ha_node1 gather_facts: no - connection: local pre_tasks: - name: Set upgrade flag set_fact: diff --git a/src/playbooks/with_build/vsc_ha_upgrade_postdeploy_2.yml b/src/playbooks/with_build/vsc_ha_upgrade_postdeploy_2.yml index 74ca033174..d78eec6563 100644 --- a/src/playbooks/with_build/vsc_ha_upgrade_postdeploy_2.yml +++ b/src/playbooks/with_build/vsc_ha_upgrade_postdeploy_2.yml @@ -1,7 +1,6 @@ --- - hosts: vsc_ha_node2 gather_facts: no - connection: local pre_tasks: - name: Set upgrade flag set_fact: diff --git a/src/playbooks/with_build/vsc_sa_upgrade_backup_and_prep.yml b/src/playbooks/with_build/vsc_sa_upgrade_backup_and_prep.yml index 8ce8bdad7b..7becd9fbdd 100644 --- a/src/playbooks/with_build/vsc_sa_upgrade_backup_and_prep.yml +++ b/src/playbooks/with_build/vsc_sa_upgrade_backup_and_prep.yml @@ -8,7 +8,6 @@ - hosts: vsc_sa_node gather_facts: no - connection: local pre_tasks: - name: Set upgrade flag set_fact: diff --git a/src/playbooks/with_build/vsc_sa_upgrade_postdeploy.yml b/src/playbooks/with_build/vsc_sa_upgrade_postdeploy.yml index 64e9e1d20c..21d8f0bcd2 100644 --- a/src/playbooks/with_build/vsc_sa_upgrade_postdeploy.yml +++ b/src/playbooks/with_build/vsc_sa_upgrade_postdeploy.yml @@ -1,7 +1,6 @@ --- - hosts: vsc_sa_node gather_facts: no - connection: local pre_tasks: - name: Set upgrade flag set_fact: diff --git a/src/playbooks/with_build/vsd_ha_upgrade_database_backup_and_decouple.yml b/src/playbooks/with_build/vsd_ha_upgrade_database_backup_and_decouple.yml index 221af32c1a..2c1110169c 100644 --- a/src/playbooks/with_build/vsd_ha_upgrade_database_backup_and_decouple.yml +++ b/src/playbooks/with_build/vsd_ha_upgrade_database_backup_and_decouple.yml @@ -31,7 +31,7 @@ name: common tasks_from: vsd-find-backup-node -- hosts: "{{ hostvars['localhost'].vsd_backup_node | default('vsd_ha_node1') }}" +- hosts: "{{ hostvars['localhost'].vsd_backup_node | default('vsd_upgrade_ha_node1') }}" gather_facts: no pre_tasks: - name: Fail if vsd_migration_iso_path is not defined diff --git a/src/playbooks/with_build/vsd_upgrade_destroy_everything.yml b/src/playbooks/with_build/vsd_upgrade_destroy_everything.yml new file mode 100644 index 0000000000..a9ec58c95d --- 
/dev/null +++ b/src/playbooks/with_build/vsd_upgrade_destroy_everything.yml @@ -0,0 +1,31 @@ +--- +- hosts: localhost + gather_facts: no + tasks: + - name: Prompt for destroy confirmation + include_role: + name: common + tasks_from: prompt-before-destroy + vars: + destroy_components_name: VSD + +- hosts: vsds + gather_facts: no + serial: 1 + pre_tasks: + - name: Lets run VSD destroy hooks + include_role: + name: hooks + tasks_from: main + vars: + - hooks_file_path: "{{ hook }}" + - hook_location: + - vsd_destroy + loop: "{{ hooks | default([]) }}" + loop_control: + loop_var: hook + roles: + - vsd-destroy + vars: + vm_name: "{{ vmname }}" + vm_name: "{{ upgrade_vmname }}" diff --git a/src/playbooks/with_build/vsd_upgrade_destroy_new.yml b/src/playbooks/with_build/vsd_upgrade_destroy_new.yml new file mode 100644 index 0000000000..e3928a283c --- /dev/null +++ b/src/playbooks/with_build/vsd_upgrade_destroy_new.yml @@ -0,0 +1,30 @@ +--- +- hosts: localhost + gather_facts: no + tasks: + - name: Prompt for destroy confirmation + include_role: + name: common + tasks_from: prompt-before-destroy + vars: + destroy_components_name: VSD + +- hosts: vsds + gather_facts: no + serial: 1 + pre_tasks: + - name: Lets run VSD destroy hooks + include_role: + name: hooks + tasks_from: main + vars: + - hooks_file_path: "{{ hook }}" + - hook_location: + - vsd_destroy + loop: "{{ hooks | default([]) }}" + loop_control: + loop_var: hook + roles: + - vsd-destroy + vars: + vm_name: "{{ upgrade_vmname }}" diff --git a/src/playbooks/with_build/vsd_upgrade_destroy_old.yml b/src/playbooks/with_build/vsd_upgrade_destroy_old.yml new file mode 100644 index 0000000000..d1bd3450e8 --- /dev/null +++ b/src/playbooks/with_build/vsd_upgrade_destroy_old.yml @@ -0,0 +1,30 @@ +--- +- hosts: localhost + gather_facts: no + tasks: + - name: Prompt for destroy confirmation + include_role: + name: common + tasks_from: prompt-before-destroy + vars: + destroy_components_name: VSD + +- hosts: vsds + gather_facts: no + serial: 1 + pre_tasks: + - name: Lets run VSD destroy hooks + include_role: + name: hooks + tasks_from: main + vars: + - hooks_file_path: "{{ hook }}" + - hook_location: + - vsd_destroy + loop: "{{ hooks | default([]) }}" + loop_control: + loop_var: hook + roles: + - vsd-destroy + vars: + vm_name: "{{ vmname }}" diff --git a/src/playbooks/with_build/vstat_deploy.yml b/src/playbooks/with_build/vstat_deploy.yml index 63fb68ecc4..dca6aa869e 100644 --- a/src/playbooks/with_build/vstat_deploy.yml +++ b/src/playbooks/with_build/vstat_deploy.yml @@ -15,6 +15,10 @@ loop_var: hook roles: - vstat-deploy + post_tasks: + - include_role: + name: vstat-deploy + tasks_from: vstat_security_hardening.yml - hosts: primary_vstats gather_facts: no @@ -36,6 +40,10 @@ loop_var: hook roles: - vstat-deploy + post_tasks: + - include_role: + name: vstat-deploy + tasks_from: vstat_security_hardening.yml - name: Run VSTAT Standby Deploy import_playbook: "vstat_standby_deploy.yml" diff --git a/src/playbooks/with_build/vstat_security_hardening.yml b/src/playbooks/with_build/vstat_security_hardening.yml new file mode 100644 index 0000000000..d498a0e3ca --- /dev/null +++ b/src/playbooks/with_build/vstat_security_hardening.yml @@ -0,0 +1,7 @@ +--- +- hosts: vstats,data_vstats,primary_vstats + gather_facts: no + tasks: + - include_role: + name: vstat-deploy + tasks_from: vstat_security_hardening.yml diff --git a/src/playbooks/with_build/vstat_upgrade.yml b/src/playbooks/with_build/vstat_upgrade.yml index b98d05ea49..718593b376 100644 --- 
a/src/playbooks/with_build/vstat_upgrade.yml +++ b/src/playbooks/with_build/vstat_upgrade.yml @@ -103,7 +103,7 @@ - name: Set Backup vstats flag set_fact: - is_backup_vstats: true + is_backup_vstats: false - name: Check if prereq satisfied for upgrade include_role: diff --git a/src/raw_example_data/kvm_sdwan_install/nfss.yml b/src/raw_example_data/kvm_sdwan_install/nfss.yml new file mode 100644 index 0000000000..981e1f398b --- /dev/null +++ b/src/raw_example_data/kvm_sdwan_install/nfss.yml @@ -0,0 +1,6 @@ +--- +nfss: + - + hostname: "nfs1.company.com" + nfs_ip: 192.168.110.74 + mount_directory_location: /nfs diff --git a/src/raw_example_data/nsgv_bootstrap/nsgv_network_port_vlans.yml b/src/raw_example_data/nsgv_bootstrap/nsgv_network_port_vlans.yml new file mode 100644 index 0000000000..0b20a33b36 --- /dev/null +++ b/src/raw_example_data/nsgv_bootstrap/nsgv_network_port_vlans.yml @@ -0,0 +1,17 @@ +--- +nsgv_network_port_vlans: + - + name: "vlan1" + vlan_number: 0 + vsc_infra_profile_name: "vsc1" + first_controller_address: "10.0.0.1" + second_controller_address: "10.0.0.2" + uplink: True + + - + name: "vlan2" + vlan_number: 2 + vsc_infra_profile_name: "vsc2" + first_controller_address: "10.0.1.1" + second_controller_address: "10.0.1.2" + uplink: False diff --git a/src/roles/build/tasks/main.yml b/src/roles/build/tasks/main.yml index 4865331998..5912e2db9a 100644 --- a/src/roles/build/tasks/main.yml +++ b/src/roles/build/tasks/main.yml @@ -73,6 +73,12 @@ when: nuhs is defined and nuhs|length > 0 tags: vns +- name: Update NFS variables + include_role: + name: common + tasks_from: nfs-process-vars + when: nfss is defined and nfss|length > 0 + - name: Update Netconf variables include_role: name: common diff --git a/src/roles/check-node-running/tasks/vcenter.yml b/src/roles/check-node-running/tasks/vcenter.yml index 31e8fb9a17..62ec290d6e 100644 --- a/src/roles/check-node-running/tasks/vcenter.yml +++ b/src/roles/check-node-running/tasks/vcenter.yml @@ -1,6 +1,6 @@ - name: Finding VM folder (ignoring errors) - connection: local + delegate_to: localhost vmware_guest_find: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -16,13 +16,13 @@ fail_msg: "Unexpected error from vmware guest find:\n{{ vm_folder.msg | default('') }}" - name: Gathering info on VM - connection: local + delegate_to: localhost vmware_guest_info: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" password: "{{ vcenter.password }}" datacenter: "{{ vcenter.datacenter }}" - folder: "/{{ vcenter.datacenter }}{{ vm_folder['folders'][0] }}" + folder: "{{ vm_folder['folders'][0] }}" name: "{{ vm_name }}" validate_certs: no register: vm_facts @@ -30,7 +30,7 @@ - block: - name: Get VSD directory if present (ignoring errors) - connection: local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" diff --git a/src/roles/common/tasks/apply-licenses-to-vsd.yml b/src/roles/common/tasks/apply-licenses-to-vsd.yml index 0e3cab1bdb..de1948213a 100644 --- a/src/roles/common/tasks/apply-licenses-to-vsd.yml +++ b/src/roles/common/tasks/apply-licenses-to-vsd.yml @@ -20,7 +20,7 @@ block: - name: Apply basic license (ignoring errors) - connection: local + delegate_to: localhost nuage_vspk: auth: "{{ vspk_auth }}" state: present @@ -42,7 +42,7 @@ assert: that: "not vsd_license_output.failed or vsd_license_output.msg is search('The license already exists in the system')" msg: "The basic license was not already present and could not be applied" - ignore_errors: 
vsd_continue_on_license_failure | default(false) + ignore_errors: "{{ vsd_continue_on_license_failure | default(false) }}" when: vsd_license_file is defined @@ -50,7 +50,7 @@ block: - name: Apply cluster license (ignoring errors) - connection: local + delegate_to: localhost nuage_vspk: auth: "{{ vspk_auth }}" type: License @@ -72,7 +72,7 @@ assert: that: "not vsd_cluster_license_output.failed or vsd_cluster_license_output.msg is search('The license already exists in the system')" msg: "The cluster license was not already present and could not be applied" - ignore_errors: not vsd_continue_on_license_failure | default(false) + ignore_errors: "{{ vsd_continue_on_license_failure | default(false) }}" when: - vsd_cluster_license_file is defined diff --git a/src/roles/common/tasks/check-predeploy-prereq.yml b/src/roles/common/tasks/check-predeploy-prereq.yml index 6fc43b1cc5..b17a39c9e6 100644 --- a/src/roles/common/tasks/check-predeploy-prereq.yml +++ b/src/roles/common/tasks/check-predeploy-prereq.yml @@ -108,7 +108,7 @@ - block: - name: Get info on datastore - connection: local + delegate_to: localhost vmware_datastore_info: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" diff --git a/src/roles/common/tasks/check-prereq.yml b/src/roles/common/tasks/check-prereq.yml index 987e18eb2c..3b99dc4c4d 100644 --- a/src/roles/common/tasks/check-prereq.yml +++ b/src/roles/common/tasks/check-prereq.yml @@ -5,7 +5,7 @@ msg: "Ansible version 2.7.10 is required. Found Ansible version {{ ansible_version.full }}" - name: get the paramiko version - shell: set -o pipefail && pip show paramiko | grep ^Version + shell: set -o pipefail && pip3 show paramiko | grep ^Version register: paramiko_version changed_when: False @@ -25,7 +25,7 @@ changed_when: False - name : Output installed packages pip - command: pip list installed + command: pip3 list installed register: pipoutput changed_when: False diff --git a/src/roles/common/tasks/create-backup-dir.yml b/src/roles/common/tasks/create-backup-dir.yml index 0964c3e45f..0e02cbf1f7 100644 --- a/src/roles/common/tasks/create-backup-dir.yml +++ b/src/roles/common/tasks/create-backup-dir.yml @@ -1,5 +1,5 @@ - name: Pull facts of localhost - connection: local + delegate_to: localhost action: setup - name: get the username running the deploy diff --git a/src/roles/common/tasks/nfs-process-vars.yml b/src/roles/common/tasks/nfs-process-vars.yml new file mode 100644 index 0000000000..2dfe399fc2 --- /dev/null +++ b/src/roles/common/tasks/nfs-process-vars.yml @@ -0,0 +1,14 @@ +--- +- block: + + - name: Create host_vars files for nfss + include_tasks: write-host-files.yml + vars: + component_template: nfss + component_hostname: "{{ component.hostname }}" + loop_control: + loop_var: component + with_items: "{{ nfss }}" + + when: nfss is defined and nfss|length > 0 + diff --git a/src/roles/common/tasks/nuh-process-vars.yml b/src/roles/common/tasks/nuh-process-vars.yml index 91cf916191..200a8e0fc6 100644 --- a/src/roles/common/tasks/nuh-process-vars.yml +++ b/src/roles/common/tasks/nuh-process-vars.yml @@ -20,11 +20,6 @@ - block: - - name: Assert that Ansible version is at least 2.9.7 - assert: - that: "ansible_version.full is version('2.9.7', operator='ge', strict=True)" - msg: "Ansible version must be greater than or equal to 2.9.7 in order to deploy NUH in VMware. 
Found Ansible version {{ ansible_version.full }}" - - name: Set NUH VM OVA location include_role: name: common diff --git a/src/roles/common/tasks/read-deployment.yml b/src/roles/common/tasks/read-deployment.yml index e6cfb6820c..9c8cbf1488 100644 --- a/src/roles/common/tasks/read-deployment.yml +++ b/src/roles/common/tasks/read-deployment.yml @@ -188,8 +188,7 @@ - name: Validate that NSGVs ports are properly configured assert: - that: "{{ item.access_port_name is defined and item.access_ports is undefined }} or - {{ item.access_port_name is undefined and item.access_ports is defined }}" + that: "{{ not (item.access_port_name is defined and item.access_ports is defined) }}" msg: "Incorrect configuration {{ item.vmname }} has both access_port_name and access_ports defined." with_items: "{{ nsgvs }}" diff --git a/src/roles/common/tasks/set-major-minor-versions.yml b/src/roles/common/tasks/set-major-minor-versions.yml index cf9a22ab4a..fec57bfc9c 100644 --- a/src/roles/common/tasks/set-major-minor-versions.yml +++ b/src/roles/common/tasks/set-major-minor-versions.yml @@ -56,13 +56,13 @@ - from_major_version == to_major_version - from_minor_version == to_minor_version - (from_patch_version == to_patch_version) or (from_major_version|int >= 6) - - (upgrade_from_version|upper|replace('R', '')).split('U')[0] is version_compare('5.4.1', operator='ge', strict=True) + - (upgrade_from_version|upper|replace('R', '')).split('U')[0] is version('5.4.1', '>=') - name: Set upgrade to 20.10.R4 set_fact: block_and_allow_xmpp_connection: true when: - - upgrade_from_version|upper|replace('R','') is version_compare('20.10.4', '<') - - upgrade_to_version|upper|replace('R','') is version_compare('20.10.4', '>=') + - upgrade_from_version|upper|replace('R','') is version('20.10.4', '<') + - upgrade_to_version|upper|replace('R','') is version('20.10.4', '>=') when: upgrade_from_version is defined and upgrade_to_version is defined diff --git a/src/roles/common/tasks/setup-kvm.yml b/src/roles/common/tasks/setup-kvm.yml index 33574cee37..cf080b173b 100644 --- a/src/roles/common/tasks/setup-kvm.yml +++ b/src/roles/common/tasks/setup-kvm.yml @@ -44,9 +44,13 @@ - bridge-utils - libguestfs - libguestfs-tools + - libvirt-devel - libvirt-python + lock_timeout: 100 state: present when: ansible_os_family == "RedHat" + vars: + ansible_python_interpreter: /usr/bin/python2 - name: If Debian, install packages for Debian OS family distros apt: @@ -54,10 +58,13 @@ - qemu-kvm - libvirt-bin - bridge-utils - - python-libvirt + - libvirt-dev state: present when: ansible_os_family == "Debian" + - name: Install libvirt python package + pip: name=libvirt-python + - name: Make sure libvirtd has started service: name: libvirtd diff --git a/src/roles/common/tasks/vcenter-deploy-image.yml b/src/roles/common/tasks/vcenter-deploy-image.yml index 30fd6ab4ea..aa20f92c53 100644 --- a/src/roles/common/tasks/vcenter-deploy-image.yml +++ b/src/roles/common/tasks/vcenter-deploy-image.yml @@ -1,8 +1,8 @@ - name: Deploy image on vCenter (ignoring errors) (Can take several minutes) - connection: local + delegate_to: localhost command: >- {{ ovftool_command }} - "vi://{{ vcenter.username | urlencode }}:{{ vcenter.password | urlencode }}@{{ vcenter_path }}" + 'vi://{{ "{}".format(vcenter.username) | urlencode }}:{{ "{}".format(vcenter.password) | urlencode }}@{{ vcenter_path }}' register: deploy_result ignore_errors: yes no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" @@ -10,20 +10,20 @@ - name: Show command debug debug: - msg: "{{ 
deploy_result.cmd | replace(vcenter.password | urlencode, '********') }}" + msg: '{{ deploy_result.cmd | replace("{}".format(vcenter.password) | urlencode, "********") }}' verbosity: 1 - name: Show stdout debug debug: - msg: "{{ deploy_result.stdout | replace(vcenter.password | urlencode, '********') }}" + msg: '{{ deploy_result.stdout | replace("{}".format(vcenter.password) | urlencode, "********") }}' verbosity: 1 - name: Show stderr debug debug: - msg: "{{ deploy_result.stderr | replace(vcenter.password | urlencode, '********') }}" + msg: '{{ deploy_result.stderr | replace("{}".format(vcenter.password) | urlencode, "********") }}' verbosity: 1 - name: Assert command succeeded assert: that: deploy_result.rc == 0 - fail_msg: "{{ deploy_result.stdout_lines | select('match', '.*rror.*') | join('\n') | replace(vcenter.password | urlencode, '********') }}" + fail_msg: '{{ deploy_result.stdout_lines | select("match", ".*rror.*") | join("\n") | replace("{}".format(vcenter.password) | urlencode, "********") }}' diff --git a/src/roles/common/tasks/vcenter-disable-cloud-init.yml b/src/roles/common/tasks/vcenter-disable-cloud-init.yml index bf887883de..9408c94cc6 100644 --- a/src/roles/common/tasks/vcenter-disable-cloud-init.yml +++ b/src/roles/common/tasks/vcenter-disable-cloud-init.yml @@ -1,5 +1,5 @@ - name: Disabling cloud-init - connection: local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -23,7 +23,7 @@ - "disable cloud-final" - name: Writing cloud-init disable file in the VM - connection: local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -38,7 +38,7 @@ vm_shell_args: " /etc/cloud/cloud-init.disabled" - name: Set the owner and group on the cloud-init disable file in the VM - connection: local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" diff --git a/src/roles/common/tasks/vcenter-inject-ssh-keys.yml b/src/roles/common/tasks/vcenter-inject-ssh-keys.yml index f75e2688fb..ac49eb3227 100644 --- a/src/roles/common/tasks/vcenter-inject-ssh-keys.yml +++ b/src/roles/common/tasks/vcenter-inject-ssh-keys.yml @@ -1,5 +1,5 @@ - name: Create the directory /root/.ssh for authorized_keys on the VM - connection: local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -14,7 +14,7 @@ vm_shell_args: " -p /root/.ssh" - name: Set the owner and group for the /root/.ssh directory in the VM - connection: local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -29,7 +29,7 @@ vm_shell_args: " 0 0 /root/.ssh" - name: Writing authorized_keys to the VM - connection: local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -44,7 +44,7 @@ vm_shell_args: " '{{ lookup('template', 'authorized_keys.j2') }}' > /root/.ssh/authorized_keys" - name: Set the mode on the authorized_keys file in the VM - connection: local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" diff --git a/src/roles/common/tasks/vcenter-setup-networking.yml b/src/roles/common/tasks/vcenter-setup-networking.yml index 79e6d6a746..900df88f01 100644 --- a/src/roles/common/tasks/vcenter-setup-networking.yml +++ b/src/roles/common/tasks/vcenter-setup-networking.yml @@ -1,5 +1,5 @@ - name: Enable IPv6 networking - connection: 
local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -16,7 +16,7 @@ - enable_ipv6 is defined - name: Writing eth0 network script file to the VM - connection: local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -31,7 +31,7 @@ vm_shell_args: " '{{ ifcfg_eth0_contents }}' > /etc/sysconfig/network-scripts/ifcfg-eth0" - name: Set the owner and group on the eth0 network script file in the VM - connection: local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -46,7 +46,7 @@ vm_shell_args: " 0 0 /etc/sysconfig/network-scripts/ifcfg-eth0" - name: Writing network file to the VM - connection: local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -61,7 +61,7 @@ vm_shell_args: " '{{ lookup('template', 'network.j2') }}' > /etc/sysconfig/network" - name: Set the owner and group on the network file in the VM - connection: local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -78,7 +78,7 @@ - block: - name: Set hostname for VCenter for current session - connection: local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -93,7 +93,7 @@ vm_shell_args: " '{{ inventory_hostname }}'" - name: Writing hostname file to the VM - connection: local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -108,7 +108,7 @@ vm_shell_args: " '{{ lookup('template', 'hostname.j2') }}' > /etc/hostname" - name: Set the owner and group on the hostname file in the VM - connection: local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" diff --git a/src/roles/common/tasks/vsc-tls-setup.yml b/src/roles/common/tasks/vsc-tls-setup.yml index ad892030fa..f1504ad94d 100644 --- a/src/roles/common/tasks/vsc-tls-setup.yml +++ b/src/roles/common/tasks/vsc-tls-setup.yml @@ -11,12 +11,17 @@ ssh_user: "{{ vsc_creds.username }}" check_login: True +- name: Add proxy setup + set_fact: + proxy_conf: '-o ProxyCommand="ssh -W %h:%p -q {{ ssh_proxy_configuration }}"' + when: ssh_proxy_configuration is defined + - block: - block: - name: Copy external certificates command: >- sshpass -p'{{ vsc_default_password }}' scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null - {{ item }} {{ vsc_default_username }}@{{ mgmt_ip }}:/{{ item | basename }} + {{ proxy_conf | default("") }} {{ item }} {{ vsc_default_username }}@{{ mgmt_ip }}:/{{ item | basename }} no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" with_items: - "{{ private_key_path }}" diff --git a/src/roles/common/tasks/vsd-generate-transfer-certificates.yml b/src/roles/common/tasks/vsd-generate-transfer-certificates.yml index c5a77044df..8f8ebfaacc 100644 --- a/src/roles/common/tasks/vsd-generate-transfer-certificates.yml +++ b/src/roles/common/tasks/vsd-generate-transfer-certificates.yml @@ -111,7 +111,7 @@ no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" when: "'4.0.4' in vsd_version and scp_user is defined and scp_location is defined" - remote_user: "{{ hostvars[vsd_hostname_list[0]].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) }}" + remote_user: "{{ hostvars[hostvars[inventory_hostname].vsd_hostname_list[0]].vsd_custom_username | 
default(vsd_custom_username | default(vsd_default_username)) }}" become: "{{ 'no' if hostvars[vsd_hostname_list[0]].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) == 'root' else 'yes' }}" become_flags: '-i' vars: diff --git a/src/roles/common/tasks/vsd-node-info.yml b/src/roles/common/tasks/vsd-node-info.yml index 22aace50bd..c46db06b97 100644 --- a/src/roles/common/tasks/vsd-node-info.yml +++ b/src/roles/common/tasks/vsd-node-info.yml @@ -55,12 +55,19 @@ register: python_exec become: no + - name: Setup custom pass to pass to VSD hostname + set_fact: + sshpass_command: "sshpass -p{{ custom_password }} sudo VSD_VERSION=$VSD_VERSION VSD_BUILD=$VSD_BUILD" + no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" + when: custom_password | default(vsd_default_password) != vsd_default_password + - name: Get the vsd node info - shell: "{{ python_exec.stdout }} /opt/vsd/sysmon/showStatus.py" # noqa 305 + shell: "{{ sshpass_command | default('') }} {{ python_exec.stdout }} /opt/vsd/sysmon/showStatus.py" # noqa 305 environment: VSD_VERSION: "{{ vsd_version.stdout }}" VSD_BUILD: "{{ vsd_build.stdout }}" register: vsd_info + no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" remote_user: "{{ custom_username | default(vsd_default_username) }}" become: "{{ 'no' if custom_username | default(vsd_default_username) == 'root' else 'yes' }}" diff --git a/src/roles/common/tasks/vsd-transfer-truststore.yml b/src/roles/common/tasks/vsd-transfer-truststore.yml index 0ee2d2cebc..c2d74d7245 100644 --- a/src/roles/common/tasks/vsd-transfer-truststore.yml +++ b/src/roles/common/tasks/vsd-transfer-truststore.yml @@ -14,7 +14,7 @@ scp -o StrictHostKeyChecking=no truststore.jks {{ stats_vsd }}.jks {{ stats_vsd }}.pem root@{{ stats_vsd }}:/opt/vsd/ejbca/p12/ chdir: /opt/vsd/ejbca/p12/ delegate_to: "{{ vsd_hostname_list[0] }}" - remote_user: "{{ hostvars[vsd_hostname_list[0]].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) }}" + remote_user: "{{ hostvars[hostvars[inventory_hostname].vsd_hostname_list[0]].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) }}" become: "{{ 'no' if hostvars[vsd_hostname_list[0]].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) == 'root' else 'yes' }}" no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" vars: diff --git a/src/roles/common/tasks/vstat-enable-stats.yml b/src/roles/common/tasks/vstat-enable-stats.yml index f7b04a4980..d2007de33b 100644 --- a/src/roles/common/tasks/vstat-enable-stats.yml +++ b/src/roles/common/tasks/vstat-enable-stats.yml @@ -18,26 +18,29 @@ tasks_from: get-vsd-build - name: Clear existing stats - command: /opt/vsd/vsd-stats.sh -d + command: "{{ sshpass_command | default('') }} /opt/vsd/vsd-stats.sh -d" environment: VSD_VERSION: "{{ vsd_version.stdout }}" VSD_BUILD: "{{ vsd_build.stdout }}" + no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" when: clear_stats | default(false) - name: Enable stats collection on standalone vsd when vstat is standalone - command: /opt/vsd/vsd-stats.sh -e {{ inventory_hostname }} + command: "{{ sshpass_command | default('') }} /opt/vsd/vsd-stats.sh -e {{ inventory_hostname }}" environment: VSD_VERSION: "{{ vsd_version.stdout }}" VSD_BUILD: "{{ vsd_build.stdout }}" + no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" when: - vsd_sa_or_ha is match('sa') - vstat_sa_or_ha is match('sa') - name: Enable stats collection on the cluster vsds when vstat is standalone - command: 
/opt/vsd/vsd-stats.sh -e {{ inventory_hostname }} + command: "{{ sshpass_command | default('') }} /opt/vsd/vsd-stats.sh -e {{ inventory_hostname }}" environment: VSD_VERSION: "{{ vsd_version.stdout }}" VSD_BUILD: "{{ vsd_build.stdout }}" + no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" when: - vsd_sa_or_ha is match('ha') - vstat_sa_or_ha is match('sa') @@ -45,29 +48,32 @@ - block: - name: Enable stats collection on the vsd(s) when vstat is clustered - command: /opt/vsd/vsd-stats.sh -e {{ groups['vstats'][0] }},{{ groups['vstats'][1] }},{{ groups['vstats'][2] }} + command: "{{ sshpass_command | default('') }} /opt/vsd/vsd-stats.sh -e {{ groups['vstats'][0] }},{{ groups['vstats'][1] }},{{ groups['vstats'][2] }}" environment: VSD_VERSION: "{{ vsd_version.stdout }}" VSD_BUILD: "{{ vsd_build.stdout }}" + no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" when: - vstat_sa_or_ha is match('ha') - groups['vstats'] is defined - name: Enable stats collection on the vsd(s) on primary vstats - command: /opt/vsd/vsd-stats.sh -e {{ groups['primary_vstats'][0] }},{{ groups['primary_vstats'][1] }},{{ groups['primary_vstats'][2] }} + command: "{{ sshpass_command | default('') }} /opt/vsd/vsd-stats.sh -e {{ groups['primary_vstats'][0] }},{{ groups['primary_vstats'][1] }},{{ groups['primary_vstats'][2] }}" environment: VSD_VERSION: "{{ vsd_version.stdout }}" VSD_BUILD: "{{ vsd_build.stdout }}" + no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" when: - vstat_sa_or_ha is match('ha') - groups['primary_vstats'] is defined - active and not failover - name: Enable stats collection on the vsd(s) for backup vstats in failover - command: /opt/vsd/vsd-stats.sh -e {{ groups['backup_vstats'][0] }},{{ groups['backup_vstats'][1] }},{{ groups['backup_vstats'][2] }} + command: "{{ sshpass_command | default('') }} /opt/vsd/vsd-stats.sh -e {{ groups['backup_vstats'][0] }},{{ groups['backup_vstats'][1] }},{{ groups['backup_vstats'][2] }}" environment: VSD_VERSION: "{{ vsd_version.stdout }}" VSD_BUILD: "{{ vsd_build.stdout }}" + no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" when: - vstat_sa_or_ha is match('ha') - groups['backup_vstats'] is defined @@ -80,7 +86,8 @@ - block: - name: Enable stats collection on the stats vsd(s) when deploying stats-out - command: /opt/vsd/vsd-stats.sh -e {{ groups['data_vstats'][0] }},{{ groups['data_vstats'][1] }},{{ groups['data_vstats'][2] }} -s {{ stats_out_proxy }} + command: "{{ sshpass_command | default('') }} /opt/vsd/vsd-stats.sh -e {{ groups['data_vstats'][0] }},{{ groups['data_vstats'][1] }},{{ groups['data_vstats'][2] }} -s {{ stats_out_proxy }}" + no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" environment: VSD_VERSION: "{{ vsd_version.stdout }}" VSD_BUILD: "{{ vsd_build.stdout }}" @@ -98,8 +105,9 @@ - name: Enable statistics flag on the primary vsd(s) when deploying stats-out command: >- - /opt/vsd/vsd-stats.sh -e {{ groups['vstats'][0] }},{{ groups['vstats'][1] }},{{ groups['vstats'][2] }} + {{ sshpass_command | default('') }} /opt/vsd/vsd-stats.sh -e {{ groups['vstats'][0] }},{{ groups['vstats'][1] }},{{ groups['vstats'][2] }} -v {{ hostvars[groups['stats_only_vsds'][0]]['mgmt_ip'] }},{{ hostvars[groups['stats_only_vsds'][1]]['mgmt_ip'] }},{{ hostvars[groups['stats_only_vsds'][2]]['mgmt_ip'] }} + no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" environment: VSD_VERSION: "{{ vsd_version.stdout }}" VSD_BUILD: "{{ vsd_build.stdout }}" @@ -119,11 +127,13 @@ - name: Get jboss status of all VSDs command: monit status jboss 
register: jboss_status - remote_user: "{{ hostvars[item].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) }}" - become: "{{ 'no' if hostvars[item].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) == 'root' else 'yes' }}" + remote_user: "{{ hostvars[hostvars[inventory_hostname].vsd_hostname_list].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) }}" + become: "{{ 'no' if hostvars[vsd].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) == 'root' else 'yes' }}" vars: - ansible_become_pass: "{{ hostvars[item].vsd_custom_password | default(vsd_custom_password | default(vsd_default_password)) }}" - delegate_to: "{{ item }}" + ansible_become_pass: "{{ hostvars[vsd].vsd_custom_password | default(vsd_custom_password | default(vsd_default_password)) }}" + delegate_to: "{{ vsd }}" + loop_control: + loop_var: vsd with_items: "{{ vsd_hostname_list }}" changed_when: False diff --git a/src/roles/common/tasks/wait-for-ssh.yml b/src/roles/common/tasks/wait-for-ssh.yml index 771d521acd..a90e43e984 100644 --- a/src/roles/common/tasks/wait-for-ssh.yml +++ b/src/roles/common/tasks/wait-for-ssh.yml @@ -20,3 +20,9 @@ delay: "{{ delay | default(10) }}" remote_user: "{{ username_on_the_host }}" become: false + ignore_errors: True + +- name: Check output of SSH test + assert: + that: "{{ wait_for_ssh_output.rc }} == 0" + msg: "Failed to connect to target_server {{ ssh_host }}. Is passwordless SSH set up?" diff --git a/src/roles/common/tasks/yum-update.yml b/src/roles/common/tasks/yum-update.yml index e2b5598b09..373f9e2efe 100644 --- a/src/roles/common/tasks/yum-update.yml +++ b/src/roles/common/tasks/yum-update.yml @@ -17,6 +17,8 @@ until: result is succeeded delay: 30 when: yum_update + vars: + ansible_python_interpreter: /usr/bin/python2 remote_user: "{{ vsd_default_username }}" when: @@ -41,6 +43,8 @@ until: result is succeeded delay: 30 when: vstat_yum_update + vars: + ansible_python_interpreter: /usr/bin/python2 remote_user: "{{ vstat_default_username }}" when: diff --git a/src/roles/common/templates/group_vars.all.j2 b/src/roles/common/templates/group_vars.all.j2 index 69fa7bff7f..9413598860 100644 --- a/src/roles/common/templates/group_vars.all.j2 +++ b/src/roles/common/templates/group_vars.all.j2 @@ -67,9 +67,9 @@ notification_app2: vsc_command_timeout_seconds: "{{ vsc_command_timeout_seconds|default(180) }}" vsc_scp_timeout_seconds: "{{ vsc_scp_timeout_seconds|default(720) }}" -{% if ssh_proxy_host is defined %} +{% if common.ssh_proxy_host is defined %} ssh_proxy_username: {{ common.ssh_proxy_username | default('root') }} -ssh_proxy_configuration: {{ common.ssh_proxy_username | default('root') }}@{{ ssh_proxy_host }} +ssh_proxy_configuration: {{ common.ssh_proxy_username | default('root') }}@{{ common.ssh_proxy_host }} ssh_proxy_host: {{ common.ssh_proxy_host }} {% endif %} @@ -125,6 +125,10 @@ skip_disable_stats_collection: {{ upgrade.skip_disable_stats_collection | defaul upgrade_portal: {{ upgrade.upgrade_portal | default(false) }} portal_maximum_disk_usage: 70 +{% if upgrade.vsd_preupgrade_db_check_script_path is defined %} +vsd_preupgrade_db_check_script_path: {{ upgrade.vsd_preupgrade_db_check_script_path }} +{% endif %} + vstat_default_username: "root" vstat_default_password: "Alcateldc" @@ -143,6 +147,8 @@ vnsutil_default_password: Alcateldc portal_default_username: root portal_default_password: Alcateldc +nfs_default_username: root + 
portal_database_default_username: vnsuser portal_database_default_password: Vnsuser1! @@ -219,6 +225,10 @@ vsc_custom_username: {{ encrypted.vsc_custom_username | indent(8, False) }} vsc_custom_password: {{ encrypted.vsc_custom_password | indent(8, False) }} {% endif %} +{% if encrypted.nfs_custom_username is defined %} +nfs_custom_username: {{ encrypted.nfs_custom_username | indent(8, False) }} +{% endif %} + {% if encrypted.netconf_vm_username is defined %} netconf_vm_username: {{ encrypted.netconf_vm_username | indent(8, False) }} {% endif %} @@ -448,4 +458,4 @@ monit_alert_not_on: {{ common.vsd_monit_alert_not_on }} {{ plugin_group_vars | default("")}} -vsd_continue_on_license_failure: {{common.vsd_continue_on_license_failure | default(false) }} +vsd_continue_on_license_failure: {{ common.vsd_continue_on_license_failure | default(false) }} diff --git a/src/roles/common/templates/hosts.j2 b/src/roles/common/templates/hosts.j2 index a8f5c2a4c3..4ac8023e28 100644 --- a/src/roles/common/templates/hosts.j2 +++ b/src/roles/common/templates/hosts.j2 @@ -2,7 +2,7 @@ # This file is automatically generated by build.yml. # Changes made to this file may be overwritten. # -localhost ansible_connection=local ansible_python_interpreter=python +localhost ansible_connection=local {% if vsds is defined and vsds|length > 0 %} @@ -252,7 +252,13 @@ vsc_ha_node2 {% for nuh in nuhs %} {{ nuh.hostname }} {% endfor %} +{% endif %} +{% if nfss is defined and nfss|length > 0 %} +[nfss] +{% for nfs in nfss %} +{{ nfs.hostname }} +{% endfor %} {% endif %} {% if nsgvs is defined and nsgvs|length > 0 %} @@ -386,6 +392,6 @@ portal_ha_node3 {{ plugin_hosts | default("")}} [all:vars] -{% if ssh_proxy_host is defined %} -ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q {{ ssh_proxy_username | default("root") }}@{{ ssh_proxy_host }}"' +{% if common.ssh_proxy_host is defined %} +ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q {{ common.ssh_proxy_username | default("root") }}@{{ common.ssh_proxy_host }}"' {% endif %} diff --git a/src/roles/common/templates/nfss.j2 b/src/roles/common/templates/nfss.j2 new file mode 100644 index 0000000000..722f2aaed6 --- /dev/null +++ b/src/roles/common/templates/nfss.j2 @@ -0,0 +1,9 @@ +# *** WARNING *** +# This is a generated file. 
Manual changes to this file +# will be lost if reset-build or build is run +# + +required_bridges: [] +hostname: {{ item.hostname }} +nfs_ip: {{ item.nfs_ip }} +mount_directory_location: {{ item.mount_directory_location | default('/nfs') }} diff --git a/src/roles/common/templates/nsgv.j2 b/src/roles/common/templates/nsgv.j2 index 14add05d0e..6123f187d0 100644 --- a/src/roles/common/templates/nsgv.j2 +++ b/src/roles/common/templates/nsgv.j2 @@ -105,6 +105,7 @@ zfb_nsg_infra: remoteLogServerAddress: "{{ nsgv_bootstrap.rsyslog_server | default('') }}" remoteLogServerPort: {{ nsgv_bootstrap.rsyslog_port | default(514) }} +{% if item.network_port_vlans is not defined %} zfb_vsc_infra: name: {{ item.vsc_infra_profile_name | default(nsgv_bootstrap.vsc_infra_profile_name) }} firstController: {{ item.first_controller_address | default(nsgv_bootstrap.first_controller_address) }} @@ -112,10 +113,27 @@ zfb_vsc_infra: secondController: {{ item.second_controller_address | default(nsgv_bootstrap.second_controller_address) }} {% endif %} +{% endif %} + zfb_ports: network_port: name: {{ item.network_port_name }} physicalName: {{ item.network_port_physical_name | default("port1") }} +{% if item.network_port_vlans is defined %} + vlans: +{% for port in nsgv_network_port_vlans %} +{% if port.name in item.network_port_vlans %} + - vlan_value: {{ port.vlan_number }} + uplink: {{port.uplink | default(False) }} + vsc_infra_profile_name: {{port.vsc_infra_profile_name }} + firstController: {{port.first_controller_address }} +{% if item.second_controller_address is defined or nsgv_bootstrap.second_controller_address is defined %} + secondController: {{ item.second_controller_address | default(nsgv_bootstrap.second_controller_address) }} +{% endif %} + +{% endif %} +{% endfor %} +{% endif %} access_ports: {% if item.access_port_name is defined %} - name: {{ item.access_port_name }} diff --git a/src/roles/common/templates/nuhs.j2 b/src/roles/common/templates/nuhs.j2 index 0d4a61522b..a62c737f4d 100644 --- a/src/roles/common/templates/nuhs.j2 +++ b/src/roles/common/templates/nuhs.j2 @@ -52,10 +52,8 @@ external_interface_networks: {% for interface in nuh_external_interfaces %} {% for external_if in item.external_interface_list %} {% if interface.name in external_if %} - - state: new - name: "{{ interface.external_bridge }}" - connected: true - start_connected: true + - network_name: "{{ interface.external_bridge }}" + label: {{ interface.label | default("Network Adapter " ~ ((loop.index|int + 2)|string)) }} {% if interface.dvswitch_name is defined %} dvswitch_name: "{{ interface.dvswitch_name }}" {% endif %} diff --git a/src/roles/common/templates/sdwan_portal.j2 b/src/roles/common/templates/sdwan_portal.j2 index fa873dfa07..b56ce8fdd4 100644 --- a/src/roles/common/templates/sdwan_portal.j2 +++ b/src/roles/common/templates/sdwan_portal.j2 @@ -42,11 +42,15 @@ cpuset: password_reset_email: {{ item.password_reset_email }} new_account_email: {{ item.new_account_email }} forgot_password_email: {{ item.forgot_password_email }} -smtp_fqdn: {{ item.smtp_fqdn }} -smtp_port: {{ item.smtp_port }} +smtp_fqdn: {{ item.smtp_fqdn | default("") }} +smtp_port: {{ item.smtp_port | default("") }} smtp_secure: {{ item.smtp_secure | default("") }} -smtp_user: {{ encrypted.smtp_auth_username | default("") }} -smtp_password: {{ encrypted.smtp_auth_password | default("") }} +{% if encrypted.smtp_auth_username is defined %} +smtp_user: {{ encrypted.smtp_auth_username }} +{% endif %} +{% if encrypted.smtp_auth_password is defined %} 
+smtp_password: {{ encrypted.smtp_auth_password }} +{% endif %} sdwan_portal_secure: {{ item.sdwan_portal_secure | default("false") }} {% if portal_path is defined %} diff --git a/src/roles/common/templates/vsd.j2 b/src/roles/common/templates/vsd.j2 index 463c3b857f..74c4afbad7 100644 --- a/src/roles/common/templates/vsd.j2 +++ b/src/roles/common/templates/vsd.j2 @@ -130,3 +130,15 @@ intermediate_certificate_path: {{ item.intermediate_certificate_path }} {% if item.certificate_path is defined %} certificate_path: {{ item.certificate_path }} {% endif %} + +{% if item.vsd_ram is defined %} +vsd_ram: {{ item.vsd_ram }} +{% endif %} + +{% if item.vsd_cpu_cores is defined %} +vsd_cpu_cores: {{ item.vsd_cpu_cores }} +{% endif %} + +{% if item.vsd_fallocate_size_gb is defined %} +vsd_fallocate_size_gb: {{ item.vsd_fallocate_size_gb }} +{% endif %} diff --git a/src/roles/common/templates/vstat.j2 b/src/roles/common/templates/vstat.j2 index c51abffafd..8d1d598666 100644 --- a/src/roles/common/templates/vstat.j2 +++ b/src/roles/common/templates/vstat.j2 @@ -104,3 +104,15 @@ vstat_custom_password: {{ creds.vstat_custom_password | indent(8, False) }} {% elif encrypted.vstat_custom_password is defined %} vstat_custom_password: {{ encrypted.vstat_custom_password | indent(8, False) }} {% endif %} + +{% if item.vstat_ram is defined %} +vstat_ram: {{ item.vstat_ram }} +{% endif %} + +{% if item.vstat_cpu_cores is defined %} +vstat_cpu_cores: {{ item.vstat_cpu_cores }} +{% endif %} + +{% if item.vstat_allocate_size_gb is defined %} +vstat_allocate_size_gb: {{ item.vstat_allocate_size_gb }} +{% endif %} diff --git a/src/roles/common/templates/webfilter.j2 b/src/roles/common/templates/webfilter.j2 index 87593895fd..dc05fbd4e8 100644 --- a/src/roles/common/templates/webfilter.j2 +++ b/src/roles/common/templates/webfilter.j2 @@ -41,6 +41,18 @@ cert_name: {{ item.cert_name }} cert_name: {{ item.hostname }} {% endif %} +{% if item.web_http_proxy is defined %} +web_http_proxy: {{ item.web_http_proxy }} +{% endif %} + +{% if item.web_proxy_host is defined %} +web_proxy_host: {{ item.web_proxy_host }} +{% endif %} + +{% if item.web_proxy_port is defined %} +web_proxy_port: {{ item.web_proxy_port }} +{% endif %} + {% if item.run_incompass_operation is defined %} run_incompass_operation: {{ item.run_incompass_operation | lower }} {% endif %} diff --git a/src/roles/common/vars/main.yml b/src/roles/common/vars/main.yml index a45633b76e..885d2a7c97 100644 --- a/src/roles/common/vars/main.yml +++ b/src/roles/common/vars/main.yml @@ -44,6 +44,10 @@ deployment_files: is_list: yes required: no encrypted: no + - name: "nfss" + is_list: yes + required: no + encrypted: no - name: "dnss" is_list: yes required: no @@ -68,6 +72,10 @@ deployment_files: is_list: no required: no encrypted: no + - name: "nsgv_network_port_vlans" + is_list: yes + required: no + encrypted: no - name: "nsgv_access_ports" is_list: yes required: no diff --git a/src/roles/configure-nfs/tasks/main.yml b/src/roles/configure-nfs/tasks/main.yml new file mode 100644 index 0000000000..4cd6907bab --- /dev/null +++ b/src/roles/configure-nfs/tasks/main.yml @@ -0,0 +1,24 @@ +- block: + + - name: Create directory to export + file: + path: "{{ mount_directory_location }}/es-backup" + mode: 0777 + state: directory + + - name: Edit /etc/exports file to make an entry of directory + shell: "echo \"# ES storage\" >> /etc/exports | echo \"{{ mount_directory_location }}/es-backup/ 10.0.0.0/8(rw,sync,no_root_squash,no_subtree_check)\" >> /etc/exports" + + - name: Export the 
shared directories + command: exportfs -a + + - name: Check that exported content + command: exportfs -v + register: nfs_status + + - name: Check NFS Server Configuration + assert: + that: "'{{ nfs_status.stdout }}' == '{{ mount_directory_location }}/es-backup\t10.0.0.0/8(sync,wdelay,hide,no_subtree_check,sec=sys,rw,no_root_squash,no_all_squash)'" + msg: "Error in NFS Server Configuration in /etc/exports file(invalid entry of directory)" + + remote_user: "{{ nfs_custom_username | default(nfs_default_username) }}" diff --git a/src/roles/dns-deploy/tasks/main.yml b/src/roles/dns-deploy/tasks/main.yml index 7a1c81a7af..c8b81f19bc 100644 --- a/src/roles/dns-deploy/tasks/main.yml +++ b/src/roles/dns-deploy/tasks/main.yml @@ -27,9 +27,13 @@ name: '*' state: latest # noqa 403 when: yum_update + vars: + ansible_python_interpreter: /usr/bin/python2 - name: Install ntp client on {{ target_server }} yum: name=ntp state=latest # noqa 403 + vars: + ansible_python_interpreter: /usr/bin/python2 remote_user: "{{ dns_username }}" @@ -47,9 +51,13 @@ - name: Install bind on {{ target_server }} yum: name=bind state=latest # noqa 403 + vars: + ansible_python_interpreter: /usr/bin/python2 - name: Install bind-utils on {{ target_server }} yum: name=bind-utils state=latest # noqa 403 + vars: + ansible_python_interpreter: /usr/bin/python2 - name: Delete the /etc/named.conf file file: diff --git a/src/roles/netconf-manager-deploy/tasks/main.yml b/src/roles/netconf-manager-deploy/tasks/main.yml index 82ccb62a39..51ef65476a 100644 --- a/src/roles/netconf-manager-deploy/tasks/main.yml +++ b/src/roles/netconf-manager-deploy/tasks/main.yml @@ -17,6 +17,8 @@ yum: name: "/tmp/netconfmanager/{{ rpm_file_name }}" state: present + vars: + ansible_python_interpreter: /usr/bin/python2 - block: diff --git a/src/roles/netconf-manager-deploy/tasks/validate_netconf_connected.yml b/src/roles/netconf-manager-deploy/tasks/validate_netconf_connected.yml index 4acc31e78f..cf48ccd63b 100644 --- a/src/roles/netconf-manager-deploy/tasks/validate_netconf_connected.yml +++ b/src/roles/netconf-manager-deploy/tasks/validate_netconf_connected.yml @@ -16,7 +16,7 @@ no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" - name: Find the VSP on the VSD - connection: local + delegate_to: localhost nuage_vspk: auth: "{{ vspk_auth }}" type: VSP @@ -24,7 +24,7 @@ register: vsp - name: Fetch Netconfmanager with connected status - connection: local + delegate_to: localhost nuage_vspk: auth: "{{ vspk_auth }}" type: NetconfManager diff --git a/src/roles/nsgv-destroy/tasks/vcenter.yml b/src/roles/nsgv-destroy/tasks/vcenter.yml index ee888770f0..6c91e153d0 100644 --- a/src/roles/nsgv-destroy/tasks/vcenter.yml +++ b/src/roles/nsgv-destroy/tasks/vcenter.yml @@ -1,18 +1,27 @@ --- - name: Finding VM folder (ignoring errors) - connection: local + delegate_to: localhost vmware_guest_find: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" password: "{{ vcenter.password }}" - datacenter: "{{ vcenter.datacenter }}" name: "{{ vmname }}" validate_certs: no register: nsgv_vm_folder ignore_errors: on +- name: Check output message for unexpected errors + assert: + that: nsgv_vm_folder.msg is search('Unable to find folders for virtual machine') + fail_msg: "{{ nsgv_vm_folder.msg }}" + when: nsgv_vm_folder.msg is defined + +- name: Check for exception in NSGv VM Folder + fail: msg="Exception found {{ nsgv_vm_folder.exception }}" + when: nsgv_vm_folder.exception is defined + - name: Gathering info on VM (ignoring errors) - connection: local + 
delegate_to: localhost vmware_guest_info: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -29,7 +38,7 @@ - block: - name: Power off the NSGv VM - connection: local + delegate_to: localhost vmware_guest: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -42,7 +51,7 @@ when: nsgv_facts['instance']['hw_power_status'] == 'poweredOn' - name: Removing the NSGv VM - connection: local + delegate_to: localhost vmware_guest: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" diff --git a/src/roles/nsgv-postdeploy/tasks/main.yml b/src/roles/nsgv-postdeploy/tasks/main.yml index f6a21577b0..bb63dff9a3 100644 --- a/src/roles/nsgv-postdeploy/tasks/main.yml +++ b/src/roles/nsgv-postdeploy/tasks/main.yml @@ -50,7 +50,7 @@ no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" - name: Get Enterprise from VSD - connection: local + delegate_to: localhost nuage_vspk: auth: "{{ vspk_auth }}" type: Enterprise @@ -60,7 +60,7 @@ register: nuage_enterprise - name: Get NSGateway from VSD - connection: local + delegate_to: localhost nuage_vspk: auth: "{{ vspk_auth }}" type: NSGateway diff --git a/src/roles/nsgv-predeploy/tasks/aws.yml b/src/roles/nsgv-predeploy/tasks/aws.yml index 919ed324a4..97f4598d92 100644 --- a/src/roles/nsgv-predeploy/tasks/aws.yml +++ b/src/roles/nsgv-predeploy/tasks/aws.yml @@ -10,7 +10,7 @@ zfb_nsg: "{{ zfb_nsg }}" zfb_ports: "{{ zfb_ports }}" zfb_nsg_infra: "{{ zfb_nsg_infra }}" - zfb_vsc_infra: "{{ zfb_vsc_infra }}" + zfb_vsc_infra: "{{ zfb_vsc_infra | default({}) }}" delegate_to: localhost - name: Get current VSD API version @@ -29,7 +29,7 @@ no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" - name: Get Enterprise from VSD - connection: local + delegate_to: localhost nuage_vspk: auth: "{{ vspk_auth }}" type: Enterprise @@ -39,7 +39,7 @@ register: nuage_enterprise - name: Get NSG Gateway from VSD - connection: local + delegate_to: localhost nuage_vspk: auth: "{{ vspk_auth }}" type: NSGateway @@ -51,7 +51,7 @@ register: nuage_nsg - name: Find port1 of the NSG-AMI - connection: local + delegate_to: localhost nuage_vspk: auth: "{{ vspk_auth }}" type: NSPort @@ -63,7 +63,7 @@ register: nuage_nsg_port1 - name: Ensure 1:1 NAT is enabled on port1 of the NSG-AMI - connection: local + delegate_to: localhost nuage_vspk: auth: "{{ vspk_auth }}" type: NSPort @@ -73,7 +73,7 @@ nat_traversal: "ONE_TO_ONE_NAT" - name: Create Job to download auto-bootstrap info - connection: local + delegate_to: localhost nuage_vspk: auth: "{{ vspk_auth }}" type: Job diff --git a/src/roles/nsgv-predeploy/tasks/kvm.yml b/src/roles/nsgv-predeploy/tasks/kvm.yml index 6df819b25d..88f2b12afe 100644 --- a/src/roles/nsgv-predeploy/tasks/kvm.yml +++ b/src/roles/nsgv-predeploy/tasks/kvm.yml @@ -13,7 +13,7 @@ name: common tasks_from: retrieve-kvm-image - - name: Set locak variable with nsgv destination + - name: Set local variable with nsgv destination set_fact: nsgv_dest: "{{ images_path }}/{{ vm_name }}/{{ image_file_name }}" @@ -35,14 +35,14 @@ create_zfb_profile: nsgv_path: "{{ mktemp_output.path }}" fact_name: nsgv_already_configured - vsd_license_file: "{{ vsd_license_file }}" + vsd_license_file: "{{ vsd_license_file | default(None) }}" vsd_auth: "{{ vsd_auth }}" zfb_constants: "{{ zfb_constants }}" zfb_proxy_user: "{{ zfb_proxy_user }}" zfb_nsg: "{{ zfb_nsg }}" zfb_ports: "{{ zfb_ports }}" zfb_nsg_infra: "{{ zfb_nsg_infra }}" - zfb_vsc_infra: "{{ zfb_vsc_infra }}" + zfb_vsc_infra: "{{ zfb_vsc_infra | default({}) }}" delegate_to: localhost - name: Ensure 
NSGV has the correct configuration diff --git a/src/roles/nsgv-predeploy/tasks/vcenter.yml b/src/roles/nsgv-predeploy/tasks/vcenter.yml index 76978efed6..470c6d8842 100644 --- a/src/roles/nsgv-predeploy/tasks/vcenter.yml +++ b/src/roles/nsgv-predeploy/tasks/vcenter.yml @@ -25,14 +25,14 @@ create_zfb_profile: nsgv_path: "{{ mktemp_output.path }}" fact_name: nsgv_already_configured - vsd_license_file: "{{ vsd_license_file }}" + vsd_license_file: "{{ vsd_license_file | default(None) }}" vsd_auth: "{{ vsd_auth }}" zfb_constants: "{{ zfb_constants }}" zfb_proxy_user: "{{ zfb_proxy_user }}" zfb_nsg: "{{ zfb_nsg }}" zfb_ports: "{{ zfb_ports }}" zfb_nsg_infra: "{{ zfb_nsg_infra }}" - zfb_vsc_infra: "{{ zfb_vsc_infra }}" + zfb_vsc_infra: "{{ zfb_vsc_infra | default({}) }}" delegate_to: localhost - name: Ensure NSGV has the correct configuration diff --git a/src/roles/nsgv-predeploy/templates/nsgv.xml.j2 b/src/roles/nsgv-predeploy/templates/nsgv.xml.j2 index 897d5246ea..87da54e598 100644 --- a/src/roles/nsgv-predeploy/templates/nsgv.xml.j2 +++ b/src/roles/nsgv-predeploy/templates/nsgv.xml.j2 @@ -42,7 +42,7 @@ {% endif %} {% if bootstrap_method == 'zfb_external' %} - + {% endif %} diff --git a/src/roles/nuage-unzip/tasks/main.yml b/src/roles/nuage-unzip/tasks/main.yml index e903459197..abcd91869d 100644 --- a/src/roles/nuage-unzip/tasks/main.yml +++ b/src/roles/nuage-unzip/tasks/main.yml @@ -20,7 +20,7 @@ # Install Prerequisites ########################## - name: Pull facts of localhost - connection: local + delegate_to: localhost action: setup - block: @@ -28,13 +28,15 @@ command: cmd: rpm -q unzip warn: no - connection: local + delegate_to: localhost register: rpm_check ignore_errors: True - name: Install packages for RedHat OS family distros yum: name=unzip state=present - connection: local + vars: + ansible_python_interpreter: /usr/bin/python2 + delegate_to: localhost environment: http_proxy: "{{ yum_proxy | default('') }}" https_proxy: "{{ yum_proxy | default('') }}" @@ -43,7 +45,7 @@ - name: Install unzip on Debian OS family distribution apt: name=unzip state=present - connection: local + delegate_to: localhost when: ansible_os_family is match("Debian") diff --git a/src/roles/nuh-deploy/tasks/main.yml b/src/roles/nuh-deploy/tasks/main.yml index 75dae8e32b..7690d55cda 100644 --- a/src/roles/nuh-deploy/tasks/main.yml +++ b/src/roles/nuh-deploy/tasks/main.yml @@ -87,158 +87,57 @@ - name: Set the timezone command: timedatectl set-timezone {{ nuh_timezone }} - - name: Create and transfer certs - include_role: - name: common - tasks_from: vsd-generate-transfer-certificates - when: not skip_vsd_installed_check - vars: - certificate_password: "{{ nuh_default_password }}" - certificate_username: "{{ inventory_hostname }}" - commonName: "{{ inventory_hostname }}" - certificate_type: server - scp_user: "root" - scp_location: /opt/proxy/data/certs - additional_parameters: -d {{ inventory_hostname }} - - - block: - - - name: Get vsd node(s) information - import_role: - name: common - tasks_from: vsd-node-info.yml - vars: - vsd_hostname: "{{ vsd_fqdn }}" - - - block: - - - name: Create and transfer certs - include_role: - name: common - tasks_from: vsd-generate-transfer-certificates - when: not skip_vsd_installed_check - vars: - certificate_password: "{{ nuh_default_password }}" - certificate_username: "proxy" - commonName: "proxy" - certificate_type: server - scp_user: "root" - scp_location: /opt/proxy/data/certs - additional_parameters: -d {{ item.external_fqdn }} - with_items: - - "{{ external_interfaces 
}}" - - - name: Copy proxy certificates over to secondary nuh - synchronize: - src: "{{ item }}" - dest: "{{ item }}" - mode: pull - private_key: "/root/.ssh/proxypeer" - delegate_to: "{{ groups['nuhs'][1] }}" - run_once: true - when: groups['nuhs'] | length > 1 - with_items: - - "/opt/proxy/data/certs/proxy-CA.pem" - - "/opt/proxy/data/certs/proxyCert.pem" - - "/opt/proxy/data/certs/proxy-Key.pem" - - "/opt/proxy/data/certs/proxy.pem" - - - block: - - name: Create notification application user on vsd - command: /opt/ejabberd/bin/ejabberdctl register {{ notification_app1.username }} {{ vsd_fqdn }} {{ notification_app1.password }} - run_once: true - no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" - when: notification_app1 is defined - - - name: Create notification application user on vsd - command: /opt/ejabberd/bin/ejabberdctl register {{ notification_app2.username }} {{ vsd_fqdn }} {{ notification_app2.password }} - run_once: true - no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" - when: notification_app2 is defined - - delegate_to: "{{ vsd_hostname_list[0] }}" - remote_user: "{{ custom_username | default(vsd_default_username) }}" - become: "{{ 'no' if custom_username | default(vsd_default_username) == 'root' else 'yes' }}" - vars: - ansible_become_pass: "{{ custom_password | default(vsd_default_password) }}" - - - name: Copy the network config for NUH internal and external networks - template: src=config.yml.j2 backup=no dest=/opt/proxy/data/config.yml owner=root group=root mode=0640 - no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" - - - name: Check if NUH Version is 20.X - shell: "rpm -qa | grep 'nuage-proxy-utils-20'" - ignore_errors: true - register: nuhv20 - - - name: Update haproxy Service - lineinfile: - path: /opt/proxy/bin/ansible/roles/common/handlers/main.yml - regex: "shell: systemctl reload-or-restart haproxy" - line: " shell: systemctl restart haproxy" - when: nuhv20.rc == 0 - - - name: Run the network config script on NUH - command: ansible-playbook configure.yml - args: - chdir: /opt/proxy/bin/ansible - - - name: Reset haproxy Service - lineinfile: - path: /opt/proxy/bin/ansible/roles/common/handlers/main.yml - regex: "shell: systemctl restart haproxy" - line: " shell: systemctl reload-or-restart haproxy" - when: nuhv20.rc == 0 - - - name: Check if the eth1 is configured on Internal interface - shell: set -o pipefail && ip netns exec Internal ip addr list | grep eth1 - - - name: Check if the eth2 is configured on External interfaces - shell: set -o pipefail && ip netns exec {{ item.name }} ip addr list | grep eth2 - with_items: "{{ external_interfaces }}" - when: external_interfaces is defined and external_interfaces | length > 0 - - when: inventory_hostname == groups['nuhs'][0] + - name: Create NUH users and generate certificates + import_role: + name: nuh-deploy + tasks_from: nuh_create_users_certs.yml + when: + - internal_ip is defined + - not skip_vsd_installed_check + - name: Copy NUH certificates + import_role: + name: nuh-deploy + tasks_from: nuh_copy_certificates.yml when: - internal_ip is defined - not skip_vsd_installed_check - block: - - name: Copy the Custom configuration file if provided by user - copy: - dest: "/opt/proxy/data/config.yml" - src: "{{ custom_configuration_file_location }}" - mode: 0640 - owner: "root" - group: "root" - when: custom_configuration_file_location is defined - - - block: - - - name: Add stats-out proxy entries to NUH configuration - replace: - path: /opt/proxy/data/config.yml - regexp: "role: 
vsdconfig.*$" - replace: "role: vsdconfig, enabled: true, firewallports: ['{{ stats_out_proxy_ui_port }}', '{{ stats_out_proxy_api_port }}', '{{ stats_out_proxy_jms_port }}', '{{ stats_out_proxy_xmpp_port }}', '{{ stats_out_proxy_cert_port }}'], settings: {uiport: '{{ stats_out_proxy_ui_port }}', apiport: '{{ stats_out_proxy_api_port }}', xmppport: '{{ stats_out_proxy_xmpp_port }}', jmsport: '{{ stats_out_proxy_jms_port }}', geo: false, certport: '{{ stats_out_proxy_cert_port }}'}}" - - - name: Add stats-out proxy entries to NUH configuration - blockinfile: - path: /opt/proxy/data/config.yml - marker: "" - block: | - nsgstats: - {% for stats_only_vsds in groups['stats_only_vsds'] | list %} - - {{ hostvars[stats_only_vsds]['mgmt_ip'] }} - {% endfor %} + - name: Copy the Custom configuration file if provided by user + copy: + dest: "/opt/proxy/data/config.yml" + src: "{{ custom_configuration_file_location }}" + mode: 0640 + owner: "root" + group: "root" + when: custom_configuration_file_location is defined - when: stats_out_proxy | default('NONE') == internal_ip | default("not_set") + - block: - - name: Run the configuration script on NUH - command: ansible-playbook configure.yml - args: - chdir: /opt/proxy/bin/ansible + - name: Add stats-out proxy entries to NUH configuration + replace: + path: /opt/proxy/data/config.yml + regexp: "role: vsdconfig.*$" + replace: "role: vsdconfig, enabled: true, firewallports: ['{{ stats_out_proxy_ui_port }}', '{{ stats_out_proxy_api_port }}', '{{ stats_out_proxy_jms_port }}', '{{ stats_out_proxy_xmpp_port }}', '{{ stats_out_proxy_cert_port }}'], settings: {uiport: '{{ stats_out_proxy_ui_port }}', apiport: '{{ stats_out_proxy_api_port }}', xmppport: '{{ stats_out_proxy_xmpp_port }}', jmsport: '{{ stats_out_proxy_jms_port }}', geo: false, certport: '{{ stats_out_proxy_cert_port }}'}}" + + - name: Add stats-out proxy entries to NUH configuration + blockinfile: + path: /opt/proxy/data/config.yml + marker: "" + block: | + nsgstats: + {% for stats_only_vsds in groups['stats_only_vsds'] | list %} + - {{ hostvars[stats_only_vsds]['mgmt_ip'] }} + {% endfor %} + + when: stats_out_proxy | default('NONE') == internal_ip | default("not_set") + + - name: Run the configuration script on NUH + command: ansible-playbook configure.yml + args: + chdir: /opt/proxy/bin/ansible when: - custom_configuration_file_location is defined or stats_out_proxy | default('NONE') == internal_ip | default("not_set") diff --git a/src/roles/nuh-deploy/tasks/nuh_copy_certificates.yml b/src/roles/nuh-deploy/tasks/nuh_copy_certificates.yml new file mode 100644 index 0000000000..6e9dfbaa3d --- /dev/null +++ b/src/roles/nuh-deploy/tasks/nuh_copy_certificates.yml @@ -0,0 +1,106 @@ +- block: + + - name: Get vsd node(s) information + import_role: + name: common + tasks_from: vsd-node-info.yml + vars: + vsd_hostname: "{{ vsd_fqdn }}" + + + - block: + + - name: Create and transfer certs + include_role: + name: common + tasks_from: vsd-generate-transfer-certificates + when: not skip_vsd_installed_check + vars: + certificate_password: "{{ nuh_default_password }}" + certificate_username: "proxy" + commonName: "proxy" + certificate_type: server + scp_user: "root" + scp_location: /opt/proxy/data/certs + additional_parameters: -d {{ item.external_fqdn }} + with_items: + - "{{ external_interfaces }}" + + - name: Copy proxy certificates over to secondary nuh + synchronize: + src: "{{ item }}" + dest: "{{ item }}" + mode: pull + private_key: "/root/.ssh/proxypeer" + delegate_to: "{{ groups['nuhs'][1] }}" + 
run_once: true + when: groups['nuhs'] | length > 1 + with_items: + - "/opt/proxy/data/certs/proxy-CA.pem" + - "/opt/proxy/data/certs/proxyCert.pem" + - "/opt/proxy/data/certs/proxy-Key.pem" + - "/opt/proxy/data/certs/proxy.pem" + + - block: + - name: Create notification application user on vsd + command: /opt/ejabberd/bin/ejabberdctl register {{ notification_app1.username }} {{ vsd_fqdn }} {{ notification_app1.password }} + run_once: true + no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" + when: notification_app1 is defined + + - name: Create notification application user on vsd + command: /opt/ejabberd/bin/ejabberdctl register {{ notification_app2.username }} {{ vsd_fqdn }} {{ notification_app2.password }} + run_once: true + no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" + when: notification_app2 is defined + + delegate_to: "{{ vsd_hostname_list[0] }}" + remote_user: "{{ custom_username | default(vsd_default_username) }}" + become: "{{ 'no' if custom_username | default(vsd_default_username) == 'root' else 'yes' }}" + vars: + ansible_become_pass: "{{ custom_password | default(vsd_default_password) }}" + + - name: Copy the network config for NUH internal and external networks + template: src=config.yml.j2 backup=no dest=/opt/proxy/data/config.yml owner=root group=root mode=0640 + no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" + + - name: Check if NUH Version is 20.X + shell: "rpm -qa | grep 'nuage-proxy-utils-20'" + ignore_errors: true + register: nuhv20 + + - name: Update haproxy Service + lineinfile: + path: /opt/proxy/bin/ansible/roles/common/handlers/main.yml + regex: "shell: systemctl reload-or-restart haproxy" + line: " shell: systemctl restart haproxy" + when: nuhv20.rc == 0 + + - block: + + - name: Run the network config script on NUH + command: ansible-playbook configure.yml + args: + chdir: /opt/proxy/bin/ansible + + - name: Reset haproxy Service + lineinfile: + path: /opt/proxy/bin/ansible/roles/common/handlers/main.yml + regex: "shell: systemctl restart haproxy" + line: " shell: systemctl reload-or-restart haproxy" + when: nuhv20.rc == 0 + + - name: Check if the eth1 is configured on Internal interface + shell: set -o pipefail && ip netns exec Internal ip addr list | grep eth1 + + - name: Check if the eth2 is configured on External interfaces + shell: set -o pipefail && ip netns exec {{ item.name }} ip addr list | grep eth2 + with_items: "{{ external_interfaces }}" + when: external_interfaces is defined and external_interfaces | length > 0 + + when: + - internal_ip is defined + + when: inventory_hostname == groups['nuhs'][0] + + remote_user: "{{ nuh_default_username }}" diff --git a/src/roles/nuh-deploy/tasks/nuh_create_users_certs.yml b/src/roles/nuh-deploy/tasks/nuh_create_users_certs.yml new file mode 100644 index 0000000000..e1260b8e7e --- /dev/null +++ b/src/roles/nuh-deploy/tasks/nuh_create_users_certs.yml @@ -0,0 +1,162 @@ +- block: + + - name: Create and transfer certs + include_role: + name: common + tasks_from: vsd-generate-transfer-certificates + when: not skip_vsd_installed_check + vars: + certificate_password: "{{ nuh_default_password }}" + certificate_username: "{{ inventory_hostname }}" + commonName: "{{ inventory_hostname }}" + certificate_type: server + scp_user: "root" + scp_location: /opt/proxy/data/certs + additional_parameters: -d {{ inventory_hostname }} + + - name: Create and transfer certs for "nuh-pre" user + include_role: + name: common + tasks_from: vsd-generate-transfer-certificates + when: not skip_vsd_installed_check 
+ vars: + certificate_password: "{{ nuh_default_password }}" + certificate_username: "{{ inventory_hostname }}-pre" + commonName: "{{ inventory_hostname }}-pre" + certificate_type: server + scp_user: "root" + scp_location: /opt/proxy/data/certs + additional_parameters: "-d {{ inventory_hostname }}-pre" + + - name: Create and transfer certs for "nuh-post" user + include_role: + name: common + tasks_from: vsd-generate-transfer-certificates + when: not skip_vsd_installed_check + vars: + certificate_password: "{{ nuh_default_password }}" + certificate_username: "{{ inventory_hostname }}-post" + commonName: "{{ inventory_hostname }}-post" + certificate_type: server + scp_user: "root" + scp_location: /opt/proxy/data/certs + additional_parameters: "-d {{ inventory_hostname }}-post" + + - block: + + - name: Get current VSD API version + include_role: + name: common + tasks_from: get-current-vsd-api-version + + - name: Format VSPK auth for VSPK module + set_fact: + vspk_auth: + api_username: "{{ vsd_auth.username }}" + api_password: "{{ vsd_auth.password }}" + api_enterprise: "{{ vsd_auth.enterprise }}" + api_url: "{{ vsd_auth.api_url }}" + api_version: "{{ current_api_version }}" + no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" + + - name: Check if the user already exists (ignoring errors) + nuage_vspk: + auth: "{{ vspk_auth }}" + type: User + command: find + properties: + userName: "{{ item }}" + ignore_errors: yes + register: nuage_check_user + with_items: + - "{{ inventory_hostname }}" + - "{{ inventory_hostname }}-pre" + - "{{ inventory_hostname }}-post" + + - block: + + - name: Get CSP Enterprise ID + nuage_vspk: + auth: "{{ vspk_auth }}" + type: Enterprise + command: get_csp_enterprise + register: nuage_csp_enterprise + + - name: Create NUH users + delegate_to: localhost + nuage_vspk: + auth: "{{ vspk_auth }}" + type: User + parent_id: "{{ nuage_csp_enterprise.id }}" + parent_type: Enterprise + state: present + match_filter: "userName == '{{ item }}'" + properties: + email: "example@example.com" + first_name: "Sample" + last_name: "User" + password: "sample-password" + user_name: "{{ item }}" + register: nuh_certs_users + with_items: + - "{{ inventory_hostname }}" + - "{{ inventory_hostname }}-pre" + - "{{ inventory_hostname }}-post" + + - name: Save user ids + set_fact: + nuh_user: "{{ nuh_certs_users.results[0].id }}" + nuh_pre_user: "{{ nuh_certs_users.results[1].id }}" + nuh_post_user: "{{ nuh_certs_users.results[2].id }}" + + - name: Get Root Group + nuage_vspk: + auth: "{{ vspk_auth }}" + type: Group + parent_id: "{{ nuage_csp_enterprise.id }}" + parent_type: Enterprise + command: find + properties: + name: "Root Group" + register: nuh_root_group + + - name: Get BootstrapCA Group + nuage_vspk: + auth: "{{ vspk_auth }}" + type: Group + parent_id: "{{ nuage_csp_enterprise.id }}" + parent_type: Enterprise + command: find + properties: + name: "BootstrapCA Group" + register: nuh_bootstrap_group + + - name: Get VSPCA Group + nuage_vspk: + auth: "{{ vspk_auth }}" + type: Group + parent_id: "{{ nuage_csp_enterprise.id }}" + parent_type: Enterprise + command: find + properties: + name: "VSPCA Group" + register: nuh_vspca_group + + - name: Add NUH users in appropriate group + delegate_to: localhost + nuage_vspk: + auth: "{{ vspk_auth }}" + type: User + id: "{{ item.user }}" + parent_id: "{{ item.group }}" + parent_type: Group + state: present + with_items: + - { user: "{{ nuh_user }}", group: "{{ nuh_root_group.id }}" } + - { user: "{{ nuh_pre_user }}", group: "{{ 
nuh_bootstrap_group.id }}" } + - { user: "{{ nuh_post_user }}", group: "{{ nuh_vspca_group.id }}" } + + when: nuage_check_user is failed + + delegate_to: localhost + remote_user: "{{ nuh_default_username }}" diff --git a/src/roles/nuh-deploy/templates/config.yml.j2 b/src/roles/nuh-deploy/templates/config.yml.j2 index 36fc2868d3..b01b8d6057 100644 --- a/src/roles/nuh-deploy/templates/config.yml.j2 +++ b/src/roles/nuh-deploy/templates/config.yml.j2 @@ -1,3 +1,6 @@ +geo: + active: true + site: a network: namespaces: - name: Internal @@ -11,6 +14,7 @@ network: dns1: '' dns2: '' dns3: '' + domain: {{ dns_domain }} gateway: {{ internal_gateway }} ipaddr: {{ internal_ip }} {% if nuh_sa_or_ha is match('ha') %} @@ -27,7 +31,7 @@ network: vlan: 0 {% if external_interfaces is defined %} {% for interface in external_interfaces %} - - dhcp: no + - dhcp: false dns1: '' dns2: '' dns3: '' @@ -41,6 +45,8 @@ network: {% endfor %} {% endif %} nuagenetwork: Internal + ssh: + - Internal {% if vrrp is defined and vrrp %} vrrp: {% for vrrp_item in vrrp %} @@ -76,6 +82,12 @@ proxy: - {interface: eth2.{{ interface.vlan }}, role: fileserver, enabled: false, firewallports: [], settings: {port: "", ccert: ""}} {% endfor %} {% endif %} +{% if groups['portals'] | default([]) %} +sdwanportal: +{% for portal in groups['portals'] %} + - {{ hostvars[portal]['mgmt_ip'] }} +{% endfor %} +{% endif %} servers: {% if vsd_sa_or_ha is match('ha') %} vsd1: {{ hostvars[groups['vsd_ha_node1'][0]].mgmt_ip }} @@ -85,9 +97,29 @@ servers: {% if vsd_sa_or_ha is match('sa') %} vsd1: {{ hostvars[groups['vsd_sa_node'][0]]['mgmt_ip'] }} {% endif %} +{% if groups['vstats'] is defined and groups['vstats'] | length >= 1 %} + es1: {{ hostvars[groups['vstats'][0]]['mgmt_ip'] }} +{% if groups['vstats'] is defined and groups['vstats'] | length >= 2 %} + es2: {{ hostvars[groups['vstats'][1]]['mgmt_ip'] }} +{% if groups['vstats'] is defined and groups['vstats'] | length >= 3 %} + es3: {{ hostvars[groups['vstats'][2]]['mgmt_ip'] }} +{% endif %} +{% endif %} +{% endif %} na: {% if vsd_auth is defined %} - vsd: {sleepTimer: 2000, socketTimeout: 60000, password: {{ vsd_auth.password }}, useJmsForEvents: false, useXmppForEvents: true, username: {{ vsd_auth.username }}, enterprise: {{ vsd_auth.enterprise }}} + vsd: + sleepTimer: 2000 + socketTimeout: 60000 + password: {{ vsd_auth.password }} + useJmsForEvents: false +{% if notification_app1 is defined %} + useXmppForEvents: true +{% else %} + useXmppForEvents: false +{% endif %} + username: {{ vsd_auth.username }} + enterprise: {{ vsd_auth.enterprise }} {% endif %} notificationHandlers: [] {% if notification_app1 is defined %} diff --git a/src/roles/nuh-destroy/tasks/vcenter.yml b/src/roles/nuh-destroy/tasks/vcenter.yml index 2ed8aefaa0..4bbb40258f 100644 --- a/src/roles/nuh-destroy/tasks/vcenter.yml +++ b/src/roles/nuh-destroy/tasks/vcenter.yml @@ -1,5 +1,5 @@ - name: Finding VM folder - connection: local + delegate_to: localhost vmware_guest_find: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -19,13 +19,13 @@ when: nuh_vm_folder.exception is defined - name: Gathering info on VM - connection: local + delegate_to: localhost vmware_guest_info: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" password: "{{ vcenter.password }}" datacenter: "{{ vcenter.datacenter }}" - folder: "/{{ vcenter.datacenter }}{{ nuh_vm_folder['folders'][0] }}" + folder: "{{ nuh_vm_folder['folders'][0] }}" name: "{{ vmname }}" validate_certs: no register: nuh_vm_facts @@ -53,7 +53,7 @@ - 
debug: var=uuid - name: Turn off autostart - connection: local + delegate_to: localhost vmware_autostart: name: "{{ vm_name }}" uuid: "{{ uuid }}" @@ -64,28 +64,28 @@ validate_certs: no - name: Power off the NUH VM - connection: local + delegate_to: localhost vmware_guest: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" password: "{{ vcenter.password }}" validate_certs: no datacenter: "{{ vcenter.datacenter }}" - folder: "/{{ vcenter.datacenter }}{{ nuh_vm_folder['folders'][0] }}" + folder: "{{ nuh_vm_folder['folders'][0] }}" name: "{{ vm_name }}" state: "poweredoff" when: nuh_vm_facts['instance']['hw_power_status'] == 'poweredOn' - name: Removing the NUH VM - connection: local + delegate_to: localhost vmware_guest: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" password: "{{ vcenter.password }}" validate_certs: no datacenter: "{{ vcenter.datacenter }}" - folder: "/{{ vcenter.datacenter }}{{ nuh_vm_folder['folders'][0] }}" + folder: "{{ nuh_vm_folder['folders'][0] }}" name: "{{ vm_name }}" state: "absent" when: (not preserve_vm | default( False )) diff --git a/src/roles/nuh-postdeploy/tasks/main.yml b/src/roles/nuh-postdeploy/tasks/main.yml index 019f23dac4..80641e51ad 100644 --- a/src/roles/nuh-postdeploy/tasks/main.yml +++ b/src/roles/nuh-postdeploy/tasks/main.yml @@ -61,5 +61,6 @@ when: - internal_ip is defined - groups['vsds'] is defined + - not skip_vsd_installed_check remote_user: "{{ nuh_default_username }}" diff --git a/src/roles/nuh-predeploy/tasks/vcenter.yml b/src/roles/nuh-predeploy/tasks/vcenter.yml index 8385194bd2..071681d57b 100644 --- a/src/roles/nuh-predeploy/tasks/vcenter.yml +++ b/src/roles/nuh-predeploy/tasks/vcenter.yml @@ -41,6 +41,9 @@ {% else %} --machineOutput {% endif %} + {% if vcenter.vmfolder is defined %} + -vf={{ vcenter.vmfolder }} + {% endif %} -n={{ vm_name }} --net:"Management={{ mgmt_bridge }}" {% if internal_bridge is defined %} @@ -54,7 +57,7 @@ tasks_from: vcenter-deploy-image - name: Finding VM folder - connection: local + delegate_to: localhost vmware_guest_find: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -64,13 +67,13 @@ register: nuh_vm_folder - name: Gathering info on VM - connection: local + delegate_to: localhost vmware_guest_info: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" password: "{{ vcenter.password }}" datacenter: "{{ vcenter.datacenter }}" - folder: "/{{ vcenter.datacenter }}{{ nuh_vm_folder['folders'][0] }}" + folder: "{{ nuh_vm_folder['folders'][0] }}" name: "{{ vmname }}" validate_certs: no register: nuh_vm_facts @@ -78,24 +81,30 @@ - debug: var=nuh_vm_facts verbosity=1 - name: Add external network adapters to NUH - connection: local + delegate_to: localhost vmware_guest_network: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" password: "{{ vcenter.password }}" datacenter: "{{ vcenter.datacenter }}" - folder: "/{{ vcenter.datacenter }}{{ nuh_vm_folder['folders'][0] }}" + folder: "{{ nuh_vm_folder['folders'][0] }}" name: "{{ vmname }}" validate_certs: no gather_network_info: false - networks: "{{ external_interface_networks }}" + connected: true + start_connected: true + state: "present" + network_name: "{{ item.network_name }}" + switch: "{{ item.dvswitch_name | default(omit) }}" + label: "{{ item.label }}" register: network_info + loop: "{{ external_interface_networks }}" when: external_interface_networks is defined and external_interface_networks | length > 0 - debug: var=network_info verbosity=1 - name: Waiting until VMware 
tools becomes available - connection: local + delegate_to: localhost vmware_guest_tools_wait: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -130,7 +139,7 @@ vm_password: "{{ nuh_default_password }}" - name: Reboot NUH VM - connection: local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" diff --git a/src/roles/nuh-predeploy/templates/ifcfg-eth0.j2 b/src/roles/nuh-predeploy/templates/ifcfg-eth0.j2 index b93765caa1..83e2a78db1 100644 --- a/src/roles/nuh-predeploy/templates/ifcfg-eth0.j2 +++ b/src/roles/nuh-predeploy/templates/ifcfg-eth0.j2 @@ -13,7 +13,9 @@ NM_CONTROLLED="no" ONBOOT="yes" TYPE="Ethernet" BOOTPROTO="static" +{% if dns_server_list is defined and dns_server_list %} DNS1="{{dns_server_list[0]}}" +{% endif %} {% if dns_server_list[1] is defined %} DNS2="{{dns_server_list[1]}}" {% endif %} diff --git a/src/roles/proxy-deploy/tasks/main.yml b/src/roles/proxy-deploy/tasks/main.yml index e6fb9473bb..66b7b99a3a 100644 --- a/src/roles/proxy-deploy/tasks/main.yml +++ b/src/roles/proxy-deploy/tasks/main.yml @@ -44,11 +44,15 @@ name: '*' state: latest # noqa 403 when: yum_update + vars: + ansible_python_interpreter: /usr/bin/python2 - name: Install requried yum packages yum: name: [haproxy, net-tools, libguestfs-tools] state: latest # noqa 403 + vars: + ansible_python_interpreter: /usr/bin/python2 - name: Delete the /etc/haproxy/haproxy.cfg file file: diff --git a/src/roles/proxy-predeploy/tasks/kvm.yml b/src/roles/proxy-predeploy/tasks/kvm.yml index abaad526c5..ff7949f6e9 100644 --- a/src/roles/proxy-predeploy/tasks/kvm.yml +++ b/src/roles/proxy-predeploy/tasks/kvm.yml @@ -24,6 +24,8 @@ - bridge-utils - libvirt-python when: ansible_os_family == "RedHat" + vars: + ansible_python_interpreter: /usr/bin/python2 - name: If Debian, install packages for Debian OS family distros apt: name={{ item }} state=present diff --git a/src/roles/reset-build/files/hosts b/src/roles/reset-build/files/hosts index 68a598a395..732a45fb7b 100644 --- a/src/roles/reset-build/files/hosts +++ b/src/roles/reset-build/files/hosts @@ -2,4 +2,4 @@ # This file is automatically generated by build.yml. # Changes made to this file may be overwritten. 
# -localhost ansible_connection=local ansible_python_interpreter=python +localhost ansible_connection=local diff --git a/src/roles/sdwan-portal-deploy/tasks/docker.yml b/src/roles/sdwan-portal-deploy/tasks/docker.yml index fb715e4845..31857ec7ce 100644 --- a/src/roles/sdwan-portal-deploy/tasks/docker.yml +++ b/src/roles/sdwan-portal-deploy/tasks/docker.yml @@ -22,6 +22,8 @@ - wget state: latest # noqa 403 when: rpm_check.rc == 1 + vars: + ansible_python_interpreter: /usr/bin/python2 - name: Add Docker repository yum_repository: @@ -36,6 +38,8 @@ yum: name: "3:docker-ce-18.09.0-3.el7.x86_64" state: present + vars: + ansible_python_interpreter: /usr/bin/python2 - name: Enable and start Docker service: diff --git a/src/roles/sdwan-portal-deploy/tasks/sdwan_portal_license.yml b/src/roles/sdwan-portal-deploy/tasks/sdwan_portal_license.yml index fb4119654a..3a21094b26 100644 --- a/src/roles/sdwan-portal-deploy/tasks/sdwan_portal_license.yml +++ b/src/roles/sdwan-portal-deploy/tasks/sdwan_portal_license.yml @@ -2,5 +2,5 @@ - name: Copy SD-WAN Portal License file to the Docker hosts copy: src={{ portal_license_file }} - dest={{ portal_license_path | default("/opt/vnsportal/tomcat-instance1/") }} + dest={{ portal_license_path | default("/opt/vnsportal/tomcat-instance1/vns-portal.license") }} mode=0644 diff --git a/src/roles/sdwan-portal-deploy/templates/config.j2 b/src/roles/sdwan-portal-deploy/templates/config.j2 index dd83bcf6cd..15f73350c0 100644 --- a/src/roles/sdwan-portal-deploy/templates/config.j2 +++ b/src/roles/sdwan-portal-deploy/templates/config.j2 @@ -25,7 +25,7 @@ proxy.enabled: false proxy.port: '' proxy.url: 0.0.0.0 smtp.host: {{ smtp_fqdn | default('None') }} -smtp.password: '{{ smtp_password }}' +smtp.password: '{{ smtp_password | default('None') }}' smtp.port: '{{ smtp_port | default('25') }}' smtp.secureConnection: {{ '1' if smtp_secure is defined and smtp_secure else '0' }} smtp.user: '{{ smtp_user | default('None') }}' diff --git a/src/roles/sdwan-portal-deploy/templates/environment.properties.j2 b/src/roles/sdwan-portal-deploy/templates/environment.properties.j2 index ec008dc2d3..9652f11634 100644 --- a/src/roles/sdwan-portal-deploy/templates/environment.properties.j2 +++ b/src/roles/sdwan-portal-deploy/templates/environment.properties.j2 @@ -16,9 +16,19 @@ password_reset_sender_address={{ password_reset_email }} new_account_sender_address={{ new_account_email }} forgot_password_sender_address={{ forgot_password_email }} servlet_session_secure={{ sdwan_portal_secure }} +{% if smtp_fqdn is defined %} smtp_host={{ smtp_fqdn }} +{% endif %} +{% if smtp_port is defined %} smtp_port={{ smtp_port }} +{% endif %} +{% if smtp_secure is defined %} smtp_secureConnection={{ smtp_secure }} +{% endif %} +{% if smtp_user is defined %} smtp_user={{ smtp_user }} +{% endif %} +{% if smtp_password is defined %} smtp_password={{ smtp_password }} +{% endif %} OVERWRITE_ALL=1 diff --git a/src/roles/sdwan-portal-predeploy/tasks/kvm.yml b/src/roles/sdwan-portal-predeploy/tasks/kvm.yml index ab1779d415..d813dbca5a 100644 --- a/src/roles/sdwan-portal-predeploy/tasks/kvm.yml +++ b/src/roles/sdwan-portal-predeploy/tasks/kvm.yml @@ -142,7 +142,7 @@ - name: Increase the virtual vm size shell: "qemu-img resize {{ guestfish_dest }} +{{ portal_allocate_size_gb | int - 8 }}G" - when: portal_allocate_size_gb > 0 + when: portal_allocate_size_gb | int > 0 - name: Define and start Portal VM include_role: @@ -171,7 +171,7 @@ delegate_to: "{{ inventory_hostname }}" remote_user: "{{ portal_default_username }}" - 
when: portal_allocate_size_gb > 0 + when: portal_allocate_size_gb | int > 0 when: not node_running delegate_to: "{{ target_server }}" diff --git a/src/roles/setup-health-monitoring/tasks/main.yml b/src/roles/setup-health-monitoring/tasks/main.yml index 42256a0902..d1e34999c4 100644 --- a/src/roles/setup-health-monitoring/tasks/main.yml +++ b/src/roles/setup-health-monitoring/tasks/main.yml @@ -13,6 +13,8 @@ retries: 10 until: result.rc == 0 delay: 5 + vars: + ansible_python_interpreter: /usr/bin/python2 - name: Enable server access in Zabbix config lineinfile: diff --git a/src/roles/stcv-postdeploy/tasks/vcenter.yml b/src/roles/stcv-postdeploy/tasks/vcenter.yml index b7e1f07271..521338763a 100644 --- a/src/roles/stcv-postdeploy/tasks/vcenter.yml +++ b/src/roles/stcv-postdeploy/tasks/vcenter.yml @@ -1,6 +1,6 @@ --- - name: Finding VM folder (ignoring errors) - connection: local + delegate_to: localhost vmware_guest_find: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -12,7 +12,7 @@ ignore_errors: on - name: Gathering info on VM (ignoring errors) - connection: local + delegate_to: localhost vmware_guest_info: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" diff --git a/src/roles/stcv-predeploy/tasks/kvm.yml b/src/roles/stcv-predeploy/tasks/kvm.yml index 45751f94f8..a1e7019768 100644 --- a/src/roles/stcv-predeploy/tasks/kvm.yml +++ b/src/roles/stcv-predeploy/tasks/kvm.yml @@ -81,7 +81,7 @@ uri=qemu:///system - name: Wait for VM ssh to be ready - connection: local + delegate_to: localhost wait_for: port: "22" host: "{{ mgmt_ip }}" diff --git a/src/roles/stcv-predeploy/tasks/vcenter.yml b/src/roles/stcv-predeploy/tasks/vcenter.yml index 9fe1238eb6..451b018cef 100644 --- a/src/roles/stcv-predeploy/tasks/vcenter.yml +++ b/src/roles/stcv-predeploy/tasks/vcenter.yml @@ -1,6 +1,6 @@ --- - name: Finding VM folder (ignoring errors) - connection: local + delegate_to: localhost vmware_guest_find: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -12,7 +12,7 @@ ignore_errors: on - name: Gathering info on VM (ignoring errors) - connection: local + delegate_to: localhost vmware_guest_info: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -49,7 +49,7 @@ when: vcenter.resource_pool != 'NONE' - name: Deploy STCv image on vCenter (Can take several minutes) - connection: local + delegate_to: localhost command: > {{ vcenter_global.ovftool }} --acceptAllEulas @@ -74,7 +74,7 @@ changed_when: True - name: Waiting until VMware tools becomes available - connection: local + delegate_to: localhost vmware_guest_tools_wait: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" diff --git a/src/roles/vcin-create-dvs/tasks/main.yml b/src/roles/vcin-create-dvs/tasks/main.yml index 7c3fcb513c..94b1688583 100644 --- a/src/roles/vcin-create-dvs/tasks/main.yml +++ b/src/roles/vcin-create-dvs/tasks/main.yml @@ -9,6 +9,8 @@ yum: name: "{{ rc_cloudmgmt_rpm.files[0].path }}" state: present + vars: + ansible_python_interpreter: /usr/bin/python2 - name: Create dvSwitch if specified command: >- diff --git a/src/roles/vcin-health/tasks/report_start.yml b/src/roles/vcin-health/tasks/report_start.yml index 33421be6f4..a5dfd82c5b 100644 --- a/src/roles/vcin-health/tasks/report_start.yml +++ b/src/roles/vcin-health/tasks/report_start.yml @@ -1,5 +1,5 @@ - name: Pull facts of localhost - connection: local + delegate_to: localhost action: setup - name: Create report folder on ansible deployment host diff --git 
a/src/roles/vnsutil-deploy/tasks/main.yml b/src/roles/vnsutil-deploy/tasks/main.yml index a7e659159a..229f6d4bc9 100644 --- a/src/roles/vnsutil-deploy/tasks/main.yml +++ b/src/roles/vnsutil-deploy/tasks/main.yml @@ -32,6 +32,8 @@ - name: Install NTP package on the vnsutil vm yum: name=ntp state=present + vars: + ansible_python_interpreter: /usr/bin/python2 remote_user: "{{ vnsutil_default_username }}" @@ -62,6 +64,8 @@ - name: Install dhcp package yum: name=dhcp state=present + vars: + ansible_python_interpreter: /usr/bin/python2 - name: Write dhcp config template: src=dhcpd.conf.j2 backup=no dest=/etc/dhcp/dhcpd.conf @@ -76,6 +80,8 @@ - name: Install dnsmasq package yum: name=dnsmasq state=present + vars: + ansible_python_interpreter: /usr/bin/python2 - name: Copy dnsmasq config template: src=dnsmasq.conf.j2 backup=no dest=/etc/dnsmasq.conf diff --git a/src/roles/vnsutil-deploy/tasks/openstack.yml b/src/roles/vnsutil-deploy/tasks/openstack.yml index 6c6e1edd4f..48fc46faa0 100644 --- a/src/roles/vnsutil-deploy/tasks/openstack.yml +++ b/src/roles/vnsutil-deploy/tasks/openstack.yml @@ -3,7 +3,7 @@ vm_name: "{{ vmname }}" - name: Get VNSUTIL details from OpenStack - os_server_facts: + os_server_info: auth: "{{ openstack_auth }}" server: "{{ vm_name }}" @@ -16,7 +16,7 @@ - name: Set vnsutil mgmt ip set_fact: - vnsutil_mgmt_ip: "{{ vnsutil_ip['ansible_facts']['openstack_servers'][0]['addresses'][openstack_mgmt_network][0]['addr'] }}" + vnsutil_mgmt_ip: "{{ vnsutil_ip['openstack_servers'][0]['addresses'][openstack_mgmt_network][0]['addr'] }}" - name: Wait for VNSUTIL ssh to be ready include_role: diff --git a/src/roles/vnsutil-destroy/tasks/vcenter.yml b/src/roles/vnsutil-destroy/tasks/vcenter.yml index a27f57c5dd..c3626bebe9 100644 --- a/src/roles/vnsutil-destroy/tasks/vcenter.yml +++ b/src/roles/vnsutil-destroy/tasks/vcenter.yml @@ -1,18 +1,27 @@ --- - name: Finding VM folder (ignoring errors) - connection: local + delegate_to: localhost vmware_guest_find: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" password: "{{ vcenter.password }}" - datacenter: "{{ vcenter.datacenter }}" name: "{{ vmname }}" validate_certs: no register: vnsutil_vm_folder ignore_errors: on +- name: Check output message for unexpected errors + assert: + that: vnsutil_vm_folder.msg is search('Unable to find folders for virtual machine') + fail_msg: "{{ vnsutil_vm_folder.msg }}" + when: vnsutil_vm_folder.msg is defined + +- name: Check for exception in VNSUTIL VM Folder + fail: msg="Exception found {{ vnsutil_vm_folder.exception }}" + when: vnsutil_vm_folder.exception is defined + - name: Gathering info on VM (ignoring errors) - connection: local + delegate_to: localhost vmware_guest_info: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -29,7 +38,7 @@ - block: - name: Power off the VNS Util VM - connection: local + delegate_to: localhost vmware_guest: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -42,7 +51,7 @@ when: vnsutil_vm_facts['instance']['hw_power_status'] == 'poweredOn' - name: Removing the VNS Util VM - connection: local + delegate_to: localhost vmware_guest: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" diff --git a/src/roles/vnsutil-predeploy/tasks/vcenter.yml b/src/roles/vnsutil-predeploy/tasks/vcenter.yml index 0836181622..656443cbf2 100644 --- a/src/roles/vnsutil-predeploy/tasks/vcenter.yml +++ b/src/roles/vnsutil-predeploy/tasks/vcenter.yml @@ -31,7 +31,7 @@ tasks_from: vcenter-deploy-image - name: Waiting until VMware 
tools becomes available - connection: local + delegate_to: localhost vmware_guest_tools_wait: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -76,7 +76,7 @@ vm_password: "{{ vnsutil_default_password }}" - name: Disable NetworkManager - connection: local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -91,7 +91,7 @@ vm_shell_args: " disable NetworkManager" - name: Reboot VNSUTIL VM - connection: local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" diff --git a/src/roles/vrs-deploy/tasks/main.yml b/src/roles/vrs-deploy/tasks/main.yml index d2d5d9456f..1587954bc3 100644 --- a/src/roles/vrs-deploy/tasks/main.yml +++ b/src/roles/vrs-deploy/tasks/main.yml @@ -138,12 +138,16 @@ - name: Install Nuage OpenVSwitch packages on RedHat OS family distros yum: name={{ temp_dir }}/{{ inventory_hostname }}/{{ item }} state=present + vars: + ansible_python_interpreter: /usr/bin/python2 with_items: - "{{ vrs_package_file_name_list }}" when: ansible_os_family == "RedHat" - name: Install Selinux package for RHEL7 and Centos7 yum: name={{ temp_dir }}/{{ inventory_hostname }}/{{ item }} state=present + vars: + ansible_python_interpreter: /usr/bin/python2 with_items: - "{{ selinux_package_file_name_list }}" when: @@ -159,6 +163,8 @@ - name: Install Nuage DKMS packages on RedHat OS family distros yum: name={{ temp_dir }}/{{ inventory_hostname }}/{{ item }} state=present + vars: + ansible_python_interpreter: /usr/bin/python2 with_items: - "{{ dkms_package_file_name_list }}" when: ansible_os_family == "RedHat" diff --git a/src/roles/vrs-destroy/tasks/main.yml b/src/roles/vrs-destroy/tasks/main.yml index ae56fac56a..994a9de6d0 100644 --- a/src/roles/vrs-destroy/tasks/main.yml +++ b/src/roles/vrs-destroy/tasks/main.yml @@ -10,6 +10,8 @@ - selinux-policy-nuage state: absent when: ansible_os_family == "RedHat" + vars: + ansible_python_interpreter: /usr/bin/python2 - name: Uninstall Nuage OpenVSwitch package on Debian OS family distros apt: diff --git a/src/roles/vrs-predeploy/tasks/main.yml b/src/roles/vrs-predeploy/tasks/main.yml index cf8df9db3d..a3f8457d4a 100644 --- a/src/roles/vrs-predeploy/tasks/main.yml +++ b/src/roles/vrs-predeploy/tasks/main.yml @@ -24,6 +24,8 @@ name: "{{ redhat_family_ovs_packages }}" state: absent when: ansible_os_family == "RedHat" + vars: + ansible_python_interpreter: /usr/bin/python2 - name: Remove OVS packages on Debian machines if present apt: @@ -56,6 +58,8 @@ when: - ansible_os_family == "RedHat" - yum_update | default(False) + vars: + ansible_python_interpreter: /usr/bin/python2 - name: Install VRS prerequisite packages on RedHat OS family distros yum: @@ -70,6 +74,8 @@ when: - ansible_os_family == "RedHat" - ansible_distribution_major_version == '6' + vars: + ansible_python_interpreter: /usr/bin/python2 - name: Install VRS prerequisite packages on RedHat OS family distros yum: @@ -83,6 +89,8 @@ when: - ansible_os_family == "RedHat" - ansible_distribution_major_version == '7' + vars: + ansible_python_interpreter: /usr/bin/python2 - name: Install DKMS prerequisite packages on RedHat OS family distros yum: @@ -93,6 +101,8 @@ when: - ansible_os_family == "RedHat" - dkms_install + vars: + ansible_python_interpreter: /usr/bin/python2 - name: Upgrade all packages on Debian OS family distros apt: diff --git a/src/roles/vsc-deploy/tasks/nuagex.yml b/src/roles/vsc-deploy/tasks/nuagex.yml index 215ad8a72a..784e6d25e0 100644 --- 
a/src/roles/vsc-deploy/tasks/nuagex.yml +++ b/src/roles/vsc-deploy/tasks/nuagex.yml @@ -7,13 +7,13 @@ password: "{{ vsc_password | default(vsc_default_password) }}" - name: Creating VNS config file - connection: local + delegate_to: localhost template: src: config.cfg.j2 dest: "/tmp/ansible-nuagex-config-{{ inventory_hostname }}.cfg" - name: Copy VSC config file to the VSC - connection: local + delegate_to: localhost command: >- sshpass -p'{{ vsc_password | default(vsc_default_password) }}' scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null /tmp/ansible-nuagex-config-{{ inventory_hostname }}.cfg {{ vsc_username | default(vsc_default_username) }}@{{ mgmt_ip }}: diff --git a/src/roles/vsc-deploy/tasks/setup_vsc_config.yml b/src/roles/vsc-deploy/tasks/setup_vsc_config.yml index 886a4004c7..f4b9af68cd 100644 --- a/src/roles/vsc-deploy/tasks/setup_vsc_config.yml +++ b/src/roles/vsc-deploy/tasks/setup_vsc_config.yml @@ -75,7 +75,7 @@ when: ssh_proxy_configuration is defined - name: Copy VSC config file to the VSC - connection: local + delegate_to: localhost shell: >- sshpass -p'{{ vsc_password | default(vsc_default_password) }}' scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null {{ proxy_conf | default('') }} {{ tmp_dir_name }}/{{ config_file_name }} {{ vsc_username | default(vsc_default_username) }}@{{ mgmt_ip }}: @@ -85,7 +85,7 @@ - not override_vsc_config - name: Copy VSC config file to the VSC - connection: local + delegate_to: localhost shell: >- sshpass -p'{{ vsc_password | default(vsc_default_password) }}' scp -6 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null {{ proxy_conf | default('') }} {{ tmp_dir_name }}/{{ config_file_name }} @@ -98,7 +98,7 @@ - name: Override using user provided config block: - name: Copy user provided config file for VSC(IPv4) - connection: local + delegate_to: localhost shell: >- sshpass -p'{{ vsc_password | default(vsc_default_password) }}' scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null {{ proxy_conf | default('') }} {{ item }} {{ vsc_username | default(vsc_default_username) }}@{{ mgmt_ip }}: @@ -106,7 +106,7 @@ when: not mgmt_ip | ipv6 - name: Copy user provided config file for VSC(IPv6) - connection: local + delegate_to: localhost shell: >- sshpass -p'{{ vsc_password | default(vsc_default_password) }}' scp -6 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null {{ proxy_conf | default('') }} {{ item }} @@ -131,7 +131,7 @@ block: - name: Copy VSC BOF file to the VSC - connection: local + delegate_to: localhost shell: >- sshpass -p'{{ vsc_password | default(vsc_default_password) }}' scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null {{ proxy_conf | default('') }} {{ tmp_dir_name }}/{{ bof_file_name }} {{ vsc_username | default(vsc_default_username) }}@{{ mgmt_ip }}: @@ -139,7 +139,7 @@ when: not mgmt_ip | ipv6 - name: Copy VSC BOF file to the VSC - connection: local + delegate_to: localhost shell: >- sshpass -p'{{ vsc_password | default(vsc_default_password) }}' scp -6 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null {{ proxy_conf | default('') }} {{ tmp_dir_name }}/{{ bof_file_name }} @@ -231,7 +231,7 @@ - name: Verify reboot happened successfully assert: - that: "uptime_after_reboot < uptime_before_reboot" + that: "uptime_after_reboot.stdout < uptime_before_reboot.stdout" fail_msg: "System uptime after reboot step suggests there were issues during reboot. Check log for errors." 
when: reboot_vsc | default(true) diff --git a/src/roles/vsc-destroy/tasks/vcenter.yml b/src/roles/vsc-destroy/tasks/vcenter.yml index c67ccf4e81..226157eb5b 100644 --- a/src/roles/vsc-destroy/tasks/vcenter.yml +++ b/src/roles/vsc-destroy/tasks/vcenter.yml @@ -1,24 +1,33 @@ --- - name: Finding VM folder (ignoring errors) - connection: local + delegate_to: localhost vmware_guest_find: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" password: "{{ vcenter.password }}" - datacenter: "{{ vcenter.datacenter }}" name: "{{ vmname }}" validate_certs: no register: vsc_vm_folder ignore_errors: on +- name: Check output message for unexpected errors + assert: + that: vsc_vm_folder.msg is search('Unable to find folders for virtual machine') + fail_msg: "{{ vsc_vm_folder.msg }}" + when: vsc_vm_folder.msg is defined + +- name: Check for exception in VSC VM Folder + fail: msg="Exception found {{ vsc_vm_folder.exception }}" + when: vsc_vm_folder.exception is defined + - name: Gathering info on VM (ignoring errors) - connection: local + delegate_to: localhost vmware_guest_info: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" password: "{{ vcenter.password }}" datacenter: "{{ vcenter.datacenter }}" - folder: "/{{ vcenter.datacenter }}{{ vsc_vm_folder['folders'][0] }}" + folder: "{{ vsc_vm_folder['folders'][0] }}" name: "{{ vmname }}" validate_certs: no register: vsc_vm_facts @@ -45,7 +54,7 @@ with_items: "{{ vm_info.virtual_machines }}" - name: Turn off autostart - connection: local + delegate_to: localhost vmware_autostart: name: "{{ vmname }}" uuid: "{{ uuid }}" @@ -56,28 +65,28 @@ state: disable - name: Power off the VSC VM - connection: local + delegate_to: localhost vmware_guest: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" password: "{{ vcenter.password }}" validate_certs: no datacenter: "{{ vcenter.datacenter }}" - folder: "/{{ vcenter.datacenter }}{{ vsc_vm_folder['folders'][0] }}" + folder: "{{ vsc_vm_folder['folders'][0] }}" name: "{{ vmname }}" state: "poweredoff" when: vsc_vm_facts['instance']['hw_power_status'] == 'poweredOn' - name: Removing the VSC VM - connection: local + delegate_to: localhost vmware_guest: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" password: "{{ vcenter.password }}" validate_certs: no datacenter: "{{ vcenter.datacenter }}" - folder: "/{{ vcenter.datacenter }}{{ vsc_vm_folder['folders'][0] }}" + folder: "{{ vsc_vm_folder['folders'][0] }}" name: "{{ vmname }}" state: "absent" diff --git a/src/roles/vsc-health/tasks/main.yml b/src/roles/vsc-health/tasks/main.yml index cd7822be0f..f304b9a2fd 100644 --- a/src/roles/vsc-health/tasks/main.yml +++ b/src/roles/vsc-health/tasks/main.yml @@ -54,9 +54,6 @@ provider: "{{ vsc_creds }}" register: show_router_interfaces - - name: Print output of 'show router interface' when verbosity >= 1 - debug: var=show_router_interfaces verbosity=1 - - name: Write VSC router interface table separator to report file nuage_append: filename="{{ report_path }}" text="-----VSC Router Interface Table-----\n" diff --git a/src/roles/vsc-postdeploy/tasks/main.yml b/src/roles/vsc-postdeploy/tasks/main.yml index 167c1aa6d4..bd892ec83c 100644 --- a/src/roles/vsc-postdeploy/tasks/main.yml +++ b/src/roles/vsc-postdeploy/tasks/main.yml @@ -8,7 +8,7 @@ - name: Check VSC Health after deployment import_role: name="vsc-health" - connection: local + delegate_to: localhost vars: report_filename: vsc-postdeploy-health delegate_to: localhost diff --git 
a/src/roles/vsc-predeploy/tasks/vcenter.yml b/src/roles/vsc-predeploy/tasks/vcenter.yml index eebb275d83..9dad5d3498 100644 --- a/src/roles/vsc-predeploy/tasks/vcenter.yml +++ b/src/roles/vsc-predeploy/tasks/vcenter.yml @@ -98,7 +98,7 @@ tasks_from: vcenter-deploy-image - name: Finding VM folder - connection: local + delegate_to: localhost vmware_guest_find: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -108,13 +108,13 @@ register: vsc_vm_folder - name: Gathering info on VM - connection: local + delegate_to: localhost vmware_guest_info: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" password: "{{ vcenter.password }}" datacenter: "{{ vcenter.datacenter }}" - folder: "/{{ vcenter.datacenter }}{{ vsc_vm_folder['folders'][0] }}" + folder: "{{ vsc_vm_folder['folders'][0] }}" name: "{{ vmname }}" validate_certs: no register: vsc_vm_facts @@ -128,7 +128,7 @@ - debug: var=uuid - name: Turn on autostart - connection: local + delegate_to: localhost vmware_autostart: name: "{{ vm_name }}" uuid: "{{ uuid }}" @@ -144,7 +144,7 @@ msg: "VSC VM {{ vmname }} is not created" - name: Wait for VSC ssh to be ready - connection: local + delegate_to: localhost wait_for: port: "22" host: "{{ mgmt_ip }}" diff --git a/src/roles/vsc-upgrade-deploy/tasks/main.yml b/src/roles/vsc-upgrade-deploy/tasks/main.yml index bd7da66b2a..ba913dfa72 100644 --- a/src/roles/vsc-upgrade-deploy/tasks/main.yml +++ b/src/roles/vsc-upgrade-deploy/tasks/main.yml @@ -110,7 +110,7 @@ - name: Verify reboot happened successfully assert: - that: "uptime_after_reboot < uptime_before_reboot" + that: "uptime_after_reboot.stdout < uptime_before_reboot.stdout" fail_msg: "System uptime after reboot step suggests there were issues during reboot. Check log for errors." 
when: target_server_type is match('kvm') or target_server_type is match('vcenter') or target_server_type is match('openstack') or target_server_type is match("none") diff --git a/src/roles/vsd-cluster-failover/tasks/main.yml b/src/roles/vsd-cluster-failover/tasks/main.yml index 71fefe4805..333ac60c23 100644 --- a/src/roles/vsd-cluster-failover/tasks/main.yml +++ b/src/roles/vsd-cluster-failover/tasks/main.yml @@ -14,7 +14,7 @@ command: ping -c1 {{ item }} delegate_to: localhost register: ping_result - ignore_errors: vsd_force_cluster_failover | default(no) + ignore_errors: "{{ vsd_force_cluster_failover | default(no) }}" with_items: "{{ groups['primary_vsds'] }}" - name: Check health of active and standby vsd cluster @@ -24,7 +24,7 @@ vars: no_report: "True" when: not skip_health_check | default(false) - ignore_errors: vsd_force_cluster_failover | default(no) + ignore_errors: "{{ vsd_force_cluster_failover | default(no) }}" run_once: false - name: Set the reachablility flag for primary vsds @@ -47,7 +47,7 @@ - name: Deactivate replication on primary VSD cluster shell: /opt/vsd/bin/vsd-deactivate-replication-master-cluster.sh -f # noqa 305 - ignore_errors: vsd_force_cluster_failover | default(no) + ignore_errors: "{{ vsd_force_cluster_failover | default(no) }}" delegate_to: "{{ item }}" with_items: "{{ groups['primary_vsds'] }}" when: @@ -56,7 +56,7 @@ - name: Deactivate replication on primary VSD cluster command: /opt/vsd/bin/vsd-deactivate-replication-master-cluster.sh - ignore_errors: vsd_force_cluster_failover | default(no) + ignore_errors: "{{ vsd_force_cluster_failover | default(no) }}" delegate_to: "{{ item }}" with_items: "{{ groups['primary_vsds'] }}" when: @@ -210,7 +210,7 @@ vars: no_report: "True" when: is_primary_reachable | default(false) - ignore_errors: vsd_force_cluster_failover | default(no) + ignore_errors: "{{ vsd_force_cluster_failover | default(no) }}" delegate_to: "{{ groups['primary_vsds'][0] }}" @@ -222,7 +222,7 @@ vars: no_report: "True" when: is_primary_reachable | default(false) - ignore_errors: vsd_force_cluster_failover | default(no) + ignore_errors: "{{ vsd_force_cluster_failover | default(no) }}" loop_control: loop_var: vsd_ha_node with_items: diff --git a/src/roles/vsd-dbbackup/tasks/main.yml b/src/roles/vsd-dbbackup/tasks/main.yml index d6d45f87f0..4eea675526 100644 --- a/src/roles/vsd-dbbackup/tasks/main.yml +++ b/src/roles/vsd-dbbackup/tasks/main.yml @@ -60,6 +60,7 @@ tags: vsd - block: + - name: Read gateway purge timer config_vsd_system: vsd_auth: @@ -148,6 +149,48 @@ - "'.iso' in mount_file.stdout" msg: "Did not find iso file in mount path" + - block: + + - name: Copy pre-upgrade database check script to vsd + copy: + src: "{{ vsd_preupgrade_db_check_script_path }}/pre-upgrade-backup.sh" + dest: "/opt/vsd/data/" + mode: '0744' + + - name: Set fact for path to script on VSD + set_fact: + db_preupgrade_script_vsd_path: "/opt/vsd/data" + + when: vsd_preupgrade_db_check_script_path is defined + + - block: + + - name: Check if pre-upgrade database check script exists in migration ISO + find: + path: "/media/CDROM/" + pattern: "pre-upgrade-backup.sh" + register: preupgrade_script + + - name: Set fact for path to script on VSD + set_fact: + db_preupgrade_script_vsd_path: "/media/CDROM" + when: preupgrade_script.matched == 1 + + when: vsd_preupgrade_db_check_script_path is not defined + + - block: + + - name: Check VSD database before upgrade + shell: "{{ db_preupgrade_script_vsd_path }}/pre-upgrade-backup.sh -s" + register: preupgrade_db_check + + - name: 
Assert that db preupgrade check was successful + assert: + that: "'Successfully executed pre-upgrade-backup script' in preupgrade_db_check.stdout" + msg: "The preupgrade database check was not successfully executed. Please check /opt/vsd/logs/upgrade_issue.log for more details." + + when: db_preupgrade_script_vsd_path is defined + - name: Run backup script from mount location shell: "{{ backup_cmd }}" # noqa 305 when: vsd_mysql_password is undefined @@ -165,6 +208,11 @@ - name: Umount the ISO shell: "umount /media/CDROM" # noqa 305 + - name: Delete the VSD migration script ISO from VSD's temp directory + file: + path: /tmp/{{ vsd_migration_iso_file_name }} + state: absent + remote_user: "{{ vsd_custom_username | default(vsd_default_username) }}" become: "{{ 'no' if vsd_custom_username | default(vsd_default_username) == 'root' else 'yes' }}" become_flags: '-i' diff --git a/src/roles/vsd-decouple/tasks/report_header.yml b/src/roles/vsd-decouple/tasks/report_header.yml index 64af3bdb92..4200fab30a 100644 --- a/src/roles/vsd-decouple/tasks/report_header.yml +++ b/src/roles/vsd-decouple/tasks/report_header.yml @@ -1,5 +1,5 @@ - name: Pull facts of localhost - connection: local + delegate_to: localhost action: setup - name: Create report folder on ansible deployment host diff --git a/src/roles/vsd-deploy/tasks/brand_vsd.yml b/src/roles/vsd-deploy/tasks/brand_vsd.yml index 7f165d07f0..1f645375f8 100644 --- a/src/roles/vsd-deploy/tasks/brand_vsd.yml +++ b/src/roles/vsd-deploy/tasks/brand_vsd.yml @@ -64,6 +64,8 @@ timeout_seconds: 1200 test_interval_seconds: 30 + run_once: True + delegate_to: "{{ vsd_branding_host }}" remote_user: "{{ vsd_username | default(vsd_default_username) }}" become: "{{ 'no' if vsd_username | default(vsd_default_username) == 'root' else 'yes' }}" vars: @@ -71,15 +73,19 @@ - name: Create backup directory for original branding file: - path: "{{ metro_backup_root }}/{{ hostname }}-branding" + path: "{{ metro_backup_root }}/{{ vsd_branding_host }}-branding" state: directory delegate_to: localhost + run_once: True - name: Backup original branding fetch: src: "/tmp/original-nuage-branding.zip" - dest: "{{ metro_backup_root }}/{{ hostname }}-branding" + dest: "{{ metro_backup_root }}/{{ vsd_branding_host }}-branding/" fail_on_missing: yes + flat: yes + run_once: True + delegate_to: "{{ vsd_branding_host }}" remote_user: "{{ vsd_username | default(vsd_default_username) }}" become: "{{ 'no' if vsd_username | default(vsd_default_username) == 'root' else 'yes' }}" vars: diff --git a/src/roles/vsd-deploy/tasks/main.yml b/src/roles/vsd-deploy/tasks/main.yml index abb05c2ea1..dafe15f2fd 100644 --- a/src/roles/vsd-deploy/tasks/main.yml +++ b/src/roles/vsd-deploy/tasks/main.yml @@ -308,7 +308,7 @@ - name: Wait for master VCIN processes to become running monit_waitfor_service: - name: "{{ vcin_proc_pre['state'].keys() }}" + name: "{{ vcin_proc_pre['state'].keys() | list }}" timeout_seconds: 1200 test_interval_seconds: 10 @@ -421,7 +421,11 @@ when: not skip_vsd_deploy remote_user: "{{ vsd_default_username }}" -- import_tasks: brand_vsd.yml +- name: Apply VSD branding + include_tasks: brand_vsd.yml + loop: "{{ groups['primary_vsds'] }}" + loop_control: + loop_var: vsd_branding_host when: - not skip_vsd_deploy - branding_zip_file is defined @@ -461,7 +465,7 @@ - name: wait for VSD common, core and stats services to become running monit_waitfor_service: - name: "{{ proc_list['state'].keys() | difference(remove_list) }}" + name: "{{ proc_list['state'].keys() | list | difference(remove_list) }}" 
timeout_seconds: 1200 test_interval_seconds: 30 @@ -493,6 +497,8 @@ when: - vstats is defined - not skip_vsd_deploy + vars: + ansible_python_interpreter: /usr/bin/python2 - name: Install statistics for stats-out VSDs (Can take 20 minutes) command: /opt/vsd/install.sh -t t -y diff --git a/src/roles/vsd-deploy/tasks/openstack.yml b/src/roles/vsd-deploy/tasks/openstack.yml index b9ce78bcba..6a1c434436 100644 --- a/src/roles/vsd-deploy/tasks/openstack.yml +++ b/src/roles/vsd-deploy/tasks/openstack.yml @@ -20,7 +20,7 @@ when: not upgrade - name: Get VSD details from OpenStack - os_server_facts: + os_server_info: auth: "{{ openstack_auth }}" server: "{{ vm_name }}" @@ -30,10 +30,10 @@ validate_certs: false register: vsd_ip delegate_to: localhost - + - name: Set vsd mgmt ip set_fact: - vsd_mgmt_ip: "{{ vsd_ip['ansible_facts']['openstack_servers'][0]['addresses'][openstack_network][0]['addr'] }}" + vsd_mgmt_ip: "{{ vsd_ip['openstack_servers'][0]['addresses'][openstack_network][0]['addr'] }}" - name: Wait for VSD ssh to be ready include_role: diff --git a/src/roles/vsd-deploy/tasks/renew_ejabberd_license.yml b/src/roles/vsd-deploy/tasks/renew_ejabberd_license.yml index d3ac59cbe9..674e620a58 100644 --- a/src/roles/vsd-deploy/tasks/renew_ejabberd_license.yml +++ b/src/roles/vsd-deploy/tasks/renew_ejabberd_license.yml @@ -1,5 +1,4 @@ -- name: Run when ejabberd license file is defined - block: +- block: - name: Check that the ejabberdctl file exists stat: @@ -7,62 +6,78 @@ register: ejabberdctl_result - name: Install ejabberdctl if not present - command: "/opt/vsd/install/install_ejabberd.sh -9 /opt/vsd/Packages -x {{ vsd_fqdn }}" + command: "/opt/vsd/install/install_ejabberd.sh -9 /opt/vsd/Packages -x {{ vsd_fqdn }} {{ (enable_ipv6 | default(False)) | ternary('-6', '') }}" when: not ejabberdctl_result.stat.exists - - name: Find the ejabberd base directory - command: find /opt/ejabberd/lib/ -type d -name "ejabberd_nuage*" - register: ejabberd_base_dir - - - name: Backup the current license files for ejabberd - # noqa 303 - command: >- - tar zcf orig_backup.tgz ejabberd.beam ejabberd_admin.beam ejabberd_c2s.beam - ejabberd_cluster.beam ejabberd_config.beam ejabberd_listener.beam ejabberd_sm.beam - args: - chdir: "{{ ejabberd_base_dir.stdout }}/ebin" - - - name: Copy new ejabberd license file to root folder on VSD - copy: - dest: "/root/" - src: "{{ vsd_ejabberd_license_file }}" - - - name: Untar the new ejabberd license file - # noqa 303 - command: "tar zxf /root/{{ vsd_ejabberd_license_file | basename }}" - args: - chdir: "{{ ejabberd_base_dir.stdout }}/ebin" - - - name: Change ownership of licenses - # noqa 302 - command: >- - chown ejabberd:hadoopusers ejabberd_license.beam ejabberd_cluster.beam - ejabberd_c2s.beam ejabberd.beam ejabberd_sm.beam ejabberd_router.beam ejabberd_listener.beam - args: - chdir: "{{ ejabberd_base_dir.stdout }}/ebin" - when: not(ejabberd_base_dir.stdout is search("3.2.13_5")) - - - name: Change ownership of licenses for version 3.2.13_5 - # noqa 302 - command: >- - chown ejabberd:hadoopusers ejabberd_cluster.beam ejabberd_c2s.beam ejabberd.beam ejabberd_sm.beam ejabberd_router.beam ejabberd_listener.beam - args: - chdir: "{{ ejabberd_base_dir }}/ebin" - when: ejabberd_base_dir.stdout is search("3.2.13_5") + - name: Run when ejabberd license file is defined + block: + + - name: Find the ejabberd base directory + command: find /opt/ejabberd/lib/ -type d -name "ejabberd_nuage*" + register: ejabberd_base_dir + + - name: Backup the current license files for ejabberd + # noqa 303 
+ command: >- + tar zcf orig_backup.tgz ejabberd.beam ejabberd_admin.beam ejabberd_c2s.beam + ejabberd_cluster.beam ejabberd_config.beam ejabberd_listener.beam ejabberd_sm.beam + args: + chdir: "{{ ejabberd_base_dir.stdout }}/ebin" + + - name: Copy new ejabberd license file to root folder on VSD + copy: + dest: "/root/" + src: "{{ vsd_ejabberd_license_file }}" + + - name: Untar the new ejabberd license file + # noqa 303 + command: "tar zxf /root/{{ vsd_ejabberd_license_file | basename }}" + args: + chdir: "{{ ejabberd_base_dir.stdout }}/ebin" + + - name: Change ownership of licenses + # noqa 302 + command: >- + chown ejabberd:hadoopusers ejabberd_license.beam ejabberd_cluster.beam + ejabberd_c2s.beam ejabberd.beam ejabberd_sm.beam ejabberd_router.beam ejabberd_listener.beam + args: + chdir: "{{ ejabberd_base_dir.stdout }}/ebin" + when: not(ejabberd_base_dir.stdout is search("3.2.13_5")) + + - name: Change ownership of licenses for version 3.2.13_5 + # noqa 302 + command: >- + chown ejabberd:hadoopusers ejabberd_cluster.beam ejabberd_c2s.beam ejabberd.beam ejabberd_sm.beam ejabberd_router.beam ejabberd_listener.beam + args: + chdir: "{{ ejabberd_base_dir }}/ebin" + when: ejabberd_base_dir.stdout is search("3.2.13_5") + + when: vsd_ejabberd_license_file is defined and vsd_ejabberd_license_file + + - name: Changing log_rotate_size to non-zero + lineinfile: + path: /opt/ejabberd/conf/ejabberd.yml + regexp: '^log_rotate_size:' + line: "log_rotate_size: 10485760" + when: "vsd_version.stdout is version('20.10.1', '>=')" - name: Start ejabberd command: /opt/ejabberd/bin/ejabberdctl start - register: license_check_after_renew - - name: Check that new ejabberd license is valid - command: /opt/ejabberd/bin/ejabberdctl license_info - register: license_check_after_renew + - name: Check ejabberd status + command: /opt/ejabberd/bin/ejabberdctl status + register: ejabberd_status retries: 5 - until: "license_check_after_renew.rc == 0" + until: "ejabberd_status.rc == 0 and ejabberd_status.stdout.find('started with status: started') != -1" delay: 20 + ignore_errors: yes + + - name: Check if not able to start ejabberd + assert: + that: "{{ ejabberd_status.rc }} == 0 and ejabberd_status.stdout.find('started with status: started') != -1" + msg: "Error starting ejabberd. Please check if the ejabberd license has expired." 
- name: Stop ejabberd service command: /opt/ejabberd/bin/ejabberdctl stop - when: vsd_ejabberd_license_file is defined remote_user: "{{ vsd_default_username }}" diff --git a/src/roles/vsd-deploy/tasks/vsd_security_hardening.yml b/src/roles/vsd-deploy/tasks/vsd_security_hardening.yml index 39d7177d13..fe0af2a607 100644 --- a/src/roles/vsd-deploy/tasks/vsd_security_hardening.yml +++ b/src/roles/vsd-deploy/tasks/vsd_security_hardening.yml @@ -71,9 +71,14 @@ backup: no dest: /tmp/authorized_keys_temp + - name: Add proxy setup + set_fact: + proxy_conf: '-o ProxyCommand="ssh -W %h:%p -q {{ ssh_proxy_configuration }}"' + when: ssh_proxy_configuration is defined + - name: Copy authorized file to vsd custom user home directory shell: >- - sshpass -p'{{ vsd_custom_password }}' scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null /tmp/authorized_keys_temp + sshpass -p'{{ vsd_custom_password }}' scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null {{ proxy_conf | default('') }} /tmp/authorized_keys_temp {{ vsd_custom_username }}@{{ mgmt_ip }}:/home/{{ vsd_custom_username }}/.ssh/authorized_keys no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" @@ -393,7 +398,7 @@ - name: Wait for VSD core processes to become running monit_waitfor_service: - name: "{{ proc_list['state'].keys() }}" + name: "{{ proc_list['state'].keys() | list }}" timeout_seconds: 1200 test_interval_seconds: 30 diff --git a/src/roles/vsd-destroy/tasks/vcenter.yml b/src/roles/vsd-destroy/tasks/vcenter.yml index 8b3569dff4..d321dfb8b5 100644 --- a/src/roles/vsd-destroy/tasks/vcenter.yml +++ b/src/roles/vsd-destroy/tasks/vcenter.yml @@ -1,7 +1,7 @@ --- - name: Finding VM folder (ignoring errors) - connection: local + delegate_to: localhost vmware_guest_find: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -22,13 +22,13 @@ when: vsd_vm_folder.exception is defined - name: Gathering info on VM - connection: local + delegate_to: localhost vmware_guest_info: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" password: "{{ vcenter.password }}" datacenter: "{{ vcenter.datacenter }}" - folder: "/{{ vcenter.datacenter }}{{ vsd_vm_folder['folders'][0] }}" + folder: "{{ vsd_vm_folder['folders'][0] }}" name: "{{ vm_name }}" validate_certs: no register: vsd_vm_facts @@ -56,7 +56,7 @@ - debug: var=uuid - name: Turn off autostart - connection: local + delegate_to: localhost vmware_autostart: name: "{{ vm_name }}" uuid: "{{ uuid }}" @@ -67,28 +67,28 @@ validate_certs: no - name: Power off the vsd VM - connection: local + delegate_to: localhost vmware_guest: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" password: "{{ vcenter.password }}" validate_certs: no datacenter: "{{ vcenter.datacenter }}" - folder: "/{{ vcenter.datacenter }}{{ vsd_vm_folder['folders'][0] }}" + folder: "{{ vsd_vm_folder['folders'][0] }}" name: "{{ vm_name }}" state: "poweredoff" when: vsd_vm_facts['instance']['hw_power_status'] == 'poweredOn' - name: Removing the vsd VM - connection: local + delegate_to: localhost vmware_guest: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" password: "{{ vcenter.password }}" validate_certs: no datacenter: "{{ vcenter.datacenter }}" - folder: "/{{ vcenter.datacenter }}{{ vsd_vm_folder['folders'][0] }}" + folder: "{{ vsd_vm_folder['folders'][0] }}" name: "{{ vm_name }}" state: "absent" when: (not preserve_vm | default( False )) diff --git a/src/roles/vsd-health/tasks/main.yml b/src/roles/vsd-health/tasks/main.yml index 34ff2c2af8..e7f7ac1d1e 
100644 --- a/src/roles/vsd-health/tasks/main.yml +++ b/src/roles/vsd-health/tasks/main.yml @@ -348,25 +348,11 @@ - block: - name: Verify that JMS gateway is reachable on Master Node - uri: - url: https://{{ jms_master_hostname }}:61619 - method: GET - user: "{{ vsd_auth.username }}" - password: "{{ vsd_auth.password }}" - status_code: 200 - validate_certs: False - until: webresult.status == 200 + wait_for: + host: "{{ jms_master_hostname }}" + port: 61619 retries: 15 delay: 60 - register: webresult - delegate_to: localhost - - - name: Write JMS Gateway accessibility separator to report file - nuage_append: filename="{{ report_path }}" text="-----Results of verifying JMS gateway accessibility from master node-----\n" - delegate_to: localhost - - - name: Write web interface result - nuage_append: filename="{{ report_path }}" text="{{ webresult | to_nice_json }}\n" delegate_to: localhost when: jms_master_hostname is defined and jms_master_hostname diff --git a/src/roles/vsd-health/tasks/report_start.yml b/src/roles/vsd-health/tasks/report_start.yml index d285031001..ab09ce290a 100644 --- a/src/roles/vsd-health/tasks/report_start.yml +++ b/src/roles/vsd-health/tasks/report_start.yml @@ -1,5 +1,5 @@ - name: Pull facts of localhost - connection: local + delegate_to: localhost action: setup - name: Create report folder on ansible deployment host diff --git a/src/roles/vsd-predeploy/tasks/vcenter-enable-interface.yml b/src/roles/vsd-predeploy/tasks/vcenter-enable-interface.yml index aaebf022f3..064594413f 100644 --- a/src/roles/vsd-predeploy/tasks/vcenter-enable-interface.yml +++ b/src/roles/vsd-predeploy/tasks/vcenter-enable-interface.yml @@ -9,7 +9,7 @@ node_disabled_interface: False - name: Rewriting eth0 network script file to the VM - connection: local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -24,7 +24,7 @@ vm_shell_args: " '{{ lookup('template', 'ifcfg-eth0.j2') }}' > /etc/sysconfig/network-scripts/ifcfg-eth0" - name: Set the owner and group on the eth0 network script file in the VM - connection: local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -39,7 +39,7 @@ vm_shell_args: " 0 0 /etc/sysconfig/network-scripts/ifcfg-eth0" - name: Restart the network service after adding the network script - connection: local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" diff --git a/src/roles/vsd-predeploy/tasks/vcenter.yml b/src/roles/vsd-predeploy/tasks/vcenter.yml index 440cede23f..31307f01d0 100644 --- a/src/roles/vsd-predeploy/tasks/vcenter.yml +++ b/src/roles/vsd-predeploy/tasks/vcenter.yml @@ -53,7 +53,7 @@ tasks_from: vcenter-deploy-image - name: Waiting until VMware tools becomes available - connection: local + delegate_to: localhost vmware_guest_tools_wait: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -71,7 +71,7 @@ - debug: var=uuid - name: Turn on autostart - connection: local + delegate_to: localhost vmware_autostart: name: "{{ vm_name }}" uuid: "{{ uuid }}" @@ -114,7 +114,7 @@ vm_password: "{{ vsd_default_password }}" - name: Reboot VM - connection: local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" diff --git a/src/roles/vsd-services-start/tasks/main.yml b/src/roles/vsd-services-start/tasks/main.yml index 94904f5f64..ca70925810 100644 --- a/src/roles/vsd-services-start/tasks/main.yml +++ 
b/src/roles/vsd-services-start/tasks/main.yml @@ -4,37 +4,71 @@ command: /opt/vsd/sysmon/bootPercona.py --force when: "'vsd_ha_node1' in groups and inventory_hostname in groups['vsd_ha_node1']" - - name: Start VSD common processes - command: monit -g vsd-common start - - - name: Get monit summary for common processes on VSD - vsd_monit: - group: vsd-common - register: proc_list - - - name: Wait for VSD common processes to become running - monit_waitfor_service: - name: "{{ proc_list['state'].keys() }}" - timeout_seconds: 1200 - test_interval_seconds: 30 - - - name: Start VSD core processes (ignoring errors) - command: monit -g vsd-core start - register: monit_core_output - ignore_errors: inventory_hostname not in groups['primary_vsds'] - - - name: Get monit summary for core processes on VSD - vsd_monit: - group: vsd-core - register: proc_list - when: monit_core_output.rc == 0 - - - name: Wait for VSD core processes to become running - monit_waitfor_service: - name: "{{ proc_list['state'].keys() }}" - timeout_seconds: 1200 - test_interval_seconds: 30 - when: monit_core_output.rc == 0 + - name: Check that 'vsd-common' group is present in vsd + command: monit -g vsd-common status + ignore_errors: yes + register: group_status_common + + - name: Check if an error should be ignored + assert: + that: "not group_status_common.failed or group_status_common.stderr is search('not found')" + msg: "Error while attempting to check group is present in current vsd" + + - block: + + - name: Start VSD common processes + command: monit -g vsd-common start + + - name: Get monit summary for common processes on VSD + vsd_monit: + group: vsd-common + register: proc_list + + - name: Wait for VSD common processes to become running + monit_waitfor_service: + name: "{{ proc_list['state'].keys() | list }}" + timeout_seconds: 1200 + test_interval_seconds: 30 + + when: group_status_common.rc == 0 + + - name: Check that 'vsd-core' group is present in vsd + command: monit -g vsd-core status + ignore_errors: yes + register: group_status_core + + - name: Check if an error should be ignored + assert: + that: "not group_status_core.failed or group_status_core.stderr is search('not found')" + msg: "Error while attempting to check group is present in current vsd" + + - block: + + - name: Start VSD core processes (ignoring errors) + command: monit -g vsd-core start + + - name: Get monit summary for core processes on VSD + vsd_monit: + group: vsd-core + register: proc_list + + - name: Wait for VSD core processes to become running + monit_waitfor_service: + name: "{{ proc_list['state'].keys() | list }}" + timeout_seconds: 1200 + test_interval_seconds: 30 + + when: group_status_core.rc == 0 + + - name: Check that 'vsd-stats' group is present in vsd + command: monit -g vsd-stats status + ignore_errors: yes + register: group_status_stats + + - name: Check if an error should be ignored + assert: + that: "not group_status_stats.failed or group_status_stats.stderr is search('not found')" + msg: "Error while attempting to check group is present in current vsd" - block: @@ -48,11 +82,11 @@ - name: Wait for VSD stats processes to become running monit_waitfor_service: - name: "{{ proc_list['state'].keys() }}" + name: "{{ proc_list['state'].keys() | list }}" timeout_seconds: 1200 test_interval_seconds: 30 - when: groups['vstats'] is defined and groups['vstats'] and inventory_hostname in groups['primary_vsds'] + when: group_status_stats.rc == 0 remote_user: "{{ vsd_custom_username | default(vsd_default_username) }}" become: "{{ 'no' if 
vsd_custom_username | default(vsd_default_username) == 'root' else 'yes' }}" diff --git a/src/roles/vsd-services-stop/tasks/main.yml b/src/roles/vsd-services-stop/tasks/main.yml index 921f4a9e58..9452773959 100644 --- a/src/roles/vsd-services-stop/tasks/main.yml +++ b/src/roles/vsd-services-stop/tasks/main.yml @@ -4,18 +4,22 @@ - block: + - name: Check that 'vsd-stats' group is present in vsd + command: monit -g vsd-stats status + ignore_errors: yes + register: group_status_stats + + - name: Check if an error should be ignored + assert: + that: "not group_status_stats.failed or group_status_stats.stderr is search('not found')" + msg: "Error while attempting to check group is present in current vsd" + - block: - name: Stop vsd statistics services shell: "{{ stop_stats }}" - ignore_errors: yes register: stop_stats_output - - name: Check if an error should be ignored - assert: - that: "not stop_stats_output.failed or stop_stats_output.stderr is search('not found')" - msg: "Error while attempting to stop vsd stats services" - - name: Pause for processes to exit pause: seconds: 20 @@ -30,7 +34,17 @@ ignore_errors: yes when: list_stats_pids.stdout.strip()!= "" # noqa 602 - when: groups['vstats'] is defined and groups['vstats'] + when: group_status_stats.rc == 0 + + - name: Check that 'vsd-core' group is present in vsd + command: monit -g vsd-core status + ignore_errors: yes + register: group_status_core + + - name: Check if an error should be ignored + assert: + that: "not group_status_core.failed or group_status_core.stderr is search('not found')" + msg: "Error while attempting to check group is present in current vsd" - block: @@ -52,10 +66,24 @@ ignore_errors: yes when: list_core_pids.stdout.strip()!="" # noqa 602 + when: group_status_core.rc == 0 + + - name: Check that 'vsd-common' group is present in vsd + command: monit -g vsd-common status + ignore_errors: yes + register: group_status_common + + - name: Check if an error should be ignored + assert: + that: "not group_status_common.failed or group_status_common.stderr is search('not found')" + msg: "Error while attempting to check group is present in current vsd" + + - block: + - name: Stop vsd common services (ignoring errors) shell: "{{ stop_vsd_common }}" ignore_errors: yes - + - name: Pause for processes to exit pause: seconds: 20 @@ -70,6 +98,8 @@ ignore_errors: yes when: list_common_pids.stdout.strip()!="" # noqa 602 + when: group_status_common.rc == 0 + remote_user: "{{ vsd_custom_username | default(vsd_default_username) }}" become: "{{ 'no' if vsd_custom_username | default(vsd_default_username) == 'root' else 'yes' }}" vars: diff --git a/src/roles/vsd-upgrade-complete/tasks/check_certificate_expiration_time.yml b/src/roles/vsd-upgrade-complete/tasks/check_certificate_expiration_time.yml index b35742d42e..e423a26bf5 100644 --- a/src/roles/vsd-upgrade-complete/tasks/check_certificate_expiration_time.yml +++ b/src/roles/vsd-upgrade-complete/tasks/check_certificate_expiration_time.yml @@ -1,10 +1,7 @@ - name: Check Certificates Expiration Time shell: "{{ check_certificate_expiration_time_command }} | grep ocspsigner" register: certificate_expiration_time - remote_user: "{{ vsd_custom_username | default(vsd_default_username) }}" - become: "{{ 'no' if vsd_custom_username | default(vsd_default_username) == 'root' else 'yes' }}" - vars: - ansible_become_pass: "{{ vsd_custom_password | default(vsd_default_password) }}" + remote_user: "{{ vsd_default_username }}" - name: Validate ocspsigner user is present assert: @@ -14,10 +11,7 @@ - name: 
Get timestamp from the system shell: "date +%Y-%m-%d%H:%M:%S" register: tstamp - remote_user: "{{ vsd_custom_username | default(vsd_default_username) }}" - become: "{{ 'no' if vsd_custom_username | default(vsd_default_username) == 'root' else 'yes' }}" - vars: - ansible_become_pass: "{{ vsd_custom_password | default(vsd_default_password) }}" + remote_user: "{{ vsd_default_username }}" - name: Extracting Current date and time from timestamp set_fact: @@ -32,5 +26,5 @@ - name: Ensuring certificate is unexpired assert: - that: time_diff > 1 + that: (time_diff | int) > 1 fail_msg: "The certification is expired" diff --git a/src/roles/vsd-upgrade-complete/tasks/check_monit_status.yml b/src/roles/vsd-upgrade-complete/tasks/check_monit_status.yml index bf5bf70d38..4494ba610f 100644 --- a/src/roles/vsd-upgrade-complete/tasks/check_monit_status.yml +++ b/src/roles/vsd-upgrade-complete/tasks/check_monit_status.yml @@ -21,7 +21,7 @@ - name: wait for VSD common , core and stats services to become running monit_waitfor_service: - name: "{{ proc_list['state'].keys() | difference(remove_list) }}" + name: "{{ proc_list['state'].keys() | list | difference(remove_list) }}" timeout_seconds: 1200 test_interval_seconds: 30 remote_user: "{{ vsd_default_username }}" diff --git a/src/roles/vsd-upgrade-destroy/tasks/main.yml b/src/roles/vsd-upgrade-destroy/tasks/main.yml index 316f5be232..e00bd9c303 100644 --- a/src/roles/vsd-upgrade-destroy/tasks/main.yml +++ b/src/roles/vsd-upgrade-destroy/tasks/main.yml @@ -1,5 +1,5 @@ --- -- name: Invoke vsd-destroy role on new VM +- name: Invoke vsd-destroy role on old VM include_role: name: vsd-destroy vars: diff --git a/src/roles/vsr-predeploy/tasks/kvm_check_hypervisor.yml b/src/roles/vsr-predeploy/tasks/kvm_check_hypervisor.yml index 0eb71d4e84..5b1bb34b2e 100644 --- a/src/roles/vsr-predeploy/tasks/kvm_check_hypervisor.yml +++ b/src/roles/vsr-predeploy/tasks/kvm_check_hypervisor.yml @@ -13,8 +13,9 @@ - libvirt - bridge-utils - libguestfs-tools - - libvirt-python state: present + vars: + ansible_python_interpreter: /usr/bin/python2 - name: Make sure libvirtd has started service: @@ -33,7 +34,6 @@ - libvirt-bin - bridge-utils - libguestfs-tools - - python-libvirt state: present - name: Make sure libvirtd has started @@ -44,6 +44,9 @@ when: ansible_os_family is match("Debian") + - name: Install libvirt-python + pip: name=libvirt-python + delegate_to: "{{ target_server }}" remote_user: "{{ target_server_username }}" become: "{{ 'no' if target_server_username == 'root' else 'yes' }}" diff --git a/src/roles/vstat-deploy/tasks/main.yml b/src/roles/vstat-deploy/tasks/main.yml index b396b1860a..7e94d6fd92 100644 --- a/src/roles/vstat-deploy/tasks/main.yml +++ b/src/roles/vstat-deploy/tasks/main.yml @@ -1,6 +1,6 @@ - name: Clean known_hosts of VSTATs (ignoring errors) known_hosts: - name: "{{ mgmt_ip }}" + name: "{{ hostname }}" state: absent delegate_to: localhost no_log: True @@ -416,12 +416,12 @@ when: vstat_sa_or_ha is match('ha') run_once: True - delegate_to: "{{ vsd_node1_hostname }}" - remote_user: "{{ hostvars[vsd_node1_hostname].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) }}" + delegate_to: "{{ hostvars[inventory_hostname].vsd_hostname_list[0] }}" + remote_user: "{{ hostvars[hostvars[inventory_hostname].vsd_hostname_list[0]].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) }}" become: >- - {{ 'no' if hostvars[vsd_node1_hostname].vsd_custom_username | default(vsd_custom_username | 
default(vsd_default_username)) == 'root' else 'yes' }} + {{ 'no' if hostvars[inventory_hostname].vsd_hostname_list[0].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) == 'root' else 'yes' }} vars: - ansible_become_pass: "{{ hostvars[vsd_node1_hostname].vsd_custom_password | default(vsd_custom_password | default(vsd_default_password)) }}" + ansible_become_pass: "{{ hostvars[inventory_hostname].vsd_hostname_list[0].vsd_custom_password | default(vsd_custom_password | default(vsd_default_password)) }}" - name: Update elasticsearch data path lineinfile: diff --git a/src/roles/vstat-deploy/tasks/openstack.yml b/src/roles/vstat-deploy/tasks/openstack.yml index 58767796d3..1a82d2183d 100644 --- a/src/roles/vstat-deploy/tasks/openstack.yml +++ b/src/roles/vstat-deploy/tasks/openstack.yml @@ -19,7 +19,7 @@ when: not upgrade - name: Get VSTAT details from OpenStack - os_server_facts: + os_server_info: auth: "{{ openstack_auth }}" server: "{{ vm_name }}" @@ -32,7 +32,7 @@ - name: Set vstat mgmt ip set_fact: - vstat_mgmt_ip: "{{ vstat_ip['ansible_facts']['openstack_servers'][0]['addresses'][openstack_network][0]['addr'] }}" + vstat_mgmt_ip: "{{ vstat_ip['openstack_servers'][0]['addresses'][openstack_network][0]['addr'] }}" - name: Wait for VSTAT ssh to be ready include_role: diff --git a/src/roles/vstat-deploy/tasks/vstat_security_hardening.yml b/src/roles/vstat-deploy/tasks/vstat_security_hardening.yml new file mode 100644 index 0000000000..b28ead62df --- /dev/null +++ b/src/roles/vstat-deploy/tasks/vstat_security_hardening.yml @@ -0,0 +1,101 @@ +- name: Clean known_hosts of VSTATs (ignoring errors) + known_hosts: + name: "{{ mgmt_ip }}" + state: absent + delegate_to: localhost + no_log: True + ignore_errors: True + +- name: Clean known_hosts of VSTATs (ignoring errors) + known_hosts: + name: "{{ inventory_hostname }}" + state: absent + delegate_to: localhost + no_log: True + ignore_errors: True + +- name: Check if custom user already exists and is connected via ssh + command: ssh -oStrictHostKeyChecking=no -oPasswordAuthentication=no {{ vstat_custom_username }}@{{ inventory_hostname }} exit 0 + register: ssh_custom_user + delegate_to: localhost + ignore_errors: yes + when: vstat_custom_username is defined + +- block: + + - block: + + - name: Create an SSH pass + command: openssl passwd -1 {{ vstat_custom_password }} + register: openssl_pwd + + - name: Check if user exists + command: id -u {{ vstat_custom_username }} + ignore_errors: yes + register: user_exists + + - name: Create the user if not present + command: useradd -p {{ openssl_pwd.stdout }} {{ vstat_custom_username }} + when: "'no such user' in user_exists.stderr" + + - name: Add user to sudoers list + command: usermod -aG wheel {{ vstat_custom_username }} + + - name: Update sshd config file + lineinfile: + path: /etc/ssh/sshd_config + regexp: '#PermitRootLogin yes' + line: 'PermitRootLogin no' + + - name: Create .ssh directory for vstat custom user + file: + path: '/home/{{ vstat_custom_username }}/.ssh/' + state: directory + owner: "{{ vstat_custom_username }}" + group: "{{ vstat_custom_username }}" + + - block: + + - name: Get the public key for the current user + command: cat "{{ user_ssh_pub_key }}" + register: current_user_ssh_key + + - name: Create a temporary copy of the authorized_keys file + template: + src: "{{ role_path }}/../common/templates/authorized_keys.j2" + backup: no + dest: /tmp/authorized_keys_temp + + - name: Add proxy setup + set_fact: + proxy_conf: '-o ProxyCommand="ssh -W %h:%p -q {{ 
ssh_proxy_configuration }}"' + when: ssh_proxy_configuration is defined + + - name: Copy authorized_keys file to vstat custom user home directory + shell: >- + sshpass -p'{{vstat_custom_password }}' scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null {{ proxy_conf | default("") }} + /tmp/authorized_keys_temp {{ vstat_custom_username }}@{{ mgmt_ip }}:/home/{{ vstat_custom_username }}/.ssh/authorized_keys + no_log: "{{ lookup('env', 'METROAE_NO_LOG') or 'true' }}" + + - name: Remove temporary copy of authorized_keys file + file: path=/tmp/authorized_keys_temp state=absent + + delegate_to: localhost + + - name: Set permission on the authorized_keys file to 644 + shell: 'chmod 644 /home/{{ vstat_custom_username }}/.ssh/authorized_keys' + remote_user: "{{ vstat_custom_username }}" + + - name: Restart ssh server + shell: /bin/systemctl restart sshd.service + + - name: Set vstat user to custom user + set_fact: + vstat_username: "{{ vstat_custom_username }}" + + when: + - vstat_custom_username is defined + - vstat_custom_password is defined + - ssh_custom_user.rc != 0 + + remote_user: "{{ vstat_default_username }}" diff --git a/src/roles/vstat-destroy/tasks/vcenter.yml b/src/roles/vstat-destroy/tasks/vcenter.yml index 5a04ca632a..872c253fce 100644 --- a/src/roles/vstat-destroy/tasks/vcenter.yml +++ b/src/roles/vstat-destroy/tasks/vcenter.yml @@ -1,24 +1,33 @@ --- - name: Finding VM folder (ignoring errors) - connection: local + delegate_to: localhost vmware_guest_find: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" password: "{{ vcenter.password }}" - datacenter: "{{ vcenter.datacenter }}" name: "{{ vm_name }}" validate_certs: no register: vstat_vm_folder ignore_errors: yes +- name: Check output message for unexpected errors + assert: + that: vstat_vm_folder.msg is search('Unable to find folders for virtual machine') + fail_msg: "{{ vstat_vm_folder.msg }}" + when: vstat_vm_folder.msg is defined + +- name: Check for exception in VSTAT VM Folder + fail: msg="Exception found {{ vstat_vm_folder.exception }}" + when: vstat_vm_folder.exception is defined + - name: Gathering info on VM (ignoring errors) - connection: local + delegate_to: localhost vmware_guest_info: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" password: "{{ vcenter.password }}" datacenter: "{{ vcenter.datacenter }}" - folder: "/{{ vcenter.datacenter }}{{ vstat_vm_folder['folders'][0] }}" + folder: "{{ vstat_vm_folder['folders'][0] }}" name: "{{ vm_name }}" validate_certs: no register: vstat_vm_facts @@ -47,7 +56,7 @@ with_items: "{{ vm_info.virtual_machines }}" - name: Turn off autostart - connection: local + delegate_to: localhost vmware_autostart: name: "{{ vm_name }}" uuid: "{{ uuid }}" @@ -58,27 +67,27 @@ validate_certs: no - name: Power off the Stats VM - connection: local + delegate_to: localhost vmware_guest: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" password: "{{ vcenter.password }}" validate_certs: no datacenter: "{{ vcenter.datacenter }}" - folder: "/{{ vcenter.datacenter }}{{ vstat_vm_folder['folders'][0] }}" + folder: "{{ vstat_vm_folder['folders'][0] }}" name: "{{ vm_name }}" state: "poweredoff" when: vstat_vm_facts['instance']['hw_power_status'] == 'poweredOn' - name: Removing the Stats VM - connection: local + delegate_to: localhost vmware_guest: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" password: "{{ vcenter.password }}" validate_certs: no datacenter: "{{ vcenter.datacenter }}" - folder: "/{{ vcenter.datacenter }}{{ 
vstat_vm_folder['folders'][0] }}" + folder: "{{ vstat_vm_folder['folders'][0] }}" name: "{{ vm_name }}" state: "absent" when: (not preserve_vm | default( False )) diff --git a/src/roles/vstat-health/tasks/main.yml b/src/roles/vstat-health/tasks/main.yml index 2c0d56aecd..a31fb7994d 100644 --- a/src/roles/vstat-health/tasks/main.yml +++ b/src/roles/vstat-health/tasks/main.yml @@ -15,6 +15,9 @@ mac_addr: False register: net_conf remote_user: "{{ vstat_username | default(vstat_default_username) }}" + become: "{{ 'no' if vstat_username | default(vstat_default_username) == 'root' else 'yes' }}" + vars: + ansible_become_pass: "{{ vstat_password | default(vstat_default_password) }}" - name: Print network config when verbosity >= 1 debug: var=net_conf.info verbosity=1 @@ -30,6 +33,10 @@ - name: Stat the pid file for elasticsearch process stat: path=/var/run/elasticsearch/elasticsearch.pid register: pid + remote_user: "{{ vstat_username | default(vstat_default_username) }}" + become: "{{ 'no' if vstat_username | default(vstat_default_username) == 'root' else 'yes' }}" + vars: + ansible_become_pass: "{{ vstat_password | default(vstat_default_password) }}" - name: Verify that the elasticsearch pid file exists assert: @@ -40,6 +47,10 @@ command: systemctl is-active elasticsearch # noqa 303 register: es_process_status changed_when: True + remote_user: "{{ vstat_username | default(vstat_default_username) }}" + become: "{{ 'no' if vstat_username | default(vstat_default_username) == 'root' else 'yes' }}" + vars: + ansible_become_pass: "{{ vstat_password | default(vstat_default_password) }}" - name: Verify that elasticsearch status is active assert: @@ -63,6 +74,10 @@ status_code: 200 validate_certs: False register: webresult + remote_user: "{{ vstat_username | default(vstat_default_username) }}" + become: "{{ 'no' if vstat_username | default(vstat_default_username) == 'root' else 'yes' }}" + vars: + ansible_become_pass: "{{ vstat_password | default(vstat_default_password) }}" - name: Write web interface result separator to report file nuage_append: filename="{{ report_path }}" text="-----VSTAT Web Interface Check-----\n" @@ -80,6 +95,10 @@ until: es_health.json.unassigned_shards == 0 retries: 60 delay: 10 + remote_user: "{{ vstat_username | default(vstat_default_username) }}" + become: "{{ 'no' if vstat_username | default(vstat_default_username) == 'root' else 'yes' }}" + vars: + ansible_become_pass: "{{ vstat_password | default(vstat_default_password) }}" - name: Write shard status result separator to report file nuage_append: filename="{{ report_path }}" text="-----Shard Status Results-----\n" @@ -98,7 +117,7 @@ - name: Set fact for expected number of ES nodes set_fact: - expected_es_node_count: "{{ groups['vstats'] | default([]) | length + groups['data_vstats'] | default([]) | length + groups['primary_vstats'] | default([]) | length + groups['backup_vstats'] | default([]) | length }}" + expected_es_node_count: "{{ groups['vstats'] | default([]) | length + groups['data_vstats'] | default([]) | length + groups['primary_vstats'] | default([]) | length }}" - name: Check the ES status color and node count uri: @@ -108,8 +127,23 @@ until: es_status.json.status == 'green' and es_status.json.number_of_nodes == expected_es_node_count|int retries: 60 delay: 10 - delegate_to: "{{ vsd_hostname_list[0] }}" - remote_user: "{{ hostvars[vsd_hostname_list[0]].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) }}" + delegate_to: "{{ hostvars[inventory_hostname].vsd_hostname_list[0] }}" + 
remote_user: "{{ hostvars[hostvars[inventory_hostname].vsd_hostname_list[0]].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) }}" + become: "{{ 'no' if hostvars[inventory_hostname].vsd_hostname_list[0].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) == 'root' else 'yes' }}" + vars: + ansible_become_pass: "{{ hostvars[inventory_hostname].vsd_hostname_list[0].vsd_custom_password | default(vsd_custom_password | default(vsd_default_password)) }}" + + - name: Check the ES data node count when active/standby + uri: + url: "http://{{ inventory_hostname }}:9200/_cluster/health?pretty" + method: GET + register: es_status_active_standby + until: es_status_active_standby.json.number_of_data_nodes == groups['backup_vstats']|length|int + retries: 60 + delay: 10 + delegate_to: "{{ hostvars[inventory_hostname].vsd_hostname_list[0] }}" + remote_user: "{{ hostvars[hostvars[inventory_hostname].vsd_hostname_list[0]].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) }}" + when: groups['backup_vstats'] is defined and groups['backup_vstats']|length > 0 become: "{{ 'no' if hostvars[vsd_hostname_list[0]].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) == 'root' else 'yes' }}" vars: ansible_become_pass: "{{ hostvars[vsd_hostname_list[0]].vsd_custom_password | default(vsd_custom_password | default(vsd_default_password)) }}" diff --git a/src/roles/vstat-health/tasks/report_start.yml b/src/roles/vstat-health/tasks/report_start.yml index 4187c087db..486d9f92c3 100644 --- a/src/roles/vstat-health/tasks/report_start.yml +++ b/src/roles/vstat-health/tasks/report_start.yml @@ -1,5 +1,5 @@ - name: Pull facts of localhost - connection: local + delegate_to: localhost action: setup - name: Create report folder on ansible deployment host diff --git a/src/roles/vstat-postdeploy/tasks/main.yml b/src/roles/vstat-postdeploy/tasks/main.yml index 8f0b447d54..097ec6f27a 100644 --- a/src/roles/vstat-postdeploy/tasks/main.yml +++ b/src/roles/vstat-postdeploy/tasks/main.yml @@ -10,3 +10,5 @@ import_role: name="vstat-health" vars: report_filename: vstat-postdeploy-health + vstat_username: "{{ vstat_custom_username | default(vstat_default_username) }}" + vstat_password: "{{ vstat_custom_password | default(vstat_default_password) }}" diff --git a/src/roles/vstat-predeploy/tasks/main.yml b/src/roles/vstat-predeploy/tasks/main.yml index 0e9cfbb6fc..1853d32c87 100644 --- a/src/roles/vstat-predeploy/tasks/main.yml +++ b/src/roles/vstat-predeploy/tasks/main.yml @@ -79,6 +79,8 @@ yum: name: nfs-utils state: present + vars: + ansible_python_interpreter: /usr/bin/python2 - name: Create Backup directory file: diff --git a/src/roles/vstat-predeploy/tasks/vcenter.yml b/src/roles/vstat-predeploy/tasks/vcenter.yml index 4332da8557..bf2191122f 100644 --- a/src/roles/vstat-predeploy/tasks/vcenter.yml +++ b/src/roles/vstat-predeploy/tasks/vcenter.yml @@ -50,7 +50,7 @@ tasks_from: vcenter-deploy-image - name: Waiting until VMware tools becomes available - connection: local + delegate_to: localhost vmware_guest_tools_wait: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -68,7 +68,7 @@ - debug: var=uuid - name: Add an extra ES Disk - connection: local + delegate_to: localhost vmware_guest_disk: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -86,7 +86,7 @@ when: extra_disk_size_gb is defined - name: Turn on autostart - connection: local + delegate_to: localhost vmware_autostart: 
name: "{{ vm_name }}" uuid: "{{ uuid }}" @@ -121,7 +121,7 @@ vm_password: "{{ vstat_default_password }}" - name: Restart Network services in the VM - connection: local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" @@ -136,7 +136,7 @@ vm_shell_args: " network restart" - name: Disable cloud-init on the VM - connection: local + delegate_to: localhost vmware_vm_shell: hostname: "{{ target_server }}" username: "{{ vcenter.username }}" diff --git a/src/roles/vstat-update-license/tasks/main.yml b/src/roles/vstat-update-license/tasks/main.yml index 3a2a82c4b2..1c33781252 100644 --- a/src/roles/vstat-update-license/tasks/main.yml +++ b/src/roles/vstat-update-license/tasks/main.yml @@ -10,8 +10,7 @@ dest: "~/vstat_license" - name: Apply the patch - shell: curl -X PUT http://{{inventory_hostname}}:9200/_license -H "Content-Type:application/json" \​ - -d @vstat_license + shell: curl -X PUT http://{{inventory_hostname}}:9200/_license -H "Content-Type:application/json" -d @vstat_license ignore_errors: true register: license_applied retries: 5 diff --git a/src/roles/vstat-upgrade-prep/tasks/prep_vstat_in_place_upgrade.yml b/src/roles/vstat-upgrade-prep/tasks/prep_vstat_in_place_upgrade.yml index 846cf1840c..554f5bbacb 100644 --- a/src/roles/vstat-upgrade-prep/tasks/prep_vstat_in_place_upgrade.yml +++ b/src/roles/vstat-upgrade-prep/tasks/prep_vstat_in_place_upgrade.yml @@ -1,7 +1,7 @@ - name: Set variable for upgrade versions set_fact: upgrade_60_2010R1: - "{{ upgrade_from_version|upper|replace('R','') is search('6.0.') and upgrade_to_version|upper|replace('R','') is search('20.10.1') }}" + "{{ upgrade_from_version|upper|replace('R','') is search('6.0.') and upgrade_to_version|upper|replace('R','') is version('20.10.5', '>=') }}" - name: Save version for certain upgrades block: @@ -57,7 +57,7 @@ register: ssh_key delegate_to: "{{ vsd_hostname_list[0] }}" - remote_user: "{{ hostvars[vsd_hostname_list[0]].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) }}" + remote_user: "{{ hostvars[hostvars[inventory_hostname].vsd_hostname_list[0]].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) }}" become: "{{ 'no' if hostvars[vsd_hostname_list[0]].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) == 'root' else 'yes' }}" vars: ansible_become_pass: "{{ hostvars[vsd_hostname_list[0]].vsd_custom_password | default(vsd_custom_password | default(vsd_default_password)) }}" diff --git a/src/roles/vstat-upgrade-wrapup/tasks/post_upgrade_checks.yml b/src/roles/vstat-upgrade-wrapup/tasks/post_upgrade_checks.yml index a79ef2cc1c..acd45fd8b8 100644 --- a/src/roles/vstat-upgrade-wrapup/tasks/post_upgrade_checks.yml +++ b/src/roles/vstat-upgrade-wrapup/tasks/post_upgrade_checks.yml @@ -26,7 +26,7 @@ - name: Set variable for upgrade versions set_fact: upgrade_60_2010R1: - "{{ upgrade_from_version|upper|replace('R','') is search('6.0.') and upgrade_to_version|upper|replace('R','') is search('20.10.1') }}" + "{{ upgrade_from_version|upper|replace('R','') is search('6.0.') and upgrade_to_version|upper|replace('R','') is version('20.10.5', '>=') }}" - name: Check ES version when required block: diff --git a/src/roles/vstat-upgrade/tasks/execute_ha_script.yml b/src/roles/vstat-upgrade/tasks/execute_ha_script.yml index b58c6c0ff4..7f9a1f42a7 100644 --- a/src/roles/vstat-upgrade/tasks/execute_ha_script.yml +++ b/src/roles/vstat-upgrade/tasks/execute_ha_script.yml @@ -1,6 +1,12 
@@ - block: + + - name: Define data only VSTATs for cluster + set_fact: + data_group_params: " -d {{ groups['data_vstats'][0] }},{{ groups['data_vstats'][1] }},{{ groups['data_vstats'][2] }}" + when: stats_out | default(false) + - name: Execute VSTAT clustered script - command: /opt/vsd/vsd-es-cluster-config.sh -e {{ groups['vstats'][0] }},{{ groups['vstats'][1] }},{{ groups['vstats'][2] }} + command: /opt/vsd/vsd-es-cluster-config.sh -e {{ groups['vstats'][0] }},{{ groups['vstats'][1] }},{{ groups['vstats'][2] }}{{ data_group_params | default('') }} register: upgrade_status environment: SSHPASS: "{{ vstat_custom_root_password | default( vstat_default_password ) }}" @@ -20,6 +26,7 @@ when: - is_backup_vstats is defined - not is_backup_vstats + - primary_vstats is defined and primary_vstats - name: Execute VSTAT clustered script on backup vstats command: /opt/vsd/vsd-es-cluster-config.sh -e {{ groups['backup_vstats'][0] }},{{ groups['backup_vstats'][1] }},{{ groups['backup_vstats'][2] }} @@ -34,7 +41,7 @@ - is_backup_vstats delegate_to: "{{ vsd_hostname_list[0] }}" - remote_user: "{{ hostvars[vsd_hostname_list[0]].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) }}" + remote_user: "{{ hostvars[hostvars[inventory_hostname].vsd_hostname_list[0]].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) }}" become: "{{ 'no' if hostvars[vsd_hostname_list[0]].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) == 'root' else 'yes' }}" vars: ansible_become_pass: "{{ hostvars[vsd_hostname_list[0]].vsd_custom_password | default(vsd_custom_password | default(vsd_default_password)) }}" diff --git a/src/roles/vstat-upgrade/tasks/execute_sa_script.yml b/src/roles/vstat-upgrade/tasks/execute_sa_script.yml index fce713f279..3a0200dd18 100644 --- a/src/roles/vstat-upgrade/tasks/execute_sa_script.yml +++ b/src/roles/vstat-upgrade/tasks/execute_sa_script.yml @@ -1,7 +1,7 @@ - name: Execute VSTAT standalone script command: /opt/vsd/vsd-es-standalone.sh -e {{ inventory_hostname }} delegate_to: "{{ vsd_hostname_list[0] }}" - remote_user: "{{ hostvars[vsd_hostname_list[0]].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) }}" + remote_user: "{{ hostvars[hostvars[inventory_hostname].vsd_hostname_list[0]].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) }}" become: "{{ 'no' if hostvars[vsd_hostname_list[0]].vsd_custom_username | default(vsd_custom_username | default(vsd_default_username)) == 'root' else 'yes' }}" vars: ansible_become_pass: "{{ hostvars[vsd_hostname_list[0]].vsd_custom_password | default(vsd_custom_password | default(vsd_default_password)) }}" diff --git a/src/roles/vstat-upgrade/tasks/get_health_status.yml b/src/roles/vstat-upgrade/tasks/get_health_status.yml index 052311d904..669b633116 100644 --- a/src/roles/vstat-upgrade/tasks/get_health_status.yml +++ b/src/roles/vstat-upgrade/tasks/get_health_status.yml @@ -6,6 +6,10 @@ until: es_health.json.unassigned_shards == 0 retries: 60 delay: 10 + when: + - (groups['vstats'] is defined and inventory_hostname in groups['vstats']) or + (groups['primary_vstats'] is defined and inventory_hostname in groups['primary_vstats']) or + (groups['backup_vstats'] is defined and inventory_hostname in groups['backup_vstats']) - name: Get ES Status uri: diff --git a/src/roles/vstat-upgrade/tasks/vstat_in_place_upgrade.yml b/src/roles/vstat-upgrade/tasks/vstat_in_place_upgrade.yml index 75b451ee3b..4ab5ac16f7 
100644 --- a/src/roles/vstat-upgrade/tasks/vstat_in_place_upgrade.yml +++ b/src/roles/vstat-upgrade/tasks/vstat_in_place_upgrade.yml @@ -9,9 +9,9 @@ - name: Set variable for upgrade versions set_fact: upgrade_60_2010R1: - "{{ upgrade_from_version|upper|replace('R','') is search('6.0.') and upgrade_to_version|upper|replace('R','') is search('20.10.1') }}" + "{{ upgrade_from_version|upper|replace('R','') is search('6.0.') and upgrade_to_version|upper|replace('R','') is version('20.10.5', '>=') }}" -- name: When not upgrade from 6.0 to 20.10.R1 +- name: When not upgrade from 6.0 to 20.10 block: - name: When HA @@ -260,6 +260,7 @@ when: - upgrade_to_version is version('6.0.1', '>=') - not upgrade_60_2010R1 + - (groups['data_vstats'] is not defined or inventory_hostname not in groups['data_vstats']) - name: Assert that we are running ES version 5.5.0 assert: diff --git a/src/roles/webfilter-deploy/tasks/main.yml b/src/roles/webfilter-deploy/tasks/main.yml index fefef7ff27..0aaa1aab6d 100644 --- a/src/roles/webfilter-deploy/tasks/main.yml +++ b/src/roles/webfilter-deploy/tasks/main.yml @@ -105,7 +105,7 @@ - name: Wait for webfilter processes to become running monit_waitfor_service: - name: "{{ webfilter_proc['state'].keys() }}" + name: "{{ webfilter_proc['state'].keys() | list }}" timeout_seconds: 600 test_interval_seconds: 10 @@ -114,10 +114,44 @@ - name: Wait for webfilter processes to become running monit_waitfor_service: - name: "{{ webfilter_proc['state'].keys() }}" + name: "{{ webfilter_proc['state'].keys() | list }}" timeout_seconds: 600 test_interval_seconds: 10 + - block: + + - name: Enable Proxy Settings in "/usr/local/gcf1/etc/gcf1.conf" + command: "sed -i 's/^# Proxy Settings.*/Proxy Settings/' /usr/local/gcf1/etc/gcf1.conf" + + - name: Add PROXY_HOST in "/usr/local/gcf1/etc/gcf1.conf" + command: "sed -i 's/^PROXY_HOST=.*/PROXY_HOST={{ web_proxy_host }}/' /usr/local/gcf1/etc/gcf1.conf" + + - name: Add PROXY_PORT in "/usr/local/gcf1/etc/gcf1.conf" + command: "sed -i 's/^PROXY_PORT=.*/PROXY_PORT={{ web_proxy_port }}/' /usr/local/gcf1/etc/gcf1.conf" + + - name: Enable Proxy Settings in "/opt/vsd/webfilter/seed_34/gcf1/etc/gcf1.conf" + command: "sed -i 's/^# Proxy Settings.*/Proxy Settings/' /opt/vsd/webfilter/seed_34/gcf1/etc/gcf1.conf" + + - name: Add PROXY_HOST in "/opt/vsd/webfilter/seed_34/gcf1/etc/gcf1.conf" + command: "sed -i 's/^PROXY_HOST=.*/PROXY_HOST={{ web_proxy_host }}/' /opt/vsd/webfilter/seed_34/gcf1/etc/gcf1.conf" + + - name: Add PROXY_PORT in "/opt/vsd/webfilter/seed_34/gcf1/etc/gcf1.conf" + command: "sed -i 's/^PROXY_PORT=.*/PROXY_PORT={{ web_proxy_port }}/' /opt/vsd/webfilter/seed_34/gcf1/etc/gcf1.conf" + + - name: Restart the incompass monits to set the new proxy + command: monit restart incompass + + - name: Restart the incompass monits to set the new proxy for seed + command: monit restart incompass-34-seed + + - name: Wait for webfilter processes to become running + monit_waitfor_service: + name: "{{ webfilter_proc['state'].keys() | list }}" + timeout_seconds: 600 + test_interval_seconds: 10 + + when: web_http_proxy is defined and web_http_proxy + - block: - name: Run incompass operation command for seed @@ -128,7 +162,7 @@ - name: Wait for webfilter processes to become running monit_waitfor_service: - name: "{{ webfilter_proc['state'].keys() }}" + name: "{{ webfilter_proc['state'].keys() | list }}" timeout_seconds: 600 test_interval_seconds: 10 diff --git a/src/validate_plugin.py b/src/validate_plugin.py index 9ba32b0a87..2d9a6237de 100755 --- a/src/validate_plugin.py 
+++ b/src/validate_plugin.py @@ -279,20 +279,20 @@ def validate_hooks(plugin_directory, hooks): def main(): if len(sys.argv) != 2: - print "Validates the format of a MetroAE plugin" - print "Usage:" - print " " + sys.argv[0] + " <plugin_directory>" + print("Validates the format of a MetroAE plugin") + print("Usage:") + print(" " + sys.argv[0] + " <plugin_directory>") sys.exit(1) plugin_directory = sys.argv[1] if not os.path.isdir(plugin_directory): - print plugin_directory + " is not a plugin directory" + print(plugin_directory + " is not a plugin directory") sys.exit(1) validate_plugin(plugin_directory) - print "Plugin OK!" + print("Plugin OK!") if __name__ == '__main__': diff --git a/src/validate_schema_section.py b/src/validate_schema_section.py index ad3b65c14d..eb95be099f 100644 --- a/src/validate_schema_section.py +++ b/src/validate_schema_section.py @@ -19,7 +19,7 @@ def validate_sections(schema_contents, file_name): if len(stack) > 0: # Two sectionBegin print( - "Error in " + file_name + "! There are two overlap " + + "Error in " + file_name + "! There are two overlapping " + SECTION_BEGIN_FIELD + ': "' + stack.pop() + '" and "' + match.group(2) + '"') exit(1) @@ -29,7 +29,7 @@ # No previous match print( "Error in " + file_name + "! There is no " + - SECTION_BEGIN_FIELD + ' correspond to "' + + SECTION_BEGIN_FIELD + ' corresponding to "' + SECTION_END_FIELD + '": ' + match.group(2)) exit(1) pre_section = stack.pop() diff --git a/src/workflows.yml b/src/workflows.yml index 2e95ec59fb..f146b1c650 100644 --- a/src/workflows.yml +++ b/src/workflows.yml @@ -33,6 +33,9 @@ schemas: nuhs: file: nuhs.json title: NUHs + nfss: + file: nfss.json + title: NFSs nsgvs: file: nsgvs.json title: NSGvs diff --git a/yum_requirements.txt b/yum_requirements.txt index 3327a5185e..5b73a548c0 100644 --- a/yum_requirements.txt +++ b/yum_requirements.txt @@ -1,11 +1,11 @@ epel-release @Development tools -python-pip -python-devel.x86_64 +python3-pip +python3-devel.x86_64 openssl-devel sshpass git -python-netaddr +python36-netaddr libguestfs-tools bind-utils libselinux-python.x86_64