Merge pull request #148 from LandRegistry/develop
2.3.0
sichapman authored Apr 3, 2024
2 parents c34c00a + d14ef72 commit 945963e
Showing 33 changed files with 267 additions and 149 deletions.
30 changes: 30 additions & 0 deletions .github/workflows/linter.yml
@@ -0,0 +1,30 @@
name: Lint

on:
  pull_request:
  push:
    branches:
      - 'master'
      - 'develop'

jobs:
  build:
    name: Lint
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: read
      statuses: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      # The only difference between full and slim is the latter excludes .NET and Rust linters
      - uses: super-linter/super-linter/slim@v6
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          LINTER_RULES_PATH: .
          # Only run against ruby and markdown - we don't care about the others
          RUBY_CONFIG_FILE: .rubocop.yml
          VALIDATE_RUBY: true
          VALIDATE_MARKDOWN: true
1 change: 1 addition & 0 deletions .markdownlintignore
@@ -0,0 +1 @@
docs/
9 changes: 6 additions & 3 deletions .rubocop.yml
@@ -3,9 +3,10 @@ AllCops:

  Exclude:
    - 'apps/**/*'
    - 'scripts/docker/**/*'

Layout/LineLength:
  Max: 120
  Max: 125

# The default is 10, which is low...
Metrics/MethodLength:
@@ -20,13 +21,15 @@ Metrics/BlockNesting:
  Max: 5

# There are loads of calls to colorize which are fairly clear, but drive up AbcSize
# Set to 21 for update_apps (ensures it won't grow)
Metrics/AbcSize:
  Max: 21
  Enabled: false

Metrics/ParameterLists:
  Max: 7

Metrics/PerceivedComplexity:
  Enabled: false

Style/FrozenStringLiteralComment:
  Enabled: false

8 changes: 6 additions & 2 deletions CONTRIBUTING.md
@@ -44,11 +44,15 @@ Project maintainers have the right and responsibility to remove, edit, or reject

### Scope

This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

Representation of a project may be further defined and clarified by project maintainers.

### Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances.

The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

60 changes: 45 additions & 15 deletions README.md
@@ -50,7 +50,11 @@ Other `run.sh` parameters are:
### Configuration Repository

This is a Git repository that must contain a single file -
`configuration.yml`. The configuration file has an `applications` key that contains a list of the applications that will be running in the dev-env, each specifying the URL of their Git repository (the `repo` key) plus which branch/tag/commit should be initially checked out (the `ref` key). The name of the application should match the repository name so that things dependent on the directory structure like volume mappings in the app's docker-compose-fragment.yml will work correctly.
`configuration.yml`. The configuration file has an `applications` key that contains a list of the applications that will be running in the dev-env, each specifying the URL of their Git repository (the `repo` key), which branch/tag/commit should be initially checked out (the `ref` key), and optionally the name of a Compose fragment variant to use (the `variant` key).

The name of the application should match the repository name so that things dependent on the directory structure like volume mappings in the app's compose-fragment.yml will work correctly.

Any Compose fragment variant name, if defined, should also match a variant Compose file in the repository, named in the format `compose-fragment.xyz.yml` (where "xyz" is the variant name).
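For illustration, a `configuration.yml` along these lines would satisfy the description above (the application names and repository URLs are hypothetical, and the exact schema should be checked against a real configuration repository):

```yaml
applications:
  my-first-app:
    repo: https://github.com/example-org/my-first-app.git
    ref: develop
  my-second-app:
    repo: https://github.com/example-org/my-second-app.git
    ref: v1.2.0
    variant: slim  # optional - selects compose-fragment.slim.yml
```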

The application repositories will be pulled and updated on each `up` or `reload`, _unless_ the current checked out branch does not match the one in the configuration. This allows you to create and work in feature branches while remaining in full control of updates and merges.

@@ -66,6 +70,8 @@ For an application repository to leverage the full power of the dev-env...

Docker containers are used to run all apps. So some files are needed to support that.

#### Fragments

##### `/fragments/compose-fragment.yml`

This is used by the environment to construct an application container and then launch it. Standard [Compose Spec](https://github.com/compose-spec/compose-spec/blob/master/spec.md) structure applies - but some recommendations are:
@@ -80,6 +86,14 @@ Although generally an application should only have itself in its compose fragmen

Note that when including directives such as a Dockerfile build location or host volume mapping for the source code, the Compose context root `.` is considered to be the dev-env's /apps/ folder, not the location of the fragment. Ensure relative paths are set accordingly.
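As a rough sketch of such a fragment (the service name, paths and settings here are illustrative assumptions, not taken from the repository's snippet):

```yaml
services:
  my-app:
    build:
      # `.` is the dev-env's /apps/ folder, not the location of this fragment
      context: .
      dockerfile: my-app/Dockerfile
    volumes:
      - ./my-app:/src
    environment:
      APP_ENV: development
```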

##### `/fragments/compose-fragment.<variant>.yml`

Optional variants of `compose-fragment.yml` designed for specific use cases, such as offering slimmed-down or extended configurations. The syntax for a variant compose fragment is the same as that for a default unversioned compose fragment (see above).

Replace `<variant>` with the specific name of the variant configuration required, such as `slim` or `full`.

A default `compose-fragment.yml` is still required in addition to any optional variants.

[Example](snippets/compose-fragment.yml)

##### `/fragments/docker-compose-fragment.yml` and `/fragments/docker-compose-fragment.3.7.yml`
@@ -88,10 +102,14 @@ Optional variants of `compose-fragment.yml` with a version of `2` and `3.7` resp

If the environment cannot identify a universal compose file version, then provisioning will fail.

Compose fragment variants are unsupported when used in conjunction with older compose fragment versions.

[2 Example](snippets/docker-compose-fragment.yml)

[3.7 Example](snippets/docker-compose-fragment.3.7.yml)

#### Other

##### `/Dockerfile`

This is a file that defines the application's Docker image. The Compose fragment may point to this file. Extend an existing image and install/set whatever is needed to ensure that containers created from the image will run. See the [Dockerfile reference](https://docs.docker.com/engine/reference/builder/) for more information.
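A minimal sketch of such a Dockerfile, assuming a hypothetical Ruby application (the base image, paths and commands are illustrative only):

```dockerfile
# Extend an existing image and install what the app needs to run
FROM ruby:3.2-slim
WORKDIR /src
# Copy dependency manifests first so Docker can cache the install layer
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
CMD ["bundle", "exec", "rackup", "--host", "0.0.0.0"]
```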
@@ -111,9 +129,9 @@ The list of allowable commodity values is:
4. elasticsearch5
5. nginx
6. rabbitmq
7. redis
8. swagger
9. wiremock
10. squid
11. auth
12. cadence
@@ -122,14 +140,19 @@ The list of allowable commodity values is:
15. ibmmq
16. localstack

* The file may optionally also indicate that one or more services are resource intensive ("expensive") when starting up. The dev env will start those containers seperately - 3 at a time - and wait until each are declared healthy (or crash and get restarted 10 times) before starting any more. This requires a healthcheck command specified here or in the Dockerfile/docker-compose-fragment (in which case just use 'docker' in this file).
* If one of these expensive services prefers another one to be considered "healthy" before a startup attempt is made (such as a database, to ensure immediate connectivity and no expensive restarts) then the dependent service can be specified here, with a healthcheck command following the same rules as above.
The file may optionally also indicate that one or more services are resource intensive ("expensive") when starting up. The dev env will start those containers separately - 3 at a time - and wait until each is declared healthy (or crashes and gets restarted 10 times) before starting any more.

This requires a healthcheck command specified here or in the Dockerfile/docker-compose-fragment (in which case just use 'docker' in this file).

If one of these expensive services prefers another one to be considered "healthy" before a startup attempt is made (such as a database, to ensure immediate connectivity and no expensive restarts) then the dependent service can be specified here, with a healthcheck command following the same rules as above.
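The startup rules above might be expressed along these lines (the key names, service names and healthcheck commands are assumptions based on the prose; treat the linked example below as authoritative):

```yaml
commodities:
  - postgres
  - redis
expensive_services:
  my-database:
    # Healthcheck command given directly here...
    healthcheck: pg_isready -U postgres
  my-app:
    # ...or 'docker' if the Dockerfile/compose fragment already defines one
    healthcheck: docker
```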

[Example](snippets/app_configuration.yml)

#### Commodities

Individual commodities may require further files in order to set them up correctly even after being specified in the application's `configuration.yml`, these are detailed below. Note that unless specified, any fragment files will only be run once. This is controlled by a generated `.commodities.yml` file in the root of the this repository, which you can change to allow the files to be read again - useful if you've had to delete and recreate a commodity container.
Individual commodities may require further files in order to set them up correctly even after being specified in the application's `configuration.yml`; these are detailed below.

Note that unless specified, any fragment files will only be run once. This is controlled by a generated `.commodities.yml` file in the root of this repository, which you can change to allow the files to be read again - useful if you've had to delete and recreate a commodity container.

##### PostgreSQL

@@ -175,7 +198,9 @@ The ports 9300 and 9302 are exposed on the host.

This file forms part of an NGINX configuration file. It will be merged into the server directive of the main configuration file.

Important - if your app is adding itself as a proxied location{} behind NGINX, NGINX must start AFTER your app, otherwise it will error with a host not found. So your app's docker-compose-fragment.yml must actually specify NGINX as a service and set the depends_on variable with your app's name. Compose will automatically merge this with the dev-env's own NGINX fragment. See the end of the [example Compose fragment](snippets/docker-compose-fragment.yml) for the exact code.
Important - if your app is adding itself as a proxied location{} behind NGINX, NGINX must start AFTER your app, otherwise it will fail with a "host not found" error. So your app's docker-compose-fragment.yml must actually specify NGINX as a service and set the depends_on variable with your app's name.

Compose will automatically merge this with the dev-env's own NGINX fragment. See the end of the [example Compose fragment](snippets/docker-compose-fragment.yml) for the exact code.
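A sketch of that arrangement in a fragment (the `my-app` service name is illustrative):

```yaml
services:
  my-app:
    # ... your app's normal definition ...
  nginx:
    depends_on:
      - my-app
```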

[Example](snippets/nginx-fragment.conf)

@@ -262,18 +287,20 @@ No users are added to the LDAP database by default. To add users, groups, etc, a

Keycloak is an identity and access management system supporting the OAuth and OpenID Connect protocols. This container is built containing a `development` realm configured to use the OpenLDAP service to perform user authentication.

When running, Keycloak's admin console is available at http://localhost:8180/ with username `admin` and password `admin`.
When running, Keycloak's admin console is available at <http://localhost:8180/> with username `admin` and password `admin`.

Applications using OAuth flows or the OpenID Connect protocol can use Keycloak for this purpose with the following configuration parameters:

* Client ID: `oauth-client`
* Authentication URL: http://localhost:8180/auth/realms/development/protocol/openid-connect/auth (must be resolvable by the user agent, hence we use `localhost` assuming that the user agent will be a web browser on the host system)
* Token URL: http://keycloak:8080/auth/realms/development/protocol/openid-connect/token (use `localhost:8180` if connecting from the host system)
* OpenID Connect configuration endpoint: http://keycloak:8080/auth/realms/development/.well-known/openid-configuration (use `localhost:8180` if connecting from the host system)
* Authentication URL: <http://localhost:8180/auth/realms/development/protocol/openid-connect/auth> (must be resolvable by the user agent, hence we use `localhost` assuming that the user agent will be a web browser on the host system)
* Token URL: <http://keycloak:8080/auth/realms/development/protocol/openid-connect/token> (use `localhost:8180` if connecting from the host system)
* OpenID Connect configuration endpoint: <http://keycloak:8080/auth/realms/development/.well-known/openid-configuration> (use `localhost:8180` if connecting from the host system)
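Since the three endpoints above differ only in base URL and path, a small helper can assemble them. This is an illustrative sketch, not code from the repository:

```python
def keycloak_endpoints(base: str, realm: str) -> dict:
    """Build the standard OpenID Connect endpoint URLs for a Keycloak realm."""
    root = f"{base}/auth/realms/{realm}"
    return {
        "auth": f"{root}/protocol/openid-connect/auth",
        "token": f"{root}/protocol/openid-connect/token",
        "config": f"{root}/.well-known/openid-configuration",
    }

# Browser-facing flows resolve via the host port; server-to-server calls
# use the internal Docker network name.
browser = keycloak_endpoints("http://localhost:8180", "development")
internal = keycloak_endpoints("http://keycloak:8080", "development")
```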

JWT tokens issued from the `development` realm have been configured to mimic those issued by Microsoft ADFS servers. In particular, the LDAP `cn` field is mapped to the `UserName` claim in JWT tokens along with the `Office` claim mapped from the `physicalDeliveryOfficeName` in the LDAP database and the `group` claim listing the user's group memberships.

A [JSON export](scripts/docker/auth/keycloak/development_realm.json) of the `development` realm is used to configure the realm. If further configuration of the realm is required, you can make changes in the admin console and re-export the realm using the procedure described in "Exporting a realm" [here](https://hub.docker.com/r/jboss/keycloak/#exporting-a-realm). The exported JSON can then be merged back into this repository and reused.
A [JSON export](scripts/docker/auth/keycloak/development_realm.json) of the `development` realm is used to configure the realm. If further configuration of the realm is required, you can make changes in the admin console and re-export the realm using the procedure described in "Exporting a realm" [here](https://hub.docker.com/r/jboss/keycloak/#exporting-a-realm).

The exported JSON can then be merged back into this repository and reused.

###### Cadence

@@ -296,12 +323,15 @@ From the host system:
cadence core services.

*Running Cadence web locally*
- In a web browser enter http://localhost:5004
- In a web browser enter <http://localhost:5004>

###### Localstack

[Localstack](https://localstack.cloud) is a cloud stack testing and mocking framework for developing against various AWS services.

A default Localstack configuration is provided with a minimal number of enabled services available (S3 only at present). Localstack does not *require* the use of any other external configuration file (as applications can manage buckets programatically through methods such as the [AWS SDK](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/examples-s3-buckets.html)). However, if additional configuration (such as new buckets) are necessary before application startup, you can use a `localstack-init-fragment.sh` to perform this provisioning; an example of which is provided [here](snippets/localstack-init-fragment.sh).
A default Localstack configuration is provided with a minimal number of enabled services available (S3 only at present). Localstack does not _require_ the use of any other external configuration file (as applications can manage buckets programmatically through methods such as the [AWS SDK](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/examples-s3-buckets.html)).

However, if additional configuration (such as new buckets) is necessary before application startup, you can use a `localstack-init-fragment.sh` to perform this provisioning; an example is provided [here](snippets/localstack-init-fragment.sh).
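By way of illustration, such an init fragment might create a bucket before the apps come up (the bucket name is hypothetical; `awslocal` is Localstack's wrapper around the AWS CLI):

```sh
#!/usr/bin/env bash
set -euo pipefail
# Create a bucket so dependent apps find it at startup
awslocal s3 mb s3://my-app-uploads
```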

Localstack is available at <http://localstack:4566> within the Docker network, and <http://localhost:4566> on the host.
