
adopt GDB #1255

Open · wants to merge 2 commits into base: main

Conversation

sergiyvamz (Contributor)

Summary

Adopt Aurora Global Database

  • expose a new GDB driver dialect
  • adjust the failover and failover2 plugins and related components
  • add support for the Global Writer Endpoint and the initialConnection plugin

Additional Reviewers

@karenc-bq
@aaron-congo
@aaronchung-bitquill

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

@sergiyvamz sergiyvamz added the wip Pull request that is a work in progress label Jan 22, 2025

github-actions bot commented Jan 22, 2025

Qodana Community for JVM

It seems all right 👌

No new problems were found according to the checks applied

💡 Qodana analysis was run in the pull request mode: only the changed files were checked

View the detailed Qodana report

To be able to view the detailed Qodana report, you can either:

  1. Register at Qodana Cloud and configure the action
  2. Use GitHub Code Scanning with Qodana
  3. Host Qodana report at GitHub Pages
  4. Inspect and use qodana.sarif.json (see the Qodana SARIF format for details)

To get *.log files or any other Qodana artifacts, run the action with the upload-result option set to true
so that the action uploads the files as job artifacts:

      - name: 'Qodana Scan'
        uses: JetBrains/[email protected]
        with:
          upload-result: true
Contact Qodana team

Contact us at [email protected]

cleanup
rebase
fix unit tests; code style
docs
fix 'failover' plugin for GDB; add PG support for GDB
adopt GDB support including Global Writer Endpoint to failover2
@sergiyvamz sergiyvamz removed the wip Pull request that is a work in progress label Jan 23, 2025
@@ -37,6 +37,10 @@ With the `failover` plugin, the downtime during certain DB cluster operations, s

Visit [this page](./docs/using-the-jdbc-driver/SupportForRDSMultiAzDBCluster.md) for more details.

### Using the AWS JDBC Driver with Amazon Aurora Global Databases

This driver supports in-region `failover` and between-regions `planned failover` and `switchover` of [Amazon Aurora Global Databases](https://aws.amazon.com/ru/rds/aurora/global-database/). A [Global Writer Endpoint](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database-connecting.html) is also recognized and can be handled to minimize potential stale DNS issue. Please check [failover plugin](./docs/using-the-jdbc-driver/using-plugins/UsingTheFailoverPlugin.md), [failover2 plugin](./docs/using-the-jdbc-driver/using-plugins/UsingTheFailover2Plugin.md) and [Aurora Initial Connection Strategy plugin](./docs/using-the-jdbc-driver/using-plugins/UsingTheAuroraInitialConnectionStrategyPlugin.md) for more information.
Contributor
Suggested change
This driver supports in-region `failover` and between-regions `planned failover` and `switchover` of [Amazon Aurora Global Databases](https://aws.amazon.com/ru/rds/aurora/global-database/). A [Global Writer Endpoint](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database-connecting.html) is also recognized and can be handled to minimize potential stale DNS issue. Please check [failover plugin](./docs/using-the-jdbc-driver/using-plugins/UsingTheFailoverPlugin.md), [failover2 plugin](./docs/using-the-jdbc-driver/using-plugins/UsingTheFailover2Plugin.md) and [Aurora Initial Connection Strategy plugin](./docs/using-the-jdbc-driver/using-plugins/UsingTheAuroraInitialConnectionStrategyPlugin.md) for more information.
This driver supports in-region `failover` and between-regions `planned failover` and `switchover` of [Amazon Aurora Global Databases](https://aws.amazon.com/ru/rds/aurora/global-database/). A [Global Writer Endpoint](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database-connecting.html) is also recognized and can be handled to minimize potential stale DNS issues. Please check [failover plugin](./docs/using-the-jdbc-driver/using-plugins/UsingTheFailoverPlugin.md), [failover2 plugin](./docs/using-the-jdbc-driver/using-plugins/UsingTheFailover2Plugin.md) and [Aurora Initial Connection Strategy plugin](./docs/using-the-jdbc-driver/using-plugins/UsingTheAuroraInitialConnectionStrategyPlugin.md) for more information.

@@ -5,6 +5,8 @@ When this plugin is enabled, if the initial connection is to a reader cluster en

This plugin also helps retrieve connections more reliably. When a user connects to a cluster endpoint, the actual instance for a new connection is resolved by DNS. During failover, the cluster elects another instance to be the writer. While DNS is updating, which can take up to 40-60 seconds, a user who tries to connect to the cluster endpoint may be connected to an old node. This plugin helps by replacing the out-of-date endpoint while DNS is updating.

In case of Aurora Global Database, a user has an option to use an [Aurora Global Writer Endpoint](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database-connecting.html). Global Writer Endpoint makes a user application configuration easier. However, similar to a cluster writer endpoint mentioned above, it can also be affected by DNS updates. The Aurora Initial Connection Strategy Plugin recognizes an Aurora Global Writer Endpoint and substitutes it with a current writer endpoint.
Contributor
Suggested change
In case of Aurora Global Database, a user has an option to use an [Aurora Global Writer Endpoint](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database-connecting.html). Global Writer Endpoint makes a user application configuration easier. However, similar to a cluster writer endpoint mentioned above, it can also be affected by DNS updates. The Aurora Initial Connection Strategy Plugin recognizes an Aurora Global Writer Endpoint and substitutes it with a current writer endpoint.
When using Aurora Global Database, the user has an option to use an [Aurora Global Writer Endpoint](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database-connecting.html). The Global Writer Endpoint makes application configuration easier. However, similar to the cluster writer endpoint mentioned above, it can also be affected by DNS updates. The Aurora Initial Connection Strategy Plugin recognizes an Aurora Global Writer Endpoint and substitutes it with the current writer endpoint.
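To make the substitution idea above concrete, here is a minimal sketch in Java. All names (`EndpointSubstitution`, the `.global.` check, the sample hosts) are illustrative assumptions, not the driver's actual API or endpoint format; the real plugin resolves the current writer through cluster topology, not a string check.

```java
// Hypothetical sketch of the endpoint-substitution idea behind the
// Aurora Initial Connection Strategy plugin. Names are illustrative,
// not the driver's actual implementation.
final class EndpointSubstitution {

    // Simplified assumption for this sketch: a Global Writer Endpoint
    // contains ".global." in its DNS name.
    static boolean isGlobalWriterEndpoint(String host) {
        return host.contains(".global.");
    }

    // If the user connected via the Global Writer Endpoint, substitute the
    // currently known writer instance endpoint to avoid stale DNS results.
    static String resolveConnectHost(String configuredHost, String currentWriterHost) {
        return isGlobalWriterEndpoint(configuredHost) ? currentWriterHost : configuredHost;
    }
}
```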

| Parameter | Value | Required | Description | Default Value |
|---|:---:|:---:|:---|---|
| `telemetryFailoverAdditionalTopTrace` | Boolean | No | Allows the driver to produce an additional telemetry span associated with failover. Such a span helps facilitate telemetry analysis in AWS CloudWatch. | `false` |
| `failoverMode` | String | No | Defines a mode for failover process. Failover process may prioritize nodes with different roles and connect to them. Possible values: <br><br>- `strict-writer` - Failover process follows writer node and connects to a new writer when it changes.<br>- `reader-or-writer` - During failover, the driver tries to connect to any available/accessible reader node. If no reader is available, the driver will connect to a writer node. This logic mimics the logic of the Aurora read-only cluster endpoint.<br>- `strict-reader` - During failover, the driver tries to connect to any available reader node. If no reader is available, the driver raises an error. Reader failover to a writer node will only be allowed for single-node clusters. This logic mimics the logic of the Aurora read-only cluster endpoint.<br><br>If this parameter is omitted, default value depends on connection url. For Aurora read-only cluster endpoint, it's set to `reader-or-writer`. Otherwise, it's `strict-writer`. | Default value depends on connection url. For Aurora read-only cluster endpoint, it's set to `reader-or-writer`. Otherwise, it's `strict-writer`. |
Contributor

We say reader-or-writer and strict-reader both mimic the logic of the read-only cluster endpoint, but the RO endpoint will not connect you to a writer AFAIK. Also, we describe the default value twice here so can probably remove the duplicate in the description

Suggested change
| `failoverMode` | String | No | Defines a mode for failover process. Failover process may prioritize nodes with different roles and connect to them. Possible values: <br><br>- `strict-writer` - Failover process follows writer node and connects to a new writer when it changes.<br>- `reader-or-writer` - During failover, the driver tries to connect to any available/accessible reader node. If no reader is available, the driver will connect to a writer node. This logic mimics the logic of the Aurora read-only cluster endpoint.<br>- `strict-reader` - During failover, the driver tries to connect to any available reader node. If no reader is available, the driver raises an error. Reader failover to a writer node will only be allowed for single-node clusters. This logic mimics the logic of the Aurora read-only cluster endpoint.<br><br>If this parameter is omitted, default value depends on connection url. For Aurora read-only cluster endpoint, it's set to `reader-or-writer`. Otherwise, it's `strict-writer`. | Default value depends on connection url. For Aurora read-only cluster endpoint, it's set to `reader-or-writer`. Otherwise, it's `strict-writer`. |
| `failoverMode` | String | No | Defines a mode for failover process. Failover process may prioritize nodes with different roles and connect to them. Possible values: <br><br>- `strict-writer` - Failover process follows writer node and connects to a new writer when it changes.<br>- `reader-or-writer` - During failover, the driver tries to connect to any available/accessible reader node. If no reader is available, the driver will connect to a writer node.<br>- `strict-reader` - During failover, the driver tries to connect to any available reader node. If no reader is available, the driver raises an error. Reader failover to a writer node will only be allowed for single-node clusters. This logic mimics the logic of the Aurora read-only cluster endpoint.<br> | Default value depends on connection url. For Aurora read-only cluster endpoint, it's set to `reader-or-writer`. Otherwise, it's `strict-writer`. |
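For illustration, `failoverMode` can be supplied through connection properties. This is a sketch only: it builds the properties without opening a real connection, and the plugin code name `failover2` plus the parameter names follow the documentation quoted above.

```java
import java.util.Properties;

// Illustrative only: passing failoverMode to the driver via connection
// properties. No database connection is opened here.
final class FailoverModeExample {
    static Properties readerFailoverProps() {
        Properties props = new Properties();
        // Enable the failover2 plugin (plugin code per the driver docs).
        props.setProperty("wrapperPlugins", "failover2");
        // Connect to readers only; raise an error if none is available.
        props.setProperty("failoverMode", "strict-reader");
        return props;
    }
}
```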

| Parameter | Value | Required | Description | Default Value |
|---------------------------------------|:-------:|:--------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `failoverMode` | String | No | Defines a mode for failover process. Failover process may prioritize nodes with different roles and connect to them. Possible values: <br><br>- `strict-writer` - Failover process follows writer node and connects to a new writer when it changes.<br>- `reader-or-writer` - During failover, the driver tries to connect to any available/accessible reader node. If no reader is available, the driver will connect to a writer node. This logic mimics the logic of the Aurora read-only cluster endpoint.<br>- `strict-reader` - During failover, the driver tries to connect to any available reader node. If no reader is available, the driver raises an error. Reader failover to a writer node will only be allowed for single-node clusters. This logic mimics the logic of the Aurora read-only cluster endpoint.<br><br>If this parameter is omitted, default value depends on connection url. For Aurora read-only cluster endpoint, it's set to `reader-or-writer`. Otherwise, it's `strict-writer`. | Default value depends on connection url. For Aurora read-only cluster endpoint, it's set to `reader-or-writer`. Otherwise, it's `strict-writer`. |
| `clusterInstanceHostPattern` | String | If connecting using an IP address or custom domain URL: Yes<br><br>Otherwise: No | This parameter is not required unless connecting to an AWS RDS cluster via an IP address or custom domain URL. In those cases, this parameter specifies the cluster instance DNS pattern that will be used to build a complete instance endpoint. A "?" character in this pattern should be used as a placeholder for the DB instance identifiers of the instances in the cluster. See [here](./UsingTheFailoverPlugin.md#host-pattern) for more information. <br/><br/> This parameter is ignored for Aurora Global Databases. <br/><br/>Example: `?.my-domain.com`, `any-subdomain.?.my-domain.com:9999`<br/><br/>Use case Example: If your cluster instance endpoints follow this pattern:`instanceIdentifier1.customHost`, `instanceIdentifier2.customHost`, etc. and you want your initial connection to be to `customHost:1234`, then your connection string should look like this: `jdbc:aws-wrapper:mysql://customHost:1234/test?clusterInstanceHostPattern=?.customHost` | If the provided connection string is not an IP address or custom domain, the JDBC Driver will automatically acquire the cluster instance host pattern from the customer-provided connection string. |
Contributor

Suggested change
| `clusterInstanceHostPattern` | String | If connecting using an IP address or custom domain URL: Yes<br><br>Otherwise: No | This parameter is not required unless connecting to an AWS RDS cluster via an IP address or custom domain URL. In those cases, this parameter specifies the cluster instance DNS pattern that will be used to build a complete instance endpoint. A "?" character in this pattern should be used as a placeholder for the DB instance identifiers of the instances in the cluster. See [here](./UsingTheFailoverPlugin.md#host-pattern) for more information. <br/><br/> This parameter is ignored for Aurora Global Databases. <br/><br/>Example: `?.my-domain.com`, `any-subdomain.?.my-domain.com:9999`<br/><br/>Use case Example: If your cluster instance endpoints follow this pattern:`instanceIdentifier1.customHost`, `instanceIdentifier2.customHost`, etc. and you want your initial connection to be to `customHost:1234`, then your connection string should look like this: `jdbc:aws-wrapper:mysql://customHost:1234/test?clusterInstanceHostPattern=?.customHost` | If the provided connection string is not an IP address or custom domain, the JDBC Driver will automatically acquire the cluster instance host pattern from the customer-provided connection string. |
| `clusterInstanceHostPattern` | String | If connecting using an IP address or custom domain URL: Yes<br><br>Otherwise: No | This parameter is not required unless connecting to an AWS RDS cluster via an IP address or custom domain URL. In those cases, this parameter specifies the cluster instance DNS pattern that will be used to build a complete instance endpoint. A "?" character in this pattern should be used as a placeholder for the DB instance identifiers of the instances in the cluster. See [here](./UsingTheFailoverPlugin.md#host-pattern) for more information. <br/><br/> If you are using Aurora Global Database, the `globalClusterInstanceHostPattern` parameter should be used instead of this one. <br/><br/>Example: `?.my-domain.com`, `any-subdomain.?.my-domain.com:9999`<br/><br/>Use case Example: If your cluster instance endpoints follow this pattern:`instanceIdentifier1.customHost`, `instanceIdentifier2.customHost`, etc. and you want your initial connection to be to `customHost:1234`, then your connection string should look like this: `jdbc:aws-wrapper:mysql://customHost:1234/test?clusterInstanceHostPattern=?.customHost` | If the provided connection string is not an IP address or custom domain, the JDBC Driver will automatically acquire the cluster instance host pattern from the customer-provided connection string. |
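The "?" placeholder behavior described in this row can be sketched as a one-line substitution. The helper name is hypothetical; the driver's real logic also validates the pattern.

```java
// Sketch of how the "?" placeholder in clusterInstanceHostPattern maps an
// instance identifier to a complete instance endpoint. Illustrative helper,
// not the driver's actual code.
final class HostPatternUtil {
    static String buildInstanceEndpoint(String pattern, String instanceId) {
        return pattern.replace("?", instanceId);
    }
}
```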

| Parameter | Value | Required | Description | Default Value |
|---|:---:|:---:|:---|---|
| `failoverMode` | String | No | Defines a mode for failover process. Failover process may prioritize nodes with different roles and connect to them. Possible values: <br><br>- `strict-writer` - Failover process follows writer node and connects to a new writer when it changes.<br>- `reader-or-writer` - During failover, the driver tries to connect to any available/accessible reader node. If no reader is available, the driver will connect to a writer node. This logic mimics the logic of the Aurora read-only cluster endpoint.<br>- `strict-reader` - During failover, the driver tries to connect to any available reader node. If no reader is available, the driver raises an error. Reader failover to a writer node will only be allowed for single-node clusters. This logic mimics the logic of the Aurora read-only cluster endpoint.<br><br>If this parameter is omitted, default value depends on connection url. For Aurora read-only cluster endpoint, it's set to `reader-or-writer`. Otherwise, it's `strict-writer`. | Default value depends on connection url. For Aurora read-only cluster endpoint, it's set to `reader-or-writer`. Otherwise, it's `strict-writer`. |
| `clusterInstanceHostPattern` | String | If connecting using an IP address or custom domain URL: Yes<br><br>Otherwise: No | This parameter is not required unless connecting to an AWS RDS cluster via an IP address or custom domain URL. In those cases, this parameter specifies the cluster instance DNS pattern that will be used to build a complete instance endpoint. A "?" character in this pattern should be used as a placeholder for the DB instance identifiers of the instances in the cluster. See [here](./UsingTheFailoverPlugin.md#host-pattern) for more information. <br/><br/> This parameter is ignored for Aurora Global Databases. <br/><br/>Example: `?.my-domain.com`, `any-subdomain.?.my-domain.com:9999`<br/><br/>Use case Example: If your cluster instance endpoints follow this pattern:`instanceIdentifier1.customHost`, `instanceIdentifier2.customHost`, etc. and you want your initial connection to be to `customHost:1234`, then your connection string should look like this: `jdbc:aws-wrapper:mysql://customHost:1234/test?clusterInstanceHostPattern=?.customHost` | If the provided connection string is not an IP address or custom domain, the JDBC Driver will automatically acquire the cluster instance host pattern from the customer-provided connection string. |
| `globalClusterInstanceHostPatterns` | String | For Global Databases: Yes <br><br>Otherwise: No | This parameter is similar to `clusterInstanceHostPattern` parameter but it provides a coma-separated list of instance host patterns. This parameter is required for Aurora Global Databases. The list should contains host pattern for each region of a global database. Each host pattern can be based on a RDS instance endpoint or a custom user domain name. If custom domain name is used, an instance template pattern should be prefixed with a AWS region name in square brackets (`[<aws-region-name>]`). <br/><br/> The parameter is ignored for other types of databases (Aurora Clusters, RDS Clusters, plain RDS databases, etc.).<br/><br/>Example: for an Aurora Global Database with two AWS regions `us-east-2` and `us-west-2`, the parameter value is `?.XYZ1.us-east-2.rds.amazonaws.com,?.XYZ2.us-west-2.rds.amazonaws.com`. Please pay attention that user identifiers are different for different AWS regions (`XYZ1` and `XYZ2` as in the example above). <br/><br/> In case of custom domain names, the parameter value can be `[us-east-2]?.customHost,[us-west-2]?.anotherCustomHost`. Port can be also provided: `[us-east-2]?.customHost:8888,[us-west-2]?.anotherCustomHost:9999` | |
Contributor

Suggested change
| `globalClusterInstanceHostPatterns` | String | For Global Databases: Yes <br><br>Otherwise: No | This parameter is similar to `clusterInstanceHostPattern` parameter but it provides a coma-separated list of instance host patterns. This parameter is required for Aurora Global Databases. The list should contains host pattern for each region of a global database. Each host pattern can be based on a RDS instance endpoint or a custom user domain name. If custom domain name is used, an instance template pattern should be prefixed with a AWS region name in square brackets (`[<aws-region-name>]`). <br/><br/> The parameter is ignored for other types of databases (Aurora Clusters, RDS Clusters, plain RDS databases, etc.).<br/><br/>Example: for an Aurora Global Database with two AWS regions `us-east-2` and `us-west-2`, the parameter value is `?.XYZ1.us-east-2.rds.amazonaws.com,?.XYZ2.us-west-2.rds.amazonaws.com`. Please pay attention that user identifiers are different for different AWS regions (`XYZ1` and `XYZ2` as in the example above). <br/><br/> In case of custom domain names, the parameter value can be `[us-east-2]?.customHost,[us-west-2]?.anotherCustomHost`. Port can be also provided: `[us-east-2]?.customHost:8888,[us-west-2]?.anotherCustomHost:9999` | |
| `globalClusterInstanceHostPatterns` | String | For Global Databases: Yes <br><br>Otherwise: No | This parameter is similar to the `clusterInstanceHostPattern` parameter but it provides a comma-separated list of instance host patterns. This parameter is required for Aurora Global Databases. The list should contain host patterns for each region of the global database. Each host pattern can be based on an RDS instance endpoint or a custom user domain name. If a custom domain name is used, the instance template pattern should be prefixed with the AWS region name in square brackets (`[<aws-region-name>]`). <br/><br/> The parameter is ignored for other types of databases (Aurora Clusters, RDS Clusters, plain RDS databases, etc.).<br/><br/>Example: for an Aurora Global Database with two AWS regions `us-east-2` and `us-west-2`, the parameter value should be set to `?.XYZ1.us-east-2.rds.amazonaws.com,?.XYZ2.us-west-2.rds.amazonaws.com`. Please note that user identifiers are different for different AWS regions (`XYZ1` and `XYZ2` in the example above). <br/><br/> Example: if using custom domain names, the parameter value should be similar to `[us-east-2]?.customHost,[us-west-2]?.anotherCustomHost`. The port can also be provided: `[us-east-2]?.customHost:8888,[us-west-2]?.anotherCustomHost:9999` | |
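A sketch of how the comma-separated value with optional `[<aws-region-name>]` prefixes described in this row could be split apart. The class and method names are hypothetical and the parsing is simplified (for example, it keys patterns without a region prefix under an empty string).

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative parsing of a globalClusterInstanceHostPatterns value: a
// comma-separated list where each entry is optionally prefixed with an AWS
// region in square brackets. Not the driver's actual code.
final class GlobalHostPatterns {
    // Maps the region prefix (or "" when absent) to its host pattern.
    static Map<String, String> parse(String value) {
        Map<String, String> result = new LinkedHashMap<>();
        for (String entry : value.split(",")) {
            String e = entry.trim();
            if (e.startsWith("[")) {
                int close = e.indexOf(']');
                result.put(e.substring(1, close), e.substring(close + 1));
            } else {
                result.put("", e);
            }
        }
        return result;
    }
}
```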

final String port = matcher.group("port");

if (StringUtils.isNullOrEmpty(host)) {
  throw new IllegalArgumentException(String.format("Can't recognize host in '%s'", urlWithRegionPrefix));
Contributor

Suggested change
throw new IllegalArgumentException(String.format("Can't recognize host in '%s'", urlWithRegionPrefix));
throw new IllegalArgumentException(String.format("Can't parse host from '%s'", urlWithRegionPrefix));

if (StringUtils.isNullOrEmpty(awsRegion)) {
  awsRegion = rdsUtils.getRdsRegion(host);
  if (StringUtils.isNullOrEmpty(awsRegion)) {
    throw new IllegalArgumentException(String.format("Can't recognize AWS region in '%s'", urlWithRegionPrefix));
Contributor

Suggested change
throw new IllegalArgumentException(String.format("Can't recognize AWS region in '%s'", urlWithRegionPrefix));
throw new IllegalArgumentException(String.format("Can't parse AWS region from '%s'", urlWithRegionPrefix));


final Matcher matcher = URL_WITH_REGION_PATTERN.matcher(urlWithRegionPrefix);
if (!matcher.find()) {
  throw new IllegalArgumentException(String.format("Can't recognize AWS region in '%s'", urlWithRegionPrefix));
Contributor

Can you please also move these messages to the .properties file?
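For context, the region-prefix parsing being reviewed could look roughly like the sketch below. The regex, class, and group names here are a hypothetical reconstruction, not the driver's actual `URL_WITH_REGION_PATTERN`; it parses values such as `[us-east-2]?.customHost:8888` into region, host, and port.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical reconstruction of the region-prefix parsing discussed above.
// The driver's real pattern and error handling may differ.
final class RegionPrefixParser {
    private static final Pattern URL_WITH_REGION_PATTERN =
        Pattern.compile("^(\\[(?<region>[a-z0-9-]+)\\])?(?<host>[^:\\[\\]]+)(:(?<port>\\d+))?$");

    // Returns {region, host, port}; region and port may be null when absent.
    static String[] parse(String urlWithRegionPrefix) {
        Matcher matcher = URL_WITH_REGION_PATTERN.matcher(urlWithRegionPrefix);
        if (!matcher.matches()) {
            throw new IllegalArgumentException(
                String.format("Can't recognize host in '%s'", urlWithRegionPrefix));
        }
        return new String[] {
            matcher.group("region"), matcher.group("host"), matcher.group("port")};
    }
}
```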

if (key.equals(url)) {
  return new ClusterSuggestedResult(url, isPrimaryCluster);

for (HostSpec hostSpec : hosts) {
  // TODO: use template pattern port?
Contributor

Is this going to be investigated/addressed later or can this comment be removed?

@@ -166,6 +166,14 @@ public class RdsUtils {
".*(?<prefix>-green-[0-9a-z]{6})\\..*",
Pattern.CASE_INSENSITIVE);

// TODO: check GDB writer endpoint for China regions
Contributor

Is this going to be investigated later?

Contributor

aaron-congo commented Jan 24, 2025

I'm trying to remember: can we add integration tests now, or are we blocked for some reason? This could be done in a separate PR, as this PR is already fairly large.
