diff --git a/articles/active-directory/develop/v2-protocols-oidc.md b/articles/active-directory/develop/v2-protocols-oidc.md
index 195f099a461ea..734d813cd3ecb 100644
--- a/articles/active-directory/develop/v2-protocols-oidc.md
+++ b/articles/active-directory/develop/v2-protocols-oidc.md
@@ -284,7 +284,7 @@ Review the [UserInfo documentation](userinfo.md#calling-the-api) to look over ho
When you want to sign out the user from your app, it isn't sufficient to clear your app's cookies or otherwise end the user's session. You must also redirect the user to the Microsoft identity platform to sign out. If you don't do this, the user reauthenticates to your app without entering their credentials again, because they will have a valid single sign-in session with the Microsoft identity platform.
-You can redirect the user to the `end_session_endpoint` listed in the OpenID Connect metadata document:
+You can redirect the user to the `end_session_endpoint` (which supports both HTTP GET and POST requests) listed in the OpenID Connect metadata document:
```HTTP
GET https://login.microsoftonline.com/common/oauth2/v2.0/logout?
diff --git a/articles/active-directory/index.yml b/articles/active-directory/index.yml
index f1581c1d90942..0d45f162f4322 100644
--- a/articles/active-directory/index.yml
+++ b/articles/active-directory/index.yml
@@ -12,7 +12,7 @@ metadata:
ms.collection: M365-identity-device-management
author: rolyon
ms.author: rolyon
- manager: karenhoran
+ manager: CelesteDG
ms.date: 01/25/2022
highlightedContent:
@@ -322,4 +322,4 @@ additionalContent:
url: /powershell/module/azuread/
- title: Azure CLI commands for Azure AD
summary: Find the Azure AD commands in the CLI reference.
- url: /cli/azure/ad
\ No newline at end of file
+ url: /cli/azure/ad
diff --git a/articles/api-management/api-management-advanced-policies.md b/articles/api-management/api-management-advanced-policies.md
index ba55d5aaeaf39..63f4995efbf27 100644
--- a/articles/api-management/api-management-advanced-policies.md
+++ b/articles/api-management/api-management-advanced-policies.md
@@ -536,6 +536,29 @@ In the following example, request forwarding is retried up to ten times using an
```
+### Example
+
+In the following example, sending a request to a URL other than the defined backend is retried up to three times if the connection is dropped or times out, or if the request results in a server-side error. Because `first-fast-retry` is set to `true`, the first retry is executed immediately after the initial request failure. Note that `send-request` must set `ignore-error` to `true` so that the variable named by `response-variable-name` is set to null when an error occurs.
+
+```xml
+<retry
+    condition="@(context.Variables["response"] == null || ((IResponse)context.Variables["response"]).StatusCode >= 500)"
+    count="3"
+    interval="1"
+    first-fast-retry="true">
+        <send-request
+            mode="new"
+            response-variable-name="response"
+            timeout="3"
+            ignore-error="true">
+                <set-url>https://api.contoso.com/products/5</set-url>
+                <set-method>GET</set-method>
+        </send-request>
+</retry>
+```
+
### Elements
| Element | Description | Required |
diff --git a/articles/azure-government/documentation-government-developer-guide.md b/articles/azure-government/documentation-government-developer-guide.md
index 62f6e0b18d39d..f7a77f70a3b73 100644
--- a/articles/azure-government/documentation-government-developer-guide.md
+++ b/articles/azure-government/documentation-government-developer-guide.md
@@ -58,7 +58,7 @@ Navigate through the following links to get started using Azure Government:
- [Connect with CLI](./documentation-government-get-started-connect-with-cli.md)
- [Connect with Visual Studio](./documentation-government-connect-vs.md)
- [Connect to Azure Storage](./documentation-government-get-started-connect-to-storage.md)
-- [Connect with Azure SDK for Python](/azure/developer/python/azure-sdk-sovereign-domain)
+- [Connect with Azure SDK for Python](/azure/developer/python/sdk/azure-sdk-sovereign-domain)
### Azure Government Video Library
diff --git a/articles/container-apps/deploy-visual-studio.md b/articles/container-apps/deploy-visual-studio.md
index 20eb92751737b..6873befe7c7a0 100644
--- a/articles/container-apps/deploy-visual-studio.md
+++ b/articles/container-apps/deploy-visual-studio.md
@@ -99,19 +99,34 @@ The Visual Studio publish dialogs will help you choose existing Azure resources,
:::image type="content" source="media/visual-studio/container-apps-choose-registry.png" alt-text="A screenshot showing how select the created registry.":::
-### Publish the app
+### Publish the app using Visual Studio
While the resources and publishing profile are created, you still need to publish and deploy the app to Azure.
-Choose **Publish** in the upper right of the publishing profile screen to deploy to the container app you created in Azure. This process may take a moment, so wait for it to complete.
+Choose **Publish** in the upper right of the publishing profile screen to deploy to the container app you created in Azure. This process may take a moment, so wait for it to complete.
:::image type="content" source="media/visual-studio/container-apps-publish.png" alt-text="A screenshot showing how to publish the app.":::
When the app finishes deploying, Visual Studio opens a browser to the URL of your deployed site. This page may initially display an error if all of the proper resources have not finished provisioning. You can continue to refresh the browser periodically to check if the deployment has fully completed.
-
:::image type="content" source="media/visual-studio/container-apps-site.png" alt-text="A screenshot showing the published site.":::
+### Publish the app using GitHub Actions
+
+Container Apps can also be deployed using CI/CD through [GitHub Actions](https://docs.github.com/en/actions), a powerful tool for automating, customizing, and executing development workflows directly from your project's GitHub repository.
+
+If Visual Studio detects that the project you're publishing is hosted in GitHub, the publish flow presents an additional **Deployment type** step. This step lets you choose whether to publish directly through Visual Studio, using the steps shown earlier in this quickstart, or through a GitHub Actions workflow.
+
+:::image type="content" source="media/visual-studio/container-apps-deployment-type.png" alt-text="A screenshot showing the deployment type.":::
+
+If you select the GitHub Actions workflow, Visual Studio adds a *.github* folder to the root directory of the project, along with a generated YAML file inside it. The YAML file contains the GitHub Actions configuration that builds and deploys your app to Azure every time you push your code.
+
+After you make a change and push your code, you can see the progress of the build and deploy process in GitHub under the **Actions** tab. This page provides detailed logs and indicators regarding the progress and health of the workflow.
+
+:::image type="content" source="media/visual-studio/container-apps-github-actions.png" alt-text="A screenshot showing GitHub Actions.":::
+
+Once you see a green checkmark next to the build and deploy jobs, the workflow is complete. When you browse to your Container Apps site, you should see the latest changes applied. You can always find the URL for your container app in the Azure portal.
+
## Clean up resources
If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the resource group.
diff --git a/articles/container-apps/media/visual-studio/container-apps-deployment-type.png b/articles/container-apps/media/visual-studio/container-apps-deployment-type.png
new file mode 100644
index 0000000000000..1de992c8c34b7
Binary files /dev/null and b/articles/container-apps/media/visual-studio/container-apps-deployment-type.png differ
diff --git a/articles/container-apps/media/visual-studio/container-apps-github-actions.png b/articles/container-apps/media/visual-studio/container-apps-github-actions.png
new file mode 100644
index 0000000000000..68b1291ee1e78
Binary files /dev/null and b/articles/container-apps/media/visual-studio/container-apps-github-actions.png differ
diff --git a/articles/cost-management-billing/costs/aws-integration-set-up-configure.md b/articles/cost-management-billing/costs/aws-integration-set-up-configure.md
index 8ba0f7834b382..e1aa0bfa205c2 100644
--- a/articles/cost-management-billing/costs/aws-integration-set-up-configure.md
+++ b/articles/cost-management-billing/costs/aws-integration-set-up-configure.md
@@ -4,7 +4,7 @@ titleSuffix: Azure Cost Management + Billing
description: This article walks you through setting up and configuring AWS Cost and Usage report integration with Cost Management.
author: bandersmsft
ms.author: banders
-ms.date: 04/13/2022
+ms.date: 04/28/2022
ms.topic: how-to
ms.service: cost-management-billing
ms.subservice: cost-management
@@ -60,10 +60,10 @@ Use the Create a New Role wizard:
1. Sign in to your AWS console and select **Services**.
2. In the list of services, select **IAM**.
3. Select **Roles** and then select **Create Role**.
-4. On the next page, select **Another AWS account**.
-5. In **Account ID**, enter **432263259397**.
-6. In **Options**, select **Require external ID (Best practice when a third party will assume this role)**.
-7. In **External ID**, enter the external ID, which is a shared passcode between the AWS role and Cost Management. The same external ID is also used on the **New Connector** page in Cost Management. Microsoft recommends that you use a strong passcode policy when entering the external ID.
+4. On the **Select trusted entity** page, select **AWS account** and then under **An AWS account**, select **Another AWS account**.
+5. Under **Account ID**, enter **432263259397**.
+6. Under **Options**, select **Require external ID (Best practice when a third party will assume this role)**.
+7. Under **External ID**, enter the external ID, which is a shared passcode between the AWS role and Cost Management. The same external ID is also used on the **New Connector** page in Cost Management. Microsoft recommends that you use a strong passcode policy when entering the external ID.
> [!NOTE]
> Don't change the selection for **Require MFA**. It should remain cleared.
8. Select **Next: Permissions**.
diff --git a/articles/digital-twins/concepts-models.md b/articles/digital-twins/concepts-models.md
index 9349aa11700a0..1e9a22c69aee6 100644
--- a/articles/digital-twins/concepts-models.md
+++ b/articles/digital-twins/concepts-models.md
@@ -156,7 +156,7 @@ The following example shows a Sensor model with a semantic-type telemetry for Te
:::code language="json" source="~/digital-twins-docs-samples-getting-started/models/advanced-home-example/ISensor.json" highlight="7-18":::
> [!NOTE]
-> Currently, "Property" or "Telemetry" type must be the first element of the array, followed by the semantic type. Otherwise, the field may not be visible in the Azure Digital Twins Explorer.
+> *"Property"* or *"Telemetry"* must be the first element of the `@type` array, followed by the semantic type. Otherwise, the field may not be visible in [Azure Digital Twins Explorer](concepts-azure-digital-twins-explorer.md).
## Relationships
diff --git a/articles/expressroute/expressroute-howto-routing-portal-resource-manager.md b/articles/expressroute/expressroute-howto-routing-portal-resource-manager.md
index 14fc3ad605b34..de3a77bb6110d 100644
--- a/articles/expressroute/expressroute-howto-routing-portal-resource-manager.md
+++ b/articles/expressroute/expressroute-howto-routing-portal-resource-manager.md
@@ -90,20 +90,16 @@ This section helps you create, get, update, and delete the Microsoft peering con
:::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/configuration-m-validation-needed.png" alt-text="Configure Microsoft peering validation needed":::
-> [!IMPORTANT]
-> Microsoft verifies if the specified 'Advertised public prefixes' and 'Peer ASN' (or 'Customer ASN') are assigned to you in the Internet Routing Registry. If you are getting the public prefixes from another entity and if the assignment is not recorded with the routing registry, the automatic validation will not complete and will require manual validation. If the automatic validation fails, you will see the message 'Validation needed'.
->
-> If you see the message 'Validation needed', collect the document(s) that show the public prefixes are assigned to your organization by the entity that is listed as the owner of the prefixes in the routing registry and submit these documents for manual validation by opening a support ticket.
->
+ > [!IMPORTANT]
+ > Microsoft verifies if the specified 'Advertised public prefixes' and 'Peer ASN' (or 'Customer ASN') are assigned to you in the Internet Routing Registry. If you are getting the public prefixes from another entity and if the assignment is not recorded with the routing registry, the automatic validation will not complete and will require manual validation. If the automatic validation fails, you will see the message 'Validation needed'.
+ >
+ > If you see the message 'Validation needed', collect the document(s) that show the public prefixes are assigned to your organization by the entity that is listed as the owner of the prefixes in the routing registry and submit these documents for manual validation by opening a support ticket.
+ >
- If your circuit gets to a 'Validation needed' state, you must open a support ticket to show proof of ownership of the prefixes to our support team. You can open a support ticket directly from the portal, as shown in the following example:
+ If your circuit gets to a **Validation needed** state, you must open a support ticket to show proof of ownership of the prefixes to our support team. You can open a support ticket directly from the portal, as shown in the following example:
:::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/ticket-portal-m.png" alt-text="Validation Needed - support ticket":::
-5. After the configuration has been accepted successfully, you'll see something similar to the following image:
-
- :::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/microsoft-peering-validation-configured.png" alt-text="Peering status: Configured":::
-
### To view Microsoft peering details
You can view the properties of Microsoft peering by selecting the row for the peering.
@@ -128,11 +124,11 @@ This section helps you create, get, update, and delete the Azure private peering
**Circuit - Provider status: Not provisioned**
- :::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/not-provisioned-private.png" alt-text="Screenshot showing the Overview page for the ExpressRoute Demo Circuit with a red box highlighting the Provider status which is set to Not provisioned":::
+ :::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/not-provisioned.png" alt-text="Screenshot showing the Overview page for the ExpressRoute Demo Circuit with a red box highlighting the Provider status which is set to Not provisioned":::
**Circuit - Provider status: Provisioned**
- :::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/provisioned-private-peering.png" alt-text="Screenshot showing the Overview page for the ExpressRoute Demo Circuit with a red box highlighting the Provider status which is set to Provisioned":::
+ :::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/provisioned.png" alt-text="Screenshot showing the Overview page for the ExpressRoute Demo Circuit with a red box highlighting the Provider status which is set to Provisioned":::
2. Configure Azure private peering for the circuit. Make sure that you have the following items before you continue with the next steps:
@@ -152,15 +148,12 @@ This section helps you create, get, update, and delete the Azure private peering
:::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/private-peering-configuration.png" alt-text="Configure private peering":::
-5. After the configuration has been accepted successfully, you see something similar to the following example:
-
- :::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/save-private-peering.png" alt-text="Saved private peering":::
### To view Azure private peering details
You can view the properties of Azure private peering by selecting the peering.
-:::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/view-peering-m.png" alt-text="View private peering properties":::
+:::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/view-private-peering.png" alt-text="View private peering properties":::
### To update Azure private peering configuration
diff --git a/articles/expressroute/expressroute-howto-set-global-reach-portal.md b/articles/expressroute/expressroute-howto-set-global-reach-portal.md
index 9124241bcb95c..672d68bba6407 100644
--- a/articles/expressroute/expressroute-howto-set-global-reach-portal.md
+++ b/articles/expressroute/expressroute-howto-set-global-reach-portal.md
@@ -11,7 +11,7 @@ ms.author: duau
# Configure ExpressRoute Global Reach using the Azure portal
-This article helps you configure ExpressRoute Global Reach using PowerShell. For more information, see [ExpressRouteRoute Global Reach](expressroute-global-reach.md).
+This article helps you configure ExpressRoute Global Reach using the Azure portal. For more information, see [ExpressRoute Global Reach](expressroute-global-reach.md).
> [!NOTE]
> IPv6 support for ExpressRoute Global Reach is now in Public Preview.
@@ -53,6 +53,8 @@ Enable connectivity between your on-premises networks. There are separate sets o
1. Select **Save** to complete the Global Reach configuration. When the operation completes, you'll have connectivity between your two on-premises networks through both ExpressRoute circuits.
+ :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/save-configuration.png" alt-text="Screenshot of the save button for Global Reach configuration.":::
+
> [!NOTE]
> The Global Reach configuration is bidirectional. Once you create the connection from one circuit the other circuit will also have the configuration.
>
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/configuration-m-validation-needed.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/configuration-m-validation-needed.png
index a55ec5b619711..24ff8946aa7da 100644
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/configuration-m-validation-needed.png and b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/configuration-m-validation-needed.png differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/configuration-m.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/configuration-m.png
index 614d114719c2b..6afaa3674bf23 100644
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/configuration-m.png and b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/configuration-m.png differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/delete-microsoft-peering.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/delete-microsoft-peering.png
index 63051703cf570..795c43347f620 100644
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/delete-microsoft-peering.png and b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/delete-microsoft-peering.png differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/delete-p.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/delete-p.png
deleted file mode 100644
index 2689e3c40dea7..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/delete-p.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/delete-peering-m.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/delete-peering-m.png
deleted file mode 100644
index f9f3c8a6b7bde..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/delete-peering-m.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/delete-private-peering.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/delete-private-peering.png
index 2b94d0b9571a4..c064655af53d8 100644
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/delete-private-peering.png and b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/delete-private-peering.png differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/microsoft-peering-configuration.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/microsoft-peering-configuration.png
deleted file mode 100644
index 06ab5455f7ca5..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/microsoft-peering-configuration.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/microsoft-peering-validation-configured.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/microsoft-peering-validation-configured.png
deleted file mode 100644
index 3a3efa1ecbe5c..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/microsoft-peering-validation-configured.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/not-provisioned-m-lightbox.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/not-provisioned-m-lightbox.png
deleted file mode 100644
index df0cc2d573eeb..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/not-provisioned-m-lightbox.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/not-provisioned-m.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/not-provisioned-m.png
deleted file mode 100644
index 3753768054855..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/not-provisioned-m.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/not-provisioned-p-lightbox.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/not-provisioned-p-lightbox.png
deleted file mode 100644
index cca1f1d51d371..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/not-provisioned-p-lightbox.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/not-provisioned-p.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/not-provisioned-p.png
deleted file mode 100644
index b02382661d016..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/not-provisioned-p.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/not-provisioned.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/not-provisioned.png
index 874a332ba1578..1a86b56562f00 100644
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/not-provisioned.png and b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/not-provisioned.png differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/private-peering-configuration.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/private-peering-configuration.png
index 68a32181bab56..0cc04d585c352 100644
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/private-peering-configuration.png and b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/private-peering-configuration.png differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/provisioned-m-lightbox.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/provisioned-m-lightbox.png
deleted file mode 100644
index 0a75959be3da9..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/provisioned-m-lightbox.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/provisioned-m.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/provisioned-m.png
deleted file mode 100644
index 7e7248a23f21b..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/provisioned-m.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/provisioned-p-lightbox.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/provisioned-p-lightbox.png
deleted file mode 100644
index 97b9341affa5c..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/provisioned-p-lightbox.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/provisioned-p.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/provisioned-p.png
deleted file mode 100644
index 9057ec93a6e39..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/provisioned-p.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/provisioned-private-peering.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/provisioned-private-peering.png
deleted file mode 100644
index 5e124f4cf8f5c..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/provisioned-private-peering.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/provisioned.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/provisioned.png
index 5e124f4cf8f5c..c2b767d04da7b 100644
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/provisioned.png and b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/provisioned.png differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/save-p.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/save-p.png
deleted file mode 100644
index f1394ce87775f..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/save-p.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/save-private-peering.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/save-private-peering.png
deleted file mode 100644
index 7bd9aa51759d8..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/save-private-peering.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/select-microsoft-peering.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/select-microsoft-peering.png
index 3347304d4d209..abff80a43b0d3 100644
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/select-microsoft-peering.png and b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/select-microsoft-peering.png differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/select-peering-p-lightbox.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/select-peering-p-lightbox.png
deleted file mode 100644
index ea8ad69b0d2a1..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/select-peering-p-lightbox.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/select-private-peering.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/select-private-peering.png
index 69dc380d596d1..2dc29301b39fd 100644
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/select-private-peering.png and b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/select-private-peering.png differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/update-peering-m.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/update-peering-m.png
deleted file mode 100644
index 365f2a16ee868..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/update-peering-m.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/update-peering-p.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/update-peering-p.png
deleted file mode 100644
index 980faf53fbb4f..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/update-peering-p.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/update-private-peering.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/update-private-peering.png
index fd56ad8a50013..968b25066daa8 100644
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/update-private-peering.png and b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/update-private-peering.png differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/view-p-lightbox.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/view-p-lightbox.png
deleted file mode 100644
index 25fdf26defb7b..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/view-p-lightbox.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/view-p.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/view-p.png
deleted file mode 100644
index ee7f59147575a..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/view-p.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/view-peering-m-lightbox.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/view-peering-m-lightbox.png
deleted file mode 100644
index 6f597d11983c7..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/view-peering-m-lightbox.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/view-peering-m.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/view-peering-m.png
index 63249b39b5d1d..bdd47c453a15d 100644
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/view-peering-m.png and b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/view-peering-m.png differ
diff --git a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/view-private-peering.png b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/view-private-peering.png
index 0fe0320049f5f..1efc59b94d593 100644
Binary files a/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/view-private-peering.png and b/articles/expressroute/media/expressroute-howto-routing-portal-resource-manager/view-private-peering.png differ
diff --git a/articles/expressroute/media/expressroute-howto-set-global-reach-portal/add-global-reach-configuration-with-authorization.png b/articles/expressroute/media/expressroute-howto-set-global-reach-portal/add-global-reach-configuration-with-authorization.png
index 69cb46b25fc82..b00a270686222 100644
Binary files a/articles/expressroute/media/expressroute-howto-set-global-reach-portal/add-global-reach-configuration-with-authorization.png and b/articles/expressroute/media/expressroute-howto-set-global-reach-portal/add-global-reach-configuration-with-authorization.png differ
diff --git a/articles/expressroute/media/expressroute-howto-set-global-reach-portal/add-global-reach-configuration.png b/articles/expressroute/media/expressroute-howto-set-global-reach-portal/add-global-reach-configuration.png
index a07e342c3d6f4..a71587a7df35a 100644
Binary files a/articles/expressroute/media/expressroute-howto-set-global-reach-portal/add-global-reach-configuration.png and b/articles/expressroute/media/expressroute-howto-set-global-reach-portal/add-global-reach-configuration.png differ
diff --git a/articles/expressroute/media/expressroute-howto-set-global-reach-portal/disable-global-reach-configuration.png b/articles/expressroute/media/expressroute-howto-set-global-reach-portal/disable-global-reach-configuration.png
deleted file mode 100644
index 2f075a866ba3d..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-set-global-reach-portal/disable-global-reach-configuration.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-set-global-reach-portal/expressroute-circuit-global-reach-list.png b/articles/expressroute/media/expressroute-howto-set-global-reach-portal/expressroute-circuit-global-reach-list.png
deleted file mode 100644
index a03fe3581c4d9..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-set-global-reach-portal/expressroute-circuit-global-reach-list.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-set-global-reach-portal/expressroute-circuit-private-peering.png b/articles/expressroute/media/expressroute-howto-set-global-reach-portal/expressroute-circuit-private-peering.png
deleted file mode 100644
index 6152c7734f20e..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-set-global-reach-portal/expressroute-circuit-private-peering.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-set-global-reach-portal/overview.png b/articles/expressroute/media/expressroute-howto-set-global-reach-portal/overview.png
index e4986dc4be860..f575ec26bce51 100644
Binary files a/articles/expressroute/media/expressroute-howto-set-global-reach-portal/overview.png and b/articles/expressroute/media/expressroute-howto-set-global-reach-portal/overview.png differ
diff --git a/articles/expressroute/media/expressroute-howto-set-global-reach-portal/private-peering-enable-global-reach.png b/articles/expressroute/media/expressroute-howto-set-global-reach-portal/private-peering-enable-global-reach.png
deleted file mode 100644
index 8eb94993c7423..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-set-global-reach-portal/private-peering-enable-global-reach.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-set-global-reach-portal/save-configuration.png b/articles/expressroute/media/expressroute-howto-set-global-reach-portal/save-configuration.png
new file mode 100644
index 0000000000000..17f7625263ec5
Binary files /dev/null and b/articles/expressroute/media/expressroute-howto-set-global-reach-portal/save-configuration.png differ
diff --git a/articles/expressroute/media/expressroute-howto-set-global-reach-portal/save-private-peering-configuration.png b/articles/expressroute/media/expressroute-howto-set-global-reach-portal/save-private-peering-configuration.png
deleted file mode 100644
index bc3b3695f77be..0000000000000
Binary files a/articles/expressroute/media/expressroute-howto-set-global-reach-portal/save-private-peering-configuration.png and /dev/null differ
diff --git a/articles/expressroute/media/expressroute-howto-set-global-reach-portal/verify-global-reach-configuration.png b/articles/expressroute/media/expressroute-howto-set-global-reach-portal/verify-global-reach-configuration.png
index 455b3219bcd93..19f47351fea30 100644
Binary files a/articles/expressroute/media/expressroute-howto-set-global-reach-portal/verify-global-reach-configuration.png and b/articles/expressroute/media/expressroute-howto-set-global-reach-portal/verify-global-reach-configuration.png differ
diff --git a/articles/load-balancer/outbound-rules.md b/articles/load-balancer/outbound-rules.md
index c830a6af3c7a5..ca16776b64e00 100644
--- a/articles/load-balancer/outbound-rules.md
+++ b/articles/load-balancer/outbound-rules.md
@@ -31,7 +31,7 @@ With outbound rules, you can explicitly define outbound **SNAT** behavior.
Outbound rules allow you to control:
* **Which virtual machines are translated to which public IP addresses.**
- * Two rules were backend pool 1 uses the blue IP address 1 and 2, backend pool 2 uses the yellow IP prefix.
+ * Two rules where backend pool 1 uses both blue IP addresses, and backend pool 2 uses the yellow IP prefix.
* **How outbound SNAT ports are allocated.**
* If backend pool 2 is the only pool making outbound connections, give all SNAT ports to backend pool 2 and none to backend pool 1.
* **Which protocols to provide outbound translation for.**
diff --git a/articles/logic-apps/single-tenant-overview-compare.md b/articles/logic-apps/single-tenant-overview-compare.md
index 0e0c041068de4..8259821ad3fb3 100644
--- a/articles/logic-apps/single-tenant-overview-compare.md
+++ b/articles/logic-apps/single-tenant-overview-compare.md
@@ -5,7 +5,7 @@ services: logic-apps
ms.suite: integration
ms.reviewer: estfan, azla
ms.topic: conceptual
-ms.date: 04/26/2022
+ms.date: 04/28/2022
ms.custom: ignite-fall-2021
---
@@ -255,7 +255,7 @@ The single-tenant model and **Logic App (Standard)** resource type include many
For the **Logic App (Standard)** resource, these capabilities have changed, or they are currently limited, unavailable, or unsupported:
-* **Triggers and actions**: Built-in triggers and actions run natively in Azure Logic Apps, while managed connectors are hosted and run in Azure. Some built-in triggers and actions are unavailable, such as Sliding Window, Batch, Azure App Services, and Azure API Management. To start a stateful or stateless workflow, use the [built-in Recurrence, Request, HTTP, HTTP Webhook, Event Hubs, or Service Bus trigger](../connectors/apis-list.md). In the designer, built-in triggers and actions appear under the **Built-in** tab.
+* **Triggers and actions**: Built-in triggers and actions run natively in Azure Logic Apps, while managed connectors are hosted and run in Azure. Some built-in triggers and actions are unavailable, such as Sliding Window, Batch, Azure App Services, and Azure API Management. To start a stateful or stateless workflow, use a [built-in trigger such as Request, HTTP, HTTP Webhook, Event Hubs, or Service Bus](../connectors/built-in.md). The Recurrence trigger is available only for stateful workflows, not stateless workflows. In the designer, built-in triggers and actions appear under the **Built-in** tab.
For *stateful* workflows, [managed connector triggers and actions](../connectors/managed.md) appear under the **Azure** tab, except for the unavailable operations listed below. For *stateless* workflows, the **Azure** tab doesn't appear when you want to select a trigger. You can select only [managed connector *actions*, not triggers](../connectors/managed.md). Although you can enable Azure-hosted managed connectors for stateless workflows, the designer doesn't show any managed connector triggers for you to add.
diff --git a/articles/orbital/space-partner-program-overview.md b/articles/orbital/space-partner-program-overview.md
index c541d468c151c..ba29117f52ce3 100644
--- a/articles/orbital/space-partner-program-overview.md
+++ b/articles/orbital/space-partner-program-overview.md
@@ -61,7 +61,8 @@ To join the program, we ask partners to commit to:
## Next steps
-- [Sign up for MS Startups for access to credits and support](https://partner.microsoft.com/?msclkid=0ea9c859bb5611ec801255d300e7c499)
+- [Sign up for the Microsoft Partner Network](https://partner.microsoft.com/?msclkid=0ea9c859bb5611ec801255d300e7c499)
+- [Sign up for MS Startups for access to credits and support](https://startups.microsoft.com/)
- [Downlink data from satellites using Azure Orbital](overview.md)
- [Analyze space data on Azure](/azure/architecture/example-scenario/data/geospatial-data-processing-analytics-azure)
- [Drive insights with geospatial partners on Azure – ESRI and visualize with Power BI](https://azuremarketplace.microsoft.com/en/marketplace/apps/esri.arcgis-enterprise?tab=Overview)
diff --git a/articles/service-bus-messaging/includes/service-bus-ruby-setup.md b/articles/service-bus-messaging/includes/service-bus-ruby-setup.md
index 74c86acd1fc8d..7e9f6ae27ce71 100644
--- a/articles/service-bus-messaging/includes/service-bus-ruby-setup.md
+++ b/articles/service-bus-messaging/includes/service-bus-ruby-setup.md
@@ -1,6 +1,6 @@
---
author: spelluru
-ms.service: service-bus
+ms.service: service-bus-messaging
ms.topic: include
ms.date: 11/09/2018
ms.author: spelluru
diff --git a/articles/service-bus-messaging/includes/service-bus-selector-queues.md b/articles/service-bus-messaging/includes/service-bus-selector-queues.md
index d30fd2a2491e8..d4a81e7e16b31 100644
--- a/articles/service-bus-messaging/includes/service-bus-selector-queues.md
+++ b/articles/service-bus-messaging/includes/service-bus-selector-queues.md
@@ -1,6 +1,6 @@
---
author: spelluru
-ms.service: service-bus
+ms.service: service-bus-messaging
ms.topic: include
ms.date: 11/09/2018
ms.author: spelluru
diff --git a/articles/service-bus-messaging/includes/service-bus-selector-topics.md b/articles/service-bus-messaging/includes/service-bus-selector-topics.md
index cbd9110fb8193..ba62cbdb5a19b 100644
--- a/articles/service-bus-messaging/includes/service-bus-selector-topics.md
+++ b/articles/service-bus-messaging/includes/service-bus-selector-topics.md
@@ -1,6 +1,6 @@
---
author: spelluru
-ms.service: service-bus
+ms.service: service-bus-messaging
ms.topic: include
ms.date: 11/09/2018
ms.author: spelluru
diff --git a/articles/service-bus-messaging/transport-layer-security-audit-minimum-version.md b/articles/service-bus-messaging/transport-layer-security-audit-minimum-version.md
index 8eaaf3f4558c5..4afb862b14309 100644
--- a/articles/service-bus-messaging/transport-layer-security-audit-minimum-version.md
+++ b/articles/service-bus-messaging/transport-layer-security-audit-minimum-version.md
@@ -5,7 +5,7 @@ description: Configure Azure Policy to audit compliance of Azure Service Bus for
services: service-bus
author: EldertGrootenboer
-ms.service: service-bus
+ms.service: service-bus-messaging
ms.topic: article
ms.date: 04/22/2022
ms.author: egrootenboer
diff --git a/articles/service-bus-messaging/transport-layer-security-configure-client-version.md b/articles/service-bus-messaging/transport-layer-security-configure-client-version.md
index a3a2634506454..98ca416fab5c0 100644
--- a/articles/service-bus-messaging/transport-layer-security-configure-client-version.md
+++ b/articles/service-bus-messaging/transport-layer-security-configure-client-version.md
@@ -5,7 +5,7 @@ description: Configure a client application to communicate with Azure Service Bu
services: service-bus
author: EldertGrootenboer
-ms.service: service-bus
+ms.service: service-bus-messaging
ms.topic: article
ms.date: 04/22/2022
ms.author: egrootenboer
diff --git a/articles/service-bus-messaging/transport-layer-security-configure-minimum-version.md b/articles/service-bus-messaging/transport-layer-security-configure-minimum-version.md
index 9bc8000d7111a..da751e3c5d665 100644
--- a/articles/service-bus-messaging/transport-layer-security-configure-minimum-version.md
+++ b/articles/service-bus-messaging/transport-layer-security-configure-minimum-version.md
@@ -5,7 +5,7 @@ description: Configure an Azure Service Bus namespace to use a minimum version o
services: service-bus
author: EldertGrootenboer
-ms.service: service-bus
+ms.service: service-bus-messaging
ms.topic: article
ms.date: 04/22/2022
ms.author: egrootenboer
diff --git a/articles/service-bus-messaging/transport-layer-security-enforce-minimum-version.md b/articles/service-bus-messaging/transport-layer-security-enforce-minimum-version.md
index 370debb2137ef..f44ef1617e59a 100644
--- a/articles/service-bus-messaging/transport-layer-security-enforce-minimum-version.md
+++ b/articles/service-bus-messaging/transport-layer-security-enforce-minimum-version.md
@@ -5,7 +5,7 @@ description: Configure a service bus namespace to require a minimum version of T
services: service-bus
author: EldertGrootenboer
-ms.service: service-bus
+ms.service: service-bus-messaging
ms.topic: article
ms.date: 04/12/2022
ms.author: egrootenboer
diff --git a/articles/synapse-analytics/security/synapse-workspace-understand-what-role-you-need.md b/articles/synapse-analytics/security/synapse-workspace-understand-what-role-you-need.md
index f853b2ddab534..27bde3e3b7c17 100644
--- a/articles/synapse-analytics/security/synapse-workspace-understand-what-role-you-need.md
+++ b/articles/synapse-analytics/security/synapse-workspace-understand-what-role-you-need.md
@@ -73,63 +73,63 @@ All Synapse RBAC permissions/actions shown in the table are prefixed `Microsoft/
Task (I want to...) |Role (I need to be...)|Synapse RBAC permission/action
--|--|--
-|Open Synapse Studio on a workspace|Synapse User, or |read
-| |Azure Owner, Contributor, or Reader on the workspace|none
-|List SQL pools, Data Explorer pools, Apache Spark pools, Integration runtimes and access their configuration details|Synapse User, or|read|
-||Azure Owner, Contributor, or Reader on the workspace|none
-|List linked services, credentials, managed private endpoints|Synapse User|read
+|Open Synapse Studio on a workspace|Synapse User or |read
+| |Azure Owner or Contributor, or Reader on the workspace|none
+|List SQL pools or Data Explorer pools or Apache Spark pools, or Integration runtimes and access their configuration details|Synapse User or|read|
+||Azure Owner or Contributor, or Reader on the workspace|none
+|List linked services or credentials or managed private endpoints|Synapse User|read
SQL POOLS|
Create a dedicated SQL pool or a serverless SQL pool|Azure Owner or Contributor on the workspace|none
-Manage (pause, scale, or delete) a dedicated SQL pool|Azure Owner or Contributor on the SQL pool or workspace|none
-Create a SQL script|Synapse User, or Azure Owner or Contributor on the workspace, *Additional SQL permissions are required to run a SQL script, publish, or commit changes*.|
-List and open any published SQL script| Synapse Artifact User, Artifact Publisher, Synapse Contributor|artifacts/read
+Manage (pause or scale, or delete) a dedicated SQL pool|Azure Owner or Contributor on the SQL pool or workspace|none
+Create a SQL script|Synapse User or Azure Owner or Contributor on the workspace. *Additional SQL permissions are required to run a SQL script, publish, or commit changes*.|
+List and open any published SQL script| Synapse Artifact User or Artifact Publisher, or Synapse Contributor|artifacts/read
Run a SQL script on a serverless SQL pool|SQL permissions on the pool (granted automatically to a Synapse Administrator)|none
Run a SQL script on a dedicated SQL pool|SQL permissions on the pool (granted automatically to a Synapse Administrator)|none
-Publish a new, updated, or deleted SQL script|Synapse Artifact Publisher, Synapse Contributor|sqlScripts/write, delete
+Publish a new or updated, or deleted SQL script|Synapse Artifact Publisher or Synapse Contributor|sqlScripts/write, delete
Commit changes to a SQL script to the Git repo|Requires Git permissions on the repo|
Assign Active Directory Admin on the workspace (via workspace properties in the Azure Portal)|Azure Owner or Contributor on the workspace |
DATA EXPLORER POOLS|
Create a Data Explorer pool |Azure Owner or Contributor on the workspace|none
-Manage (pause, scale, or delete) a Data Explorer pool|Azure Owner or Contributor on the Data Explorer pool or workspace|none
-Create a KQL script|Synapse User, *Additional Data Explorer permissions are required to run a script, publish, or commit changes*.|
-List and open any published KQL script| Synapse Artifact User, Artifact Publisher, Synapse Contributor|artifacts/read
+Manage (pause or scale, or delete) a Data Explorer pool|Azure Owner or Contributor on the Data Explorer pool or workspace|none
+Create a KQL script|Synapse User. *Additional Data Explorer permissions are required to run a script, publish, or commit changes*.|
+List and open any published KQL script| Synapse Artifact User or Artifact Publisher, or Synapse Contributor|artifacts/read
Run a KQL script on a Data Explorer pool| Data Explorer permissions on the pool (granted automatically to a Synapse Administrator)|none
-Publish new, update, or delete KQL script|Synapse Artifact Publisher, Synapse Contributor|kqlScripts/write, delete
+Publish new, update, or delete KQL script|Synapse Artifact Publisher or Synapse Contributor|kqlScripts/write, delete
Commit changes to a KQL script to the Git repo|Requires Git permissions on the repo|
APACHE SPARK POOLS|
Create an Apache Spark pool|Azure Owner or Contributor on the workspace|
Monitor Apache Spark applications| Synapse User|read
View the logs for notebook and job execution |Synapse Monitoring Operator|
Cancel any notebook or Spark job running on an Apache Spark pool|Synapse Compute Operator on the Apache Spark pool.|bigDataPools/useCompute
-Create a notebook or job definition|Synapse User, or Azure Owner, Contributor, or Reader on the workspace *Additional permissions are required to run, publish, or commit changes*|read
-List and open a published notebook or job definition, including reviewing saved outputs|Synapse Artifact User, Synapse Monitoring Operator on the workspace|artifacts/read
-Run a notebook and review its output, or submit a Spark job|Synapse Apache Spark Administrator, Synapse Compute Operator on the selected Apache Spark pool|bigDataPools/useCompute
-Publish or delete a notebook or job definition (including output) to the service|Artifact Publisher on the workspace, Synapse Apache Spark Administrator|notebooks/write, delete
+Create a notebook or job definition|Synapse User or Azure Owner or Contributor, or Reader on the workspace. *Additional permissions are required to run, publish, or commit changes*|read
+List and open a published notebook or job definition, including reviewing saved outputs|Synapse Artifact User or Synapse Monitoring Operator on the workspace|artifacts/read
+Run a notebook and review its output, or submit a Spark job|Synapse Apache Spark Administrator or Synapse Compute Operator on the selected Apache Spark pool|bigDataPools/useCompute
+Publish or delete a notebook or job definition (including output) to the service|Artifact Publisher on the workspace or Synapse Apache Spark Administrator|notebooks/write, delete
Commit changes to a notebook or job definition to the Git repo|Git permissions|none
PIPELINES, INTEGRATION RUNTIMES, DATAFLOWS, DATASETS & TRIGGERS|
Create, update, or delete an Integration runtime|Azure Owner or Contributor on the workspace|
Monitor Integration runtime status|Synapse Monitoring Operator|read, integrationRuntimes/viewLogs
Review pipeline runs|Synapse Monitoring Operator|read, pipelines/viewOutputs
-Create a pipeline |Synapse User*Additional Synapse permissions are required to debug, add triggers, publish, or commit changes*|read
-Create a dataflow or dataset |Synapse User*Additional Synapse permissions are required to publish, or commit changes*|read
-List and open a published pipeline |Synapse Artifact User, Synapse Monitoring Operator | artifacts/read
-Preview dataset data|Synapse User + Synapse Credential User on the WorkspaceSystemIdentity|
-Debug a pipeline using the default Integration runtime|Synapse User + Synapse Credential User on the WorkspaceSystemIdentity credential|read, credentials/useSecret
-Create a trigger, including trigger now (requires permission to execute the pipeline)|Synapse User + Synapse Credential User on the WorkspaceSystemIdentity|read, credentials/useSecret/action
-Execute/run a pipeline|Synapse User + Synapse Credential User on the WorkspaceSystemIdentity|read, credentials/useSecret/action
-Copy data using the Copy Data tool|Synapse User + Synapse Credential User on the Workspace System Identity|read, credentials/useSecret/action
-Ingest data (using a schedule)|Synapse Author + Synapse Credential User on the Workspace System Identity|read, credentials/useSecret/action
+Create a pipeline |Synapse User. *Additional Synapse permissions are required to debug, add triggers, publish, or commit changes*|read
+Create a dataflow or dataset |Synapse User. *Additional Synapse permissions are required to publish or commit changes*|read
+List and open a published pipeline |Synapse Artifact User or Synapse Monitoring Operator | artifacts/read
+Preview dataset data|Synapse User and Synapse Credential User on the WorkspaceSystemIdentity|
+Debug a pipeline using the default Integration runtime|Synapse User and Synapse Credential User on the WorkspaceSystemIdentity credential|read, credentials/useSecret
+Create a trigger, including trigger now (requires permission to execute the pipeline)|Synapse User and Synapse Credential User on the WorkspaceSystemIdentity|read, credentials/useSecret/action
+Execute/run a pipeline|Synapse User and Synapse Credential User on the WorkspaceSystemIdentity|read, credentials/useSecret/action
+Copy data using the Copy Data tool|Synapse User and Synapse Credential User on the Workspace System Identity|read, credentials/useSecret/action
+Ingest data (using a schedule)|Synapse Author and Synapse Credential User on the Workspace System Identity|read, credentials/useSecret/action
Publish a new, updated, or deleted pipeline, dataflow, or trigger to the service|Synapse Artifact Publisher on the workspace|pipelines/write, deletedataflows/write, deletetriggers/write, delete
Commit changes to pipelines, dataflows, datasets, or triggers to the Git repo |Git permissions|none
LINKED SERVICES|
-Create a linked service (includes assigning a credential)|Synapse User*Additional permissions are required to use a linked service with credentials, or to publish, or commit changes*|read
+Create a linked service (includes assigning a credential)|Synapse User. *Additional permissions are required to use a linked service with credentials, or to publish or commit changes*|read
List and open a published linked service|Synapse Artifact User|linkedServices/write, delete
-Test connection on a linked service secured by a credential|Synapse User + Synapse Credential User|credentials/useSecret/action|
-Publish a linked service|Synapse Artifact Publisher, Synapse Linked Data Manager|linkedServices/write, delete
+Test connection on a linked service secured by a credential|Synapse User and Synapse Credential User|credentials/useSecret/action|
+Publish a linked service|Synapse Artifact Publisher or Synapse Linked Data Manager|linkedServices/write, delete
Commit linked service definitions to the Git repo|Git permissions|none
ACCESS MANAGEMENT|
Review Synapse RBAC role assignments at any scope|Synapse User|read
-Assign and remove Synapse RBAC role assignments for users, groups, and service principals| Synapse Administrator at the workspace or at a specific workspace item scope|roleAssignments/write, delete
+Assign and remove Synapse RBAC role assignments for users, groups, and service principals| Synapse Administrator at the workspace or at a specific workspace item scope|roleAssignments/write, delete
## Next steps
diff --git a/articles/synapse-analytics/spark/synapse-spark-sql-pool-import-export.md b/articles/synapse-analytics/spark/synapse-spark-sql-pool-import-export.md
index 041836577d0cd..c6b84f5aa37fe 100644
--- a/articles/synapse-analytics/spark/synapse-spark-sql-pool-import-export.md
+++ b/articles/synapse-analytics/spark/synapse-spark-sql-pool-import-export.md
@@ -1,6 +1,6 @@
---
title: Azure Synapse Dedicated SQL Pool Connector for Apache Spark
-description: This article discusses the Azure Synapse Dedicated SQL Pool Connector for Apache Spark. The connector is used to move data between a serverless Spark pool and Azure Synapse Dedicated SQL Pool.
+description: Learn how to use the Azure Synapse Dedicated SQL Pool Connector for Apache Spark to move data between a Synapse serverless Spark pool and a Synapse dedicated SQL pool.
author: kalyankadiyala-Microsoft
ms.service: synapse-analytics
ms.topic: overview
@@ -11,69 +11,63 @@ ms.reviewer: ktuckerdavis, aniket.adnaik
---
# Azure Synapse Dedicated SQL Pool Connector for Apache Spark
-This article discusses the Azure Synapse Dedicated SQL Pool Connector for Apache Spark in Azure Synapse Analytics. The connector is used to move data between the Apache Spark runtime (serverless Spark pool) and Azure Synapse Dedicated SQL Pool.
-
## Introduction
-The Azure Synapse Dedicated SQL Pool Connector for Apache Spark in Azure Synapse Analytics enables efficient transfer of large datasets between the [Apache Spark runtime](../../synapse-analytics/spark/apache-spark-overview.md) and the [dedicated SQL pool](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md).
-
-The connector is implemented by using the `Scala` language. The connector is shipped as a default library within the Azure Synapse environment that consists of a workspace notebook and the serverless Spark pool runtime. To use the connector with other notebook language choices, use the Spark magic command `%%spark`.
+The Azure Synapse Dedicated SQL Pool Connector for Apache Spark in Azure Synapse Analytics enables efficient transfer of large data sets between the [Apache Spark runtime](../../synapse-analytics/spark/apache-spark-overview.md) and the [Dedicated SQL pool](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md). The connector is implemented in the `Scala` language and is shipped as a default library with the Azure Synapse workspace. To use the connector with other notebook language choices, use the Spark magic command `%%spark`.
-At a high level, the connector provides the following capabilities:
+At a high level, the connector provides the following capabilities (a minimal end-to-end sketch follows this list):
-* Writes to Azure Synapse Dedicated SQL Pool:
- * Ingests a large volume of data to internal and external table types.
- * Supports the following DataFrame save mode preferences:
+* Read from Azure Synapse Dedicated SQL Pool:
+ * Read large data sets from Synapse Dedicated SQL Pool tables (internal and external) and views.
+ * Comprehensive predicate push-down support, where filters on the DataFrame are mapped to corresponding SQL predicate push-down.
+ * Support for column pruning.
+* Write to Azure Synapse Dedicated SQL Pool:
+ * Ingest large volumes of data to internal and external table types.
+ * Supports the following DataFrame save mode preferences:
* `Append`
* `ErrorIfExists`
* `Ignore`
* `Overwrite`
- * Writes to an external table type that supports Parquet and the delimited text file format, for example, CSV.
- * To write data to internal tables, the connector now uses a [COPY statement](../../synapse-analytics/sql-data-warehouse/quickstart-bulk-load-copy-tsql.md) instead of the CETAS/CTAS approach.
- * Enhancements optimize end-to-end write throughput performance.
- * Introduces an optional call-back handle (a Scala function argument) that clients can use to receive post-write metrics:
- * A few examples include the number of records, the duration to complete a certain action, and failure reason.
-* Reads from Azure Synapse Dedicated SQL Pool:
- * Reads large datasets from Azure Synapse Dedicated SQL Pool tables, which are internal and external, and views.
- * Comprehensive predicate push-down support, where filters on DataFrame get mapped to corresponding SQL predicate push down.
- * Support for column pruning.
+ * Writes to the external table type support the Parquet and delimited text file formats (for example, CSV).
+ * To write data to internal tables, the connector now uses a [COPY statement](../../synapse-analytics/sql-data-warehouse/quickstart-bulk-load-copy-tsql.md) instead of the CETAS/CTAS approach.
+ * Enhancements optimize end-to-end write throughput performance.
+ * Introduces an optional callback handle (a Scala function argument) that clients can use to receive post-write metrics.
+ * Examples include the number of records, the duration to complete a certain action, and the failure reason.
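+
+The code templates later in this article walk through each scenario in detail. As a quick orientation, the following minimal sketch (a sketch only - the server, database, schema, and table names are placeholders, and Azure AD based authentication is assumed) chains a read and an append-mode write:
+
+```Scala
+//A minimal end-to-end sketch - placeholder names, Azure AD based authentication assumed.
+import org.apache.spark.sql.DataFrame
+import org.apache.spark.sql.SaveMode
+import com.microsoft.spark.sqlanalytics.utils.Constants
+import org.apache.spark.sql.SqlAnalyticsConnector._
+
+//Read from an existing Dedicated SQL Pool table (three-part table name is a placeholder).
+val sourceDF: DataFrame = spark.read.
+  option(Constants.SERVER, "<server-name>.sql.azuresynapse.net").
+  option(Constants.TEMP_FOLDER, "abfss://<container>@<storage-account>.dfs.core.windows.net/<staging-folder>/").
+  synapsesql("<database_name>.<schema_name>.<source_table>")
+
+//Write the same data to another internal table, appending if the table already exists.
+sourceDF.write.
+  option(Constants.SERVER, "<server-name>.sql.azuresynapse.net").
+  option(Constants.TEMP_FOLDER, "abfss://<container>@<storage-account>.dfs.core.windows.net/<staging-folder>/").
+  mode(SaveMode.Append).
+  synapsesql("<database_name>.<schema_name>.<destination_table>")
+```
+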
## Orchestration approach
-The following two diagrams illustrate write and read orchestrations.
-
-### Write
+### Read
-![Diagram that shows write orchestration.](./media/synapse-spark-sql-pool-import-export/synapse-dedicated-sql-pool-spark-connector-write-orchestration.png)
+![A high-level data flow diagram to describe the connector's orchestration of a read request.](./media/synapse-spark-sql-pool-import-export/synapse-dedicated-sql-pool-spark-connector-read-orchestration.png)
-### Read
+### Write
-![Diagram that shows read orchestration.](./media/synapse-spark-sql-pool-import-export/synapse-dedicated-sql-pool-spark-connector-read-orchestration.png)
+![A high-level data flow diagram to describe the connector's orchestration of a write request.](./media/synapse-spark-sql-pool-import-export/synapse-dedicated-sql-pool-spark-connector-write-orchestration.png)
-## Prerequisites
+## Prerequisites
-This section discusses the prerequisite steps for Azure resource setup and configuration. It includes authentication and authorization requirements for using the Azure Synapse Dedicated SQL Pool Connector for Apache Spark.
+This section discusses prerequisites such as setting up the required Azure resources and the steps to configure them.
### Azure resources
-Review and set up the following dependent Azure resources:
+Review and set up the following dependent Azure resources:
-* [Azure Data Lake Storage](../../storage/blobs/data-lake-storage-introduction.md): Used as the primary storage account for the Azure Synapse workspace.
-* [Azure Synapse workspace](../../synapse-analytics/get-started-create-workspace.md): Used to create notebooks and build and deploy DataFrame-based ingress-egress workflows.
-* [Dedicated SQL pool (formerly Azure SQL Data Warehouse)](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md): Provides enterprise data warehousing features.
-* [Azure Synapse serverless Spark pool](../../synapse-analytics/get-started-analyze-spark.md): Provides the Spark runtime where the jobs are executed as Spark applications.
+* [Azure Data Lake Storage](../../storage/blobs/data-lake-storage-introduction.md) - used as the primary storage account for the Azure Synapse workspace.
+* [Azure Synapse workspace](../../synapse-analytics/get-started-create-workspace.md) - used to create notebooks and to build and deploy DataFrame-based ingress-egress workflows.
+* [Dedicated SQL pool (formerly SQL DW)](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md) - provides enterprise data warehousing features.
+* [Azure Synapse serverless Spark pool](../../synapse-analytics/get-started-analyze-spark.md) - the Spark runtime where jobs are executed as Spark applications.
#### Prepare the database
-Connect to the Azure Synapse Dedicated SQL Pool database and run the following setup statements:
+Connect to the Synapse Dedicated SQL Pool database and run the following setup statements:
-* Create a database user that's mapped to the Azure Active Directory (Azure AD) user identity that's used to sign in to the Azure Synapse workspace:
+* Create a database user that is mapped to the Azure Active Directory user identity used to sign in to the Azure Synapse workspace.
```sql
CREATE USER [username@domain.com] FROM EXTERNAL PROVIDER;
```
-* Create a schema in which tables are defined so that the connector can successfully write to and read from respective tables:
+* Create a schema in which tables will be defined, so that the connector can successfully write to and read from the respective tables.
```sql
CREATE SCHEMA [];
@@ -81,48 +75,54 @@ Connect to the Azure Synapse Dedicated SQL Pool database and run the following s
### Authentication
-This section discusses two approaches for authentication.
-
-#### Azure AD-based authentication
+#### Azure Active Directory-based authentication
-Azure AD-based authentication is an integrated authentication approach. The user is required to successfully sign in to the Azure Synapse workspace. When users interact with respective resources, such as storage and Azure Synapse Dedicated SQL Pool, the user tokens are used from the runtime.
+Azure Active Directory-based authentication is an integrated authentication approach. The user is required to successfully sign in to the Azure Synapse Analytics workspace.
-Azure AD-based authentication is an integrated authentication approach. The user is required to successfully log in to the Azure Synapse workspace.
+#### Basic authentication
-#### Basic Authentication
-
-The Basic Authentication approach requires the user to configure `username` and `password` options. See the section [Configuration options](#configuration-options) to learn about relevant configuration parameters for reading from and writing to tables in Azure Synapse Dedicated SQL Pool.
+The basic authentication approach requires the user to configure the `username` and `password` options. Refer to the [Configuration options](#configuration-options) section to learn about the relevant configuration parameters for reading from and writing to tables in Azure Synapse Dedicated SQL Pool.
### Authorization
-This section discusses authorization.
-
#### [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md)
-There are two ways to grant access permissions to an Azure Data Lake Storage Gen2 storage account:
+There are two ways to grant access permissions to the Azure Data Lake Storage Gen2 storage account:
-* Role-based access control (RBAC) role: [Storage Blob Data Contributor role](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor)
- * Assigning the `Storage Blob Data Contributor Role` grants the user permission to read, write, and delete from the Azure Storage Blob containers.
+* Role-based access control (RBAC) role - [Storage Blob Data Contributor role](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor)
+ * Assigning the `Storage Blob Data Contributor Role` grants the user permissions to read, write, and delete from the Azure Storage blob containers.
* RBAC offers a coarse control approach at the container level.
-* [Access control lists (ACLs)](../../storage/blobs/data-lake-storage-access-control.md)
- * The ACL approach allows for fine-grained controls over specific paths or files under a given folder.
- * ACL checks aren't enforced if the user is already granted permission by using an RBAC approach.
+* [Access control lists (ACL)](../../storage/blobs/data-lake-storage-access-control.md)
+ * The ACL approach allows for fine-grained controls over specific paths or files under a given folder.
+ * ACL checks aren't enforced if the user is already granted permissions by using the RBAC approach.
* There are two broad types of ACL permissions:
- * Access permissions are applied at a specific level or object.
- * Default permissions are automatically applied for all child objects at the time of their creation.
- * Types of permission include:
- * `Execute` enables the ability to traverse or navigate the folder hierarchies.
- * `Read` enables the ability to read.
- * `Write` enables the ability to write.
- * Configure ACLs so that the connector can successfully write and read from the storage locations.
+ * Access permissions (applied at a specific level or object).
+ * Default permissions (automatically applied for all child objects at the time of their creation).
+ * Types of permissions include:
+ * `Execute` enables the ability to traverse or navigate the folder hierarchies.
+ * `Read` enables the ability to read.
+ * `Write` enables the ability to write.
+ * It's important to configure ACLs so that the connector can successfully write to and read from the storage locations.
+
+> [!Note]
+> * If you'd like to run notebooks by using Synapse workspace pipelines, you must also grant the access permissions listed above to the Synapse workspace default managed identity. The workspace's default managed identity name is the same as the name of the workspace.
+>
+> * To use the Synapse workspace with secured storage accounts, a managed private endpoint must be [configured](../../storage/common/storage-network-security.md?tabs=azure-portal) from the notebook. The managed private endpoint must be approved from the ADLS Gen2 storage account's `Private endpoint connections` section in the `Networking` pane.
#### [Azure Synapse Dedicated SQL Pool](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md)
-To enable successful interaction with Azure Synapse Dedicated SQL Pool, the following authorization is necessary unless you're a user also configured as an `Active Directory Admin` on the dedicated SQL endpoint:
+To enable successful interaction with Azure Synapse Dedicated SQL Pool, the following authorization is necessary unless you're a user who is also configured as an `Active Directory Admin` on the dedicated SQL endpoint:
+
+* Read scenario
+ * Grant the user the `db_exporter` role by using the system stored procedure `sp_addrolemember`.
+
+ ```sql
+ EXEC sp_addrolemember 'db_exporter', [@.com];
+ ```
* Write scenario
* Connector uses the COPY command to write data from staging to the internal table's managed location.
- * Configure the required permissions described in [this quickstart](../../synapse-analytics/sql-data-warehouse/quickstart-bulk-load-copy-tsql.md#set-up-the-required-permissions).
+ * Configure the required permissions as described in [Set up the required permissions](../../synapse-analytics/sql-data-warehouse/quickstart-bulk-load-copy-tsql.md#set-up-the-required-permissions).
* Following is a quick access snippet of the same:
```sql
@@ -137,62 +137,151 @@ To enable successful interaction with Azure Synapse Dedicated SQL Pool, the foll
GRANT INSERT ON TO [@.com]
```
-* Read scenario
- * Grant the user `db_exporter` by using the system stored procedure `sp_addrolemember`.
+## API documentation
- ```sql
- EXEC sp_addrolemember 'db_exporter', [@.com];
- ```
-
-## Connector API documentation
-
-Azure Synapse Dedicated SQL Pool Connector for Apache Spark: [API documentation](https://synapsesql.blob.core.windows.net/docs/latest/scala/index.html)
+Azure Synapse Dedicated SQL Pool Connector for Apache Spark - [API Documentation](https://synapsesql.blob.core.windows.net/docs/latest/scala/index.html).
### Configuration options
-To successfully bootstrap and orchestrate the read or write operation, the connector expects certain configuration parameters. The object definition `com.microsoft.spark.sqlanalytics.utils.Constants` provides a list of standardized constants for each parameter key.
-
-The following table describes the essential configuration options that must be set for each usage scenario:
-
-|Usage scenario| Options to configure |
-|--------------|----------------------------------|
-| Write using Azure AD-based authentication | - Azure Synapse Dedicated SQL endpoint
- `Constants.SERVER`
- By default, the connector infers the Azure Synapse Dedicated SQL endpoint associated with the database name (from the three-part table name argument to `synapsesql` method).
- Alternatively, users can provide the `Constants.SERVER` option.
- Azure Data Lake Storage Gen2 endpoint: Staging folders
- For internal table type:
- Configure either `Constants.TEMP_FOLDER` or the `Constants.DATASOURCE` option.
- If the user chose to provide the `Constants.DATASOURCE` option, the staging folder is derived by using the `location` value on the data source.
- If both are provided, the `Constants.TEMP_FOLDER` option value is used.
- In the absence of a staging folder option, the connector derives one based on the runtime configuration `spark.sqlanalyticsconnector.stagingdir.prefix`.
- For external table type:
- `Constants.DATASOURCE` is a required configuration option.
- The storage path defined on the data source's `location` parameter is used as the base path to establish the final absolute path.
- The base path is then appended with the value set on the `synapsesql` method's `location` argument, for example, `/`.
- If the `location` argument to `synapsesql` method isn't provided, the connector derives the location value as `/dbName/schemaName/tableName`.
|
-| Write using Basic Authentication | - Azure Synapse Dedicated SQL endpoint
- `Constants.SERVER`: Azure Synapse Dedicated SQL Pool endpoint (server FQDN)
- `Constants.USER`: SQL user name
- `Constants.PASSWORD`: SQL user password
- `Constants.STAGING_STORAGE_ACCOUNT_KEY` associated with the storage account that hosts `Constants.TEMP_FOLDERS` (internal table types only) or `Constants.DATASOURCE`
- Azure Data Lake Storage Gen2 endpoint: Staging folders
- SQL Basic Authentication credentials don't apply to access storage endpoints. It's required that the workspace user identity is given relevant access permissions. (See the section [Azure Data Lake Storage Gen2](#azure-data-lake-storage-gen2).)
|
-|Read using Azure AD-based authentication|- Credentials are automapped and the user isn't required to provide specific configuration options.
- Three-part table name argument on `synapsesql` method is required to read from the respective table in Azure Synapse Dedicated SQL Pool.
|
-|Read using Basic Authentication|- Azure Synapse Dedicated SQL endpoint
- `Constants.SERVER`: Azure Synapse Dedicated SQL Pool endpoint (server FQDN)
- `Constants.USER`: SQL user name
- `Constants.PASSWORD`: SQL user password
- Azure Data Lake Storage Gen2 endpoint: Staging folders
- `Constants.DATA_SOURCE`: Location setting from data source is used to stage extracted data from the Azure Synapse Dedicated SQL endpoint.
|
+To successfully bootstrap and orchestrate the read or write operation, the connector expects certain configuration parameters. The object definition `com.microsoft.spark.sqlanalytics.utils.Constants` provides a list of standardized constants for each parameter key.
+
+The following is the list of configuration options based on the usage scenario:
+
+* **Read using Azure AD-based authentication**
+ * Credentials are auto-mapped, and the user isn't required to provide specific configuration options.
+ * The three-part table name argument on the `synapsesql` method is required to read from the respective table in Azure Synapse Dedicated SQL Pool.
+* **Read using basic authentication**
+ * Azure Synapse Dedicated SQL endpoint
+ * `Constants.SERVER` - Synapse Dedicated SQL Pool endpoint (server FQDN).
+ * `Constants.USER` - SQL user name.
+ * `Constants.PASSWORD` - SQL user password.
+ * Azure Data Lake Storage (Gen2) endpoint - staging folders
+ * `Constants.DATA_SOURCE` - The storage path set on the data source's location parameter is used for data staging.
+* **Write using Azure AD-based authentication**
+ * Azure Synapse Dedicated SQL endpoint
+ * By default, the connector infers the Synapse Dedicated SQL endpoint by using the database name set on the `synapsesql` method's three-part table name parameter.
+ * Alternatively, users can use the `Constants.SERVER` option to specify the SQL endpoint. Ensure that the endpoint hosts the corresponding database with the respective schema.
+ * Azure Data Lake Storage (Gen2) endpoint - staging folders
+ * For the internal table type:
+ * Configure either the `Constants.TEMP_FOLDER` or the `Constants.DATA_SOURCE` option.
+ * If the user chose to provide the `Constants.DATA_SOURCE` option, the staging folder is derived by using the `location` value from the data source.
+ * If both are provided, the `Constants.TEMP_FOLDER` option value is used.
+ * In the absence of a staging folder option, the connector derives one based on the runtime configuration `spark.sqlanalyticsconnector.stagingdir.prefix`.
+ * For the external table type:
+ * `Constants.DATA_SOURCE` is a required configuration option.
+ * The connector uses the storage path set on the data source's location parameter, in combination with the `location` argument to the `synapsesql` method, to derive the absolute path where external table data is persisted.
+ * If the `location` argument to the `synapsesql` method isn't specified, the connector derives the location value as `/dbName/schemaName/tableName`.
+* **Write using basic authentication**
+ * Azure Synapse Dedicated SQL endpoint
+ * `Constants.SERVER` - Synapse Dedicated SQL Pool endpoint (server FQDN).
+ * `Constants.USER` - SQL user name.
+ * `Constants.PASSWORD` - SQL user password.
+ * `Constants.STAGING_STORAGE_ACCOUNT_KEY` - associated with the storage account that hosts `Constants.TEMP_FOLDERS` (internal table types only) or `Constants.DATA_SOURCE`.
+ * Azure Data Lake Storage (Gen2) endpoint - staging folders
+ * SQL basic authentication credentials don't apply to accessing storage endpoints.
+ * Hence, ensure that you assign the relevant storage access permissions as described in the [Azure Data Lake Storage Gen2](#azure-data-lake-storage-gen2) section.
## Code templates
This section presents reference code templates to describe how to use and invoke the Azure Synapse Dedicated SQL Pool Connector for Apache Spark.
+### Read from Azure Synapse Dedicated SQL Pool
+
+#### Read request - `synapsesql` method signature
+
+```Scala
+synapsesql(tableName:String) => org.apache.spark.sql.DataFrame
+```
+
+#### Read using Azure AD-based authentication
+
+```Scala
+//Use case is to read data from an internal table in Synapse Dedicated SQL Pool DB
+//Azure Active Directory based authentication approach is preferred here.
+import org.apache.spark.sql.DataFrame
+import com.microsoft.spark.sqlanalytics.utils.Constants
+import org.apache.spark.sql.SqlAnalyticsConnector._
+
+//Read from existing internal table
+val dfToReadFromTable:DataFrame = spark.read.
+ //If `Constants.SERVER` is not provided, the `` from the three-part table name argument
+ //to `synapsesql` method is used to infer the Synapse Dedicated SQL End Point.
+ option(Constants.SERVER, ".sql.azuresynapse.net").
+ //Defaults to storage path defined in the runtime configurations (See section on Configuration Options above).
+ option(Constants.TEMP_FOLDER, "abfss://@.dfs.core.windows.net/").
+ //Three-part table name from where data will be read.
+ synapsesql("..").
+ //Column-pruning i.e., query select column values.
+ select("", "", "").
+ //Push-down filter criteria that gets translated to SQL Push-down Predicates.
+ filter(col("Title").startsWith("E")).
+ //Fetch a sample of 10 records
+ limit(10)
+
+//Show contents of the dataframe
+dfToReadFromTable.show()
+```
+
+#### Read using basic authentication
+
+```Scala
+//Use case is to read data from an internal table in Synapse Dedicated SQL Pool DB
+//This template uses the SQL basic authentication approach; Azure Active Directory based authentication is generally preferred.
+import org.apache.spark.sql.DataFrame
+import com.microsoft.spark.sqlanalytics.utils.Constants
+import org.apache.spark.sql.SqlAnalyticsConnector._
+
+//Read from existing internal table
+val dfToReadFromTable:DataFrame = spark.read.
+ //If `Constants.SERVER` is not provided, the `` from the three-part table name argument
+ //to `synapsesql` method is used to infer the Synapse Dedicated SQL End Point.
+ option(Constants.SERVER, ".sql.azuresynapse.net").
+ //Set database user name
+ option(Constants.USER, "").
+ //Set user's password to the database
+ option(Constants.PASSWORD, "").
+ //Set name of the data source definition that is defined with database scoped credentials.
+ //Data extracted from the SQL query will be staged to the storage path defined on the data source's location setting.
+ option(Constants.DATA_SOURCE, "").
+ //Three-part table name from where data will be read.
+ synapsesql("..").
+ //Column-pruning i.e., query select column values.
+ select("", "", "").
+ //Push-down filter criteria that gets translated to SQL Push-down Predicates.
+ filter(col("Title").startsWith("E")).
+ //Fetch a sample of 10 records
+ limit(10)
+
+//Show contents of the dataframe
+dfToReadFromTable.show()
+```
+
### Write to Azure Synapse Dedicated SQL Pool
-The following sections relate to a write scenario.
+#### Write request - `synapsesql` method signature
-#### Write request: synapsesql method signature
+The method signature for the connector version built for Spark 2.4.8 has one less argument than the one applied to the Spark 3.1.2 version. The following are the two method signatures:
-The method signature for the connector version built for Spark 2.4.8 has one less argument than that applied to the Spark 3.1.2 version. The following snippets are the two method signatures:
+* Spark pool version 2.4.8
-* Spark pool version 2.4.8
-
- ```Scala
- synapsesql(tableName:String,
- tableType:String = Constants.INTERNAL,
- location:Option[String] = None):Unit
- ```
+```Scala
+synapsesql(tableName:String,
+ tableType:String = Constants.INTERNAL,
+ location:Option[String] = None):Unit
+```
-* Spark pool version 3.1.2
+* Spark pool version 3.1.2
- ```Scala
- synapsesql(tableName:String,
- tableType:String = Constants.INTERNAL,
- location:Option[String] = None,
- callBackHandle=Option[(Map[String, Any], Option[Throwable])=>Unit]):Unit
- ```
+```Scala
+synapsesql(tableName:String,
+ tableType:String = Constants.INTERNAL,
+ location:Option[String] = None,
+ callBackHandle=Option[(Map[String, Any], Option[Throwable])=>Unit]):Unit
+```
-### Write using Azure AD-based authentication
+#### Write using Azure AD-based authentication
-The following comprehensive code template describes how to use the connector for write scenarios:
+The following is a comprehensive code template that describes how to use the connector for write scenarios:
```Scala
//Add required imports
@@ -201,29 +290,29 @@ import org.apache.spark.sql.SaveMode
import com.microsoft.spark.sqlanalytics.utils.Constants
import org.apache.spark.sql.SqlAnalyticsConnector._
-//Define read options, for example, if reading from a CSV source, configure header and delimiter options.
+//Define read options. For example, if reading from a CSV source, configure the header and delimiter options.
val pathToInputSource="abfss://@.dfs.core.windows.net//.csv"
-//Define read configuration for the input CSV.
+//Define read configuration for the input CSV
val dfReadOptions:Map[String, String] = Map("header" -> "true", "delimiter" -> ",")
-//Initialize the DataFrame that reads CSV data from a given source.
+//Initialize DataFrame that reads CSV data from a given source
val readDF:DataFrame=spark.
read.
options(dfReadOptions).
csv(pathToInputSource).
limit(1000) //Reads first 1000 rows from the source CSV input.
-//Set up and trigger the read DataFrame for write to Azure Synapse Dedicated SQL Pool.
-//Fully qualified SQL Server DNS name can be obtained by using one of the following methods:
+//Set up and trigger the read DataFrame for write to Synapse Dedicated SQL Pool.
+//The fully qualified SQL Server DNS name can be obtained by using one of the following methods:
// 1. Synapse Workspace - Manage Pane - SQL Pools -
-// 2. From the Azure portal, follow the breadcrumbs for -> -> and then go to the Connection Strings/JDBC tab.
+// 2. From the Azure portal, follow the breadcrumbs for -> -> and then go to the Connection Strings/JDBC tab.
//If `Constants.SERVER` is not provided, the value will be inferred by using the `database_name` in the three-part table name argument to the `synapsesql` method.
-//Likewise, if `Constants.TEMP_FOLDER` is not provided, the connector will use the runtime staging directory config (see the section on Configuration options for details).
+//Likewise, if `Constants.TEMP_FOLDER` is not provided, the connector will use the runtime staging directory config (see the section on configuration options for details).
val writeOptionsWithAADAuth:Map[String, String] = Map(Constants.SERVER -> ".sql.azuresynapse.net",
Constants.TEMP_FOLDER -> "abfss://@.dfs.core.windows.net/")
-//Set up an optional callback/feedback function that can receive post-write metrics of the job performed.
+//Set up an optional callback/feedback function that can receive post-write metrics of the job performed.
var errorDuringWrite:Option[Throwable] = None
val callBackFunctionToReceivePostWriteMetrics: (Map[String, Any], Option[Throwable]) => Unit =
(feedback: Map[String, Any], errorState: Option[Throwable]) => {
@@ -231,8 +320,8 @@ val callBackFunctionToReceivePostWriteMetrics: (Map[String, Any], Option[Throwab
errorDuringWrite = errorState
}
-//Configure and submit the request to write to Azure Synapse Dedicated SQL Pool. (Note the default SaveMode is set to ErrorIfExists.)
-//The following sample uses the Azure AD-based authentication approach. See further examples to use SQL Basic Authentication.
+//Configure and submit the request to write to Synapse Dedicated SQL Pool (note - the default SaveMode is ErrorIfExists).
+//The sample below uses the AAD-based authentication approach; see further examples to use SQL basic authentication.
readDF.
write.
//Configure required configurations.
@@ -247,16 +336,16 @@ readDF.
//Optional parameter to receive a callback.
callBackHandle = Some(callBackFunctionToReceivePostWriteMetrics))
-//If the write request has failed, raise an error and fail the cell's execution.
+//If the write request has failed, raise an error and fail the cell's execution.
if(errorDuringWrite.isDefined) throw errorDuringWrite.get
```
#### Write using Basic Authentication
-The following code snippet replaces the write definition described in the [Write using Azure AD-based authentication](#write-using-azure-ad-based-authentication) section. To submit a write request by using the SQL Basic Authentication approach:
+The following code snippet replaces the write definition described in the [Write using Azure AD-based authentication](#write-using-azure-ad-based-authentication) section, to submit a write request by using the SQL basic authentication approach:
```Scala
-//Define write options to use SQL Basic Authentication
+//Define write options to use SQL basic authentication
val writeOptionsWithBasicAuth:Map[String, String] = Map(Constants.SERVER -> ".sql.azuresynapse.net",
//Set database user name
Constants.USER -> "",
@@ -267,10 +356,10 @@ val writeOptionsWithBasicAuth:Map[String, String] = Map(Constants.SERVER -> " "abfss://@.dfs.core.windows.net/")
-//Configure and submit the request to write to Azure Synapse Dedicated SQL Pool.
+//Configure and submit the request to write to Synapse Dedicated SQL Pool.
readDF.
write.
- options(writeOptions).
+ options(writeOptionsWithBasicAuth).
//Choose a save mode that is apt for your use case.
mode(SaveMode.Overwrite).
synapsesql(tableName = "..",
@@ -282,7 +371,7 @@ readDF.
callBackHandle = Some(callBackFunctionToReceivePostWriteMetrics))
```
-In a Basic Authentication approach, in order to read data from a source storage path, other configuration options are required. The following code snippet provides an example to read from an Azure Data Lake Storage Gen2 data source by using Service Principal credentials:
+In a basic authentication approach, other configuration options are required in order to read data from a source storage path. The following code snippet provides an example of reading from an Azure Data Lake Storage Gen2 data source by using service principal credentials:
```Scala
//Specify options that Spark runtime must support when interfacing and consuming source data
@@ -304,7 +393,7 @@ val dfReadOptions:Map[String, String]=Map("header"->"true",
"fs.abfss.impl" -> "org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem")
//Initialize the Storage Path string, where source data is maintained/kept.
val pathToInputSource=s"abfss://$storageContainerName@$storageAccountName.dfs.core.windows.net//"
-//Define the data frame to interface with the data source.
+//Define data frame to interface with the data source
val df:DataFrame = spark.
read.
options(dfReadOptions).
@@ -312,33 +401,31 @@ val df:DataFrame = spark.
limit(100)
```
-#### DataFrame write SaveMode support
+#### Supported DataFrame save modes
-The following SaveModes are supported when writing source data to a destination table in Azure Synapse Dedicated SQL Pool:
+The following save modes are supported when writing source data to a destination table in Azure Synapse Dedicated SQL Pool (a short append-mode sketch follows this list):
* ErrorIfExists (default save mode)
- * If a destination table exists, the write is aborted with an exception returned to the callee. Else, a new table is created with data from the staging folders.
+ * If the destination table exists, the write is aborted with an exception returned to the callee. Otherwise, a new table is created with data from the staging folders.
* Ignore
- * If the destination table exists, the write ignores the write request without returning an error. Else, a new table is created with data from the staging folders.
+ * If the destination table exists, the write request is ignored without returning an error. Otherwise, a new table is created with data from the staging folders.
* Overwrite
- * If the destination table exists, the existing data in the destination is replaced with data from the staging folders. Else, a new table is created with data from the staging folders.
+ * If the destination table exists, the existing data in the destination is replaced with data from the staging folders. Otherwise, a new table is created with data from the staging folders.
* Append
- * If the destination table exists, the new data is appended to it. Else, a new table is created with data from the staging folders.
+ * If the destination table exists, the new data is appended to it. Otherwise, a new table is created with data from the staging folders.
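+
+For instance, a minimal sketch (reusing `readDF` and `writeOptionsWithAADAuth` from the write template above; the three-part table name is a placeholder) that switches the earlier write from overwrite to append only needs a different save mode:
+
+```Scala
+//A minimal sketch - only the save mode differs from the earlier write template.
+readDF.
+  write.
+  options(writeOptionsWithAADAuth).
+  //Append to the destination table if it exists; otherwise a new table is created.
+  mode(SaveMode.Append).
+  synapsesql(tableName = "<database_name>.<schema_name>.<table_name>",
+             tableType = Constants.INTERNAL)
+```
+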
#### Write request callback handle
-The new write path API changes introduced an experimental feature to provide the client with a key-value map of post-write metrics. These metrics provide information like the number of records staged to the number of records written to a SQL table. They can also include information on the time spent in staging and executing the SQL statements to write data to Azure Synapse Dedicated SQL Pool.
-
-The new write path API changes introduced an experimental feature to provide the client with a key-value map of post-write metrics. Keys for the metrics are defined in the new object definition `Constants.FeedbackConstants`. Metrics can be retrieved as a JSON string by passing in the callback handle, for example, `Scala Function`. The following snippet is the function signature:
+The new write path API changes introduced an experimental feature to provide the client with a key-value map of post-write metrics. Keys for the metrics are defined in the new object definition `Constants.FeedbackConstants`. Metrics can be retrieved as a JSON string by passing in the callback handle (a `Scala Function`). The following is the function signature:
```Scala
-//Function signature is expected to have two arguments, a `scala.collection.immutable.Map[String, Any]` and an Option[Throwable].
+//Function signature is expected to have two arguments - a `scala.collection.immutable.Map[String, Any]` and an Option[Throwable]
//Post-write if there's a reference of this handle passed to the `synapsesql` signature, it will be invoked by the closing process.
-//These arguments will have valid objects in either a Success or Failure case. In the case of Failure, the second argument will be a `Some(Throwable)`.
+//These arguments will have valid objects in either the success or failure case. In case of failure, the second argument will be a `Some(Throwable)`.
(Map[String, Any], Option[Throwable]) => Unit
```
-The following notable metrics are presented with internal capitalization:
+The following are some notable metrics:
* `WriteFailureCause`
* `DataStagingSparkJobDurationInMilliseconds`
@@ -346,7 +433,7 @@ The following notable metrics are presented with internal capitalization:
* `SQLStatementExecutionDurationInMilliseconds`
* `rows_processed`
-The following snippet is a sample JSON string with post-write metrics:
+The following is a sample JSON string with post-write metrics (a short sketch of consuming selected keys follows the sample):
```doc
{
@@ -378,86 +465,11 @@ The following snippet is a sample JSON string with post-write metrics:
}
```
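+
+As a small sketch of consuming these metrics (assuming the keys shown above are present in the feedback map; everything else below is a placeholder), a callback handle might log a couple of values and surface any failure:
+
+```Scala
+//A minimal sketch of a callback handle that surfaces selected post-write metrics.
+var writeError: Option[Throwable] = None
+
+val logSelectedMetrics: (Map[String, Any], Option[Throwable]) => Unit =
+  (feedback: Map[String, Any], errorState: Option[Throwable]) => {
+    //Look up metrics by the keys listed above; missing keys simply print as None.
+    Seq("rows_processed", "SQLStatementExecutionDurationInMilliseconds").foreach { key =>
+      println(key + " = " + feedback.get(key))
+    }
+    writeError = errorState
+  }
+
+//Pass the handle on the write request, for example:
+//  synapsesql(tableName = "<database_name>.<schema_name>.<table_name>", callBackHandle = Some(logSelectedMetrics))
+```
+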
-### Read from Azure Synapse Dedicated SQL Pool
-
-The following sections relate to a read scenario.
-
-#### Read request: synapsesql method signature
-
-```Scala
-synapsesql(tableName:String) => org.apache.spark.sql.DataFrame
-```
-
-#### Read using Azure AD-based authentication
-
-```Scala
-//Use case is to read data from an internal table in an Azure Synapse Dedicated SQL Pool database.
-//Azure Active Directory-based authentication approach is preferred here.
-import org.apache.spark.sql.DataFrame
-import com.microsoft.spark.sqlanalytics.utils.Constants
-import org.apache.spark.sql.SqlAnalyticsConnector._
-
-//Read from the existing internal table.
-val dfToReadFromTable:DataFrame = spark.read.
- //If `Constants.SERVER` is not provided, the `` from the three-part table name argument
- //to `synapsesql` method is used to infer the Synapse Dedicated SQL endpoint.
- option(Constants.SERVER, ".sql.azuresynapse.net").
- //Defaults to storage path defined in the runtime configurations (See section on Configuration Options above).
- option(Constants.TEMP_FOLDER, "abfss://@.dfs.core.windows.net/").
- //Three-part table name from where data will be read.
- synapsesql("..").
- //Column-pruning i.e., query select column values.
- select("", "", "").
- //Push-down filter criteria that gets translated to SQL Push-down Predicates.
- filter(col("Title").startsWith("E")).
- //Fetch a sample of 10 records
- limit(10)
-
-//Show contents of the DataFrame.
-dfToReadFromTable.show()
-```
-
-#### Read using Basic Authentication
-
-```Scala
-//Use case is to read data from an internal table in an Azure Synapse Dedicated SQL Pool database.
-//Azure Active Directory-based authentication approach is preferred here.
-import org.apache.spark.sql.DataFrame
-import com.microsoft.spark.sqlanalytics.utils.Constants
-import org.apache.spark.sql.SqlAnalyticsConnector._
-
-//Read from an existing internal table.
-val dfToReadFromTable:DataFrame = spark.read.
- //If `Constants.SERVER` is not provided, the `` from the three-part table name argument
- //to `synapsesql` method is used to infer the Synapse Dedicated SQL endpoint.
- option(Constants.SERVER, ".sql.azuresynapse.net").
- //Set database user name.
- option(Constants.USER, "").
- //Set user's password to the database.
- option(Constants.PASSWORD, "").
- //Set name of the data source definition that is defined with database-scoped credentials.
- //Data extracted from the SQL query will be staged to the storage path defined on the data source's location setting.
- option(Constants.DATA_SOURCE, "").
- //Three-part table name from where data will be read.
- synapsesql("..").
- //Column pruning, for example, query select column values.
- select("", "", "").
- //Push-down filter criteria that gets translated to SQL push-down predicates.
- filter(col("Title").startsWith("E")).
- //Fetch a sample of 10 records.
- limit(10)
-
-//Show contents of the DataFrame.
-dfToReadFromTable.show()
-```
-
### More code samples
-This section includes some other code samples.
+#### Using the connector with other language preferences
-#### Use the connector with other language preferences
-
-This example demonstrates how to use the connector with the `PySpark (Python)` language preference:
+The following example demonstrates how to use the connector with the `PySpark (Python)` language preference:
```Python
%%spark
@@ -466,42 +478,42 @@ import org.apache.spark.sql.DataFrame
import com.microsoft.spark.sqlanalytics.utils.Constants
import org.apache.spark.sql.SqlAnalyticsConnector._
-//Code to write or read goes here (refer to the aforementioned code templates).
+//Code to write or read goes here (refer to the aforementioned code templates)
```
-#### Use materialized data across cells
+#### Using materialized data across cells
+
+A Spark DataFrame's `createOrReplaceTempView` method can be used to access data fetched in another cell by registering a temporary view.
-Spark DataFrame's `createOrReplaceTempView` can be used to access data fetched in another cell by registering a temporary view.
+* Cell where data is fetched (say, with the notebook language preference set to `Scala`):
-* Cell where data is fetched (say with notebook language preference as `Scala`):
+```Scala
+ //Necessary imports
+ import org.apache.spark.sql.DataFrame
+ import org.apache.spark.sql.SaveMode
+ import com.microsoft.spark.sqlanalytics.utils.Constants
+ import org.apache.spark.sql.SqlAnalyticsConnector._
- ```Scala
- //Necessary imports
- import org.apache.spark.sql.DataFrame
- import org.apache.spark.sql.SaveMode
- import com.microsoft.spark.sqlanalytics.utils.Constants
- import org.apache.spark.sql.SqlAnalyticsConnector._
-
- //Configure options and read from Azure Synapse Dedicated SQL Pool.
- val readDF = spark.read.
- //Set Synapse Dedicated SQL endpoint name.
- option(Constants.SERVER, ".sql.azuresynapse.net").
- //Set database user name.
- option(Constants.USER, "").
- //Set database user's password.
- option(Constants.PASSWORD, "").
- //Set name of the data source definition that is defined with database scoped credentials.
- option(Constants.DATA_SOURCE,"").
- //Set the three-part table name from which the read must be performed.
- synapsesql("..").
- //Optional - specify number of records the DataFrame would read.
- limit(10)
- //Register the temporary view (scope - current active Spark Session)
- readDF.createOrReplaceTempView("")
- ```
+ //Configure options and read from Synapse Dedicated SQL Pool.
+ val readDF = spark.read.
+ //Set Synapse Dedicated SQL End Point name.
+ option(Constants.SERVER, ".sql.azuresynapse.net").
+ //Set database user name.
+ option(Constants.USER, "").
+ //Set database user's password.
+ option(Constants.PASSWORD, "").
+ //Set name of the data source definition that is defined with database scoped credentials.
+ option(Constants.DATA_SOURCE,"").
+ //Set the three-part table name from which the read must be performed.
+ synapsesql("..").
+ //Optional - specify number of records the DataFrame would read.
+ limit(10)
+ //Register the temporary view (scope - current active Spark Session)
+ readDF.createOrReplaceTempView("")
+```
-* Now, change the language preference on the notebook to `PySpark (Python)` and fetch data from the registered view ``:
+* Now, change the language preference on the notebook to `PySpark (Python)` and fetch data from the registered view ``:
```Python
spark.sql("select * from ").show()
@@ -509,34 +521,32 @@ Spark DataFrame's `createOrReplaceTempView` can be used to access data fetched i
## Response handling
-Invoking `synapsesql` has two possible end states that are either success or fail. This section describes how to handle the request response for each scenario.
+Invoking `synapsesql` has two possible end states - success or failure. This section describes how to handle the request response for each scenario.
### Read request response
-Upon completion, the read response snippet is displayed in the cell's output. Failure in the current cell also cancels subsequent cell executions. Detailed error information is available in the Spark application logs.
+Upon completion, the read response snippet is displayed in the cell's output. Failure in the current cell will also cancel subsequent cell executions. Detailed error information is available in the Spark application logs.
### Write request response
-By default, a write response is printed to the cell output. On failure, the current cell is marked as failed, and subsequent cell executions are aborted. The other approach is to pass the [callback handle](#write-request-callback-handle) option to the `synapsesql` method. The callback handle provides programmatic access to the write response.
-
-## Things to note
+By default, a write response is printed to the cell output. On failure, the current cell is marked as failed, and subsequent cell executions will be aborted. The other approach is to pass the [callback handle](#write-request-callback-handle) option to the `synapsesql` method. The callback handle will provide programmatic access to the write response.
-Consider the following points on read and write performance:
+## Other considerations
-* When writing to Azure Synapse Dedicated SQL Pool tables:
+* When reading from Azure Synapse Dedicated SQL Pool tables:
+ * Consider applying necessary filters on the DataFrame to take advantage of the connector's column-pruning feature.
+ * The read scenario doesn't support the `TOP(n-rows)` clause when framing the `SELECT` query statements. The way to limit data is to use the DataFrame's `limit(.)` clause.
+ * Refer to the example in the [Using materialized data across cells](#using-materialized-data-across-cells) section.
+* When writing to Azure Synapse Dedicated SQL Pool tables:
* For internal table types:
* Tables are created with ROUND_ROBIN data distribution.
- * Column types are inferred from the DataFrame that would read data from the source. String columns are mapped to `NVARCHAR(4000)`.
+ * Column types are inferred from the DataFrame that would read data from the source. String columns are mapped to `NVARCHAR(4000)`.
* For external table types:
- * The DataFrame's initial parallelism drives the data organization for the external table.
- * Column types are inferred from the DataFrame that would read data from the source.
+ * The DataFrame's initial parallelism drives the data organization for the external table.
+ * Column types are inferred from the DataFrame that would read data from the source.
* Better data distribution across executors can be achieved by tuning the `spark.sql.files.maxPartitionBytes` and the DataFrame's `repartition` parameter.
- * When writing large datasets, factor in the impact of [DWU Performance Level](../../synapse-analytics/sql-data-warehouse/quickstart-scale-compute-portal.md) setting that limits [transaction size](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-transactions.md#transaction-size).
-* When reading from Azure Synapse Dedicated SQL Pool tables:
- * Consider applying necessary filters on the DataFrame to take advantage of the connector's column-pruning feature.
- * The read scenario doesn't support the `TOP(n-rows)` clause when framing the `SELECT` query statements. The choice to limit data is to use the DataFrame's limit(.) clause.
- * Refer to the example [Use materialized data across cells](#use-materialized-data-across-cells) section.
-* Monitor [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-best-practices.md) utilization trends to spot throttling behaviors that can [affect](../../storage/common/scalability-targets-standard-account.md) read and write performance.
+ * When writing large data sets, it's important to factor in the impact of the [DWU Performance Level](../../synapse-analytics/sql-data-warehouse/quickstart-scale-compute-portal.md) setting, which limits [transaction size](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-transactions.md#transaction-size).
+* Monitor [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-best-practices.md) utilization trends to spot throttling behaviors that can [impact](../../storage/common/scalability-targets-standard-account.md) read and write performance.
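+
+Building on the partition-tuning guidance above, a minimal sketch (reusing `readDF` and `writeOptionsWithAADAuth` from the earlier write template; the partition size and partition count below are illustrative values only, not recommendations) could look like the following:
+
+```Scala
+//A minimal sketch of tuning partitioning before a write - illustrative values only.
+//Cap the bytes packed into a single input partition (128 MB here).
+spark.conf.set("spark.sql.files.maxPartitionBytes", "134217728")
+
+//Repartition the DataFrame so the write work is spread across more executors.
+val tunedDF = readDF.repartition(8)
+
+//tunedDF can then be written with the earlier write template, for example:
+//  tunedDF.write.options(writeOptionsWithAADAuth).mode(SaveMode.Overwrite).synapsesql(tableName = "<database_name>.<schema_name>.<table_name>")
+```
+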
## References
diff --git a/articles/synapse-analytics/whats-new.md b/articles/synapse-analytics/whats-new.md
index e40516e5fba76..20bccc89ff4c6 100644
--- a/articles/synapse-analytics/whats-new.md
+++ b/articles/synapse-analytics/whats-new.md
@@ -39,7 +39,7 @@ The following updates are new to Azure Synapse Analytics this month.
* Synapse Spark Dedicated SQL Pool (DW) Connector now supports improved performance. The new architecture eliminates redundant data movement and uses COPY-INTO instead of PolyBase. You can authenticate through SQL basic authentication or opt into the Azure Active Directory/Azure AD based authentication method. It now has ~5x improvements over the previous version. To learn more, see [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](./spark/synapse-spark-sql-pool-import-export.md)
-* Synapse Spark Dedicated SQL Pool (DW) Connector now supports all Spark Dataframe SaveMode choices. It supports Append, Overwrite, ErrorIfExists, and Ignore modes. The Append and Overwrite are critical for managing data ingestion at scale. To learn more, see [DataFrame write SaveMode support](./spark/synapse-spark-sql-pool-import-export.md#dataframe-write-savemode-support)
+* Synapse Spark Dedicated SQL Pool (DW) Connector now supports all Spark Dataframe SaveMode choices. It supports Append, Overwrite, ErrorIfExists, and Ignore modes. The Append and Overwrite are critical for managing data ingestion at scale. To learn more, see [DataFrame write SaveMode support](./spark/synapse-spark-sql-pool-import-export.md#supported-dataframe-save-modes)
* Accelerate Spark execution speed using the new Intelligent Cache feature. This feature is currently in public preview. Intelligent Cache automatically stores each read within the allocated cache storage space, detecting underlying file changes and refreshing the files to provide the most recent data. To learn more, see how to [Enable/Disable the cache for your Apache Spark pool](./spark/apache-spark-intelligent-cache-concept.md) or see the [blog post](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-march-update-2022/ba-p/3269194#TOCREF_12)
diff --git a/articles/virtual-network/virtual-networks-udr-overview.md b/articles/virtual-network/virtual-networks-udr-overview.md
index b679d970a5ce1..1ba11c295106d 100644
--- a/articles/virtual-network/virtual-networks-udr-overview.md
+++ b/articles/virtual-network/virtual-networks-udr-overview.md
@@ -32,6 +32,7 @@ Each route contains an address prefix and next hop type. When traffic leaving a
|Default|Unique to the virtual network |Virtual network|
|Default|0.0.0.0/0 |Internet |
|Default|10.0.0.0/8 |None |
+|Default|172.16.0.0/12 |None |
|Default|192.168.0.0/16 |None |
|Default|100.64.0.0/10 |None |
@@ -41,7 +42,7 @@ The next hop types listed in the previous table represent how Azure routes traff
* **Internet**: Routes traffic specified by the address prefix to the Internet. The system default route specifies the 0.0.0.0/0 address prefix. If you don't override Azure's default routes, Azure routes traffic for any address not specified by an address range within a virtual network, to the Internet, with one exception. If the destination address is for one of Azure's services, Azure routes the traffic directly to the service over Azure's backbone network, rather than routing the traffic to the Internet. Traffic between Azure services does not traverse the Internet, regardless of which Azure region the virtual network exists in, or which Azure region an instance of the Azure service is deployed in. You can override Azure's default system route for the 0.0.0.0/0 address prefix with a [custom route](#custom-routes).
* **None**: Traffic routed to the **None** next hop type is dropped, rather than routed outside the subnet. Azure automatically creates default routes for the following address prefixes:
- * **10.0.0.0/8 and 192.168.0.0/16**: Reserved for private use in RFC 1918.
+ * **10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16**: Reserved for private use in RFC 1918.
* **100.64.0.0/10**: Reserved in RFC 6598.
If you assign any of the previous address ranges within the address space of a virtual network, Azure automatically changes the next hop type for the route from **None** to **Virtual network**. If you assign an address range to the address space of a virtual network that includes, but isn't the same as, one of the four reserved address prefixes, Azure removes the route for the prefix and adds a route for the address prefix you added, with **Virtual network** as the next hop type.
diff --git a/docfx.json b/docfx.json
index 84d0c23d141d6..31e67beefcc54 100644
--- a/docfx.json
+++ b/docfx.json
@@ -288,7 +288,7 @@
"articles/cognitive-services/Translator/**/*.md": "laujan",
"articles/connectors/*.md": "ecfan",
"articles/container-instances/**/*.md": "macolso",
- "articles/container-registry/**/*.md": "dlepow",
+ "articles/container-registry/**/*.md": "tejaswikolli",
"articles/data-catalog/*.md": "ChandraKavya",
"articles/data-lake-analytics/*.md": "xujxu",
"articles/defender-for-cloud/*.md": "elazark",
@@ -979,25 +979,25 @@
"articles/virtual-machine-scale-sets/**/*.yml": ["Azure","Virtual Machine Scale Sets"]
},
"titleSuffix": {
- "articles/cognitive-services/LUIS/*.md": "Azure",
- "articles/cognitive-services/LUIS/*.yml": "Azure",
- "articles/advisor/**/*.md": "Azure Advisor",
"articles/advisor/*.yml": "Azure Advisor",
+ "articles/advisor/**/*.md": "Azure Advisor",
"articles/aks/**/*.md": "Azure Kubernetes Service",
"articles/ansible/**/*.yml": "Ansible",
+ "articles/app-service-mobile/**/*.md": "Azure Mobile Apps",
+ "articles/app-service-mobile/**/*.yml": "Azure Mobile Apps",
"articles/app-service/*.md": "Azure App Service",
"articles/app-service/*.yml": "Azure App Service",
"articles/app-service/environment/*.md": "Azure App Service Environment",
"articles/app-service/environment/*.yml": "Azure App Service Environment",
"articles/app-service/scripts/*.md": "Azure App Service",
"articles/app-service/scripts/*.yml": "Azure App Service",
- "articles/app-service-mobile/**/*.md": "Azure Mobile Apps",
- "articles/app-service-mobile/**/*.yml": "Azure Mobile Apps",
"articles/asc-for-iot/**/*.md": "Azure Security Center for IoT",
"articles/asc-for-iot/**/*.yml": "Azure Security Center for IoT",
+ "articles/active-directory/develop/**/*.md": "Microsoft identity platform",
+ "articles/active-directory/develop/**/*.yml": "Microsoft identity platform",
+ "articles/azure-arc/**/*.md": "Azure Arc",
"articles/azure-fluid-relay/**/*.md": "Azure Fluid Relay",
"articles/azure-fluid-relay/**/*.yml": "Azure Fluid Relay",
- "articles/azure-arc/**/*.md": "Azure Arc",
"articles/azure-government/**/*.md": "Azure Government",
"articles/azure-monitor/**/*.md" : "Azure Monitor",
"articles/azure-monitor/**/*.yml" : "Azure Monitor",
@@ -1017,6 +1017,10 @@
"articles/azure-sql/managed-instance/**/*.yml": "Azure SQL Managed Instance",
"articles/azure-sql/virtual-machines/**/*.md": "SQL Server on Azure VMs",
"articles/azure-sql/virtual-machines/**/*.yml": "SQL Server on Azure VMs",
+ "articles/azure-video-analyzer/video-analyzer-docs/*.md": "Azure Video Analyzer",
+ "articles/azure-video-analyzer/video-analyzer-docs/*.yml": "Azure Video Analyzer",
+ "articles/azure-video-analyzer/video-analyzer-media-docs/*.md": "Azure Video Analyzer for Media",
+ "articles/azure-video-analyzer/video-analyzer-media-docs/*.yml": "Azure Video Analyzer for Media",
"articles/azure-vmware/**/*.md" : "Azure VMware Solution",
"articles/azure-vmware/**/*.yml" : "Azure VMware Solution",
"articles/backup/**/*.md": "Azure Backup",
@@ -1028,15 +1032,17 @@
"articles/blockchain/**/*.md": "Azure Blockchain",
"articles/chef/**/*.yml": "Chef",
"articles/cognitive-services/**/*.md": "Azure Cognitive Services",
+ "articles/cognitive-services/LUIS/*.md": "Azure",
+ "articles/cognitive-services/LUIS/*.yml": "Azure",
"articles/connectors/*.md": "Azure Logic Apps",
"articles/container-instances/**/*.md": "Azure Container Instances",
"articles/container-registry/**/*.md": "Azure Container Registry",
"articles/data-factory/**/*.md": "Azure Data Factory",
"articles/data-factory/**/*.yml": "Azure Data Factory",
"articles/defender-for-iot/*.md": "Microsoft Defender for IoT",
+ "articles/defender-for-iot/device-builders/*.md": "Microsoft Defender for IoT",
"articles/defender-for-iot/organizations/*.md": "Microsoft Defender for IoT",
"articles/defender-for-iot/organizations/**/*.md": "Microsoft Defender for IoT",
- "articles/defender-for-iot/device-builders/*.md": "Microsoft Defender for IoT",
"articles/dev-spaces/**/*.md": "Azure Dev Spaces",
"articles/devtest-labs/**/*.md": "Azure DevTest Labs",
"articles/event-grid/**/*.md": "Azure Event Grid",
@@ -1058,10 +1064,10 @@
"articles/media-services/live-video-analytics-edge/*.md": "Live Video Analytics on IoT Edge",
"articles/media-services/video-indexer/*.md": "Azure Media Services Video Indexer",
"articles/migrate/**/*.md": "Azure Migrate",
- "articles/open-datasets/**/*.md": "Azure Open Datasets",
- "articles/open-datasets/**/*.yml": "Azure Open Datasets",
"articles/object-anchors/**/*.md": "Azure Object Anchors",
"articles/object-anchors/**/*.yml": "Azure Object Anchors",
+ "articles/open-datasets/**/*.md": "Azure Open Datasets",
+ "articles/open-datasets/**/*.yml": "Azure Open Datasets",
"articles/purview/**/*.md": "Microsoft Purview",
"articles/purview/**/*.yml": "Microsoft Purview",
"articles/remote-rendering/**/*.md": "Azure Remote Rendering",
@@ -1069,8 +1075,8 @@
"articles/scheduler/*.md": "Azure Scheduler",
"articles/service-bus-messaging/*.md": "Azure Service Bus",
"articles/service-fabric/*.md": "Azure Service Fabric",
- "articles/service-health/**/*.md": "Azure Service Health",
"articles/service-health/*.yml": "Azure Service Health",
+ "articles/service-health/**/*.md": "Azure Service Health",
"articles/site-recovery/**/*.md": "Azure Site Recovery",
"articles/spatial-anchors/**/*.md": "Azure Spatial Anchors",
"articles/spatial-anchors/**/*.yml": "Azure Spatial Anchors",
@@ -1079,18 +1085,14 @@
"articles/synapse-analytics/**/*.md": "Azure Synapse Analytics",
"articles/synapse-analytics/**/*.yml": "Azure Synapse Analytics",
"articles/terraform/**/*.yml": "Terraform",
+ "articles/virtual-machine-scale-sets/*.md": "Azure Virtual Machine Scale Sets",
+ "articles/virtual-machine-scale-sets/*.yml": "Azure Virtual Machine Scale Sets",
"articles/virtual-machines/**/*.md": "Azure Virtual Machines",
"articles/virtual-machines/**/*.yml": "Azure Virtual Machines",
"articles/virtual-machines/windows/sql/**/*.md": "SQL Server on Azure VM",
"articles/virtual-machines/windows/sql/**/*.yml": "SQL Server on Azure VM",
"articles/virtual-machines/workloads/**/*.md": "Azure Virtual Machines",
- "articles/virtual-machines/workloads/**/*.yml": "Azure Virtual Machines",
- "articles/virtual-machine-scale-sets/*.md": "Azure Virtual Machine Scale Sets",
- "articles/virtual-machine-scale-sets/*.yml": "Azure Virtual Machine Scale Sets",
- "articles/azure-video-analyzer/video-analyzer-docs/*.md": "Azure Video Analyzer",
- "articles/azure-video-analyzer/video-analyzer-docs/*.yml": "Azure Video Analyzer",
- "articles/azure-video-indexer/*.yml": "Azure Video Indexer",
- "articles/azure-video-indexer/*.md": "Azure Video Indexer"
+ "articles/virtual-machines/workloads/**/*.yml": "Azure Virtual Machines"
}
},
"template": [