diff --git a/articles/active-directory-b2c/custom-domain.md b/articles/active-directory-b2c/custom-domain.md index 9a080acd6a23c..59ea2491bff04 100644 --- a/articles/active-directory-b2c/custom-domain.md +++ b/articles/active-directory-b2c/custom-domain.md @@ -9,7 +9,7 @@ manager: CelesteDG ms.service: active-directory ms.workload: identity ms.topic: how-to -ms.date: 11/23/2021 +ms.date: 05/13/2022 ms.author: kengaderdus ms.subservice: B2C ms.custom: "b2c-support" @@ -20,7 +20,7 @@ zone_pivot_groups: b2c-policy-type [!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)] -This article describes how to enable custom domains in your redirect URLs for Azure Active Directory B2C (Azure AD B2C). Using a custom domain with your application provides a more seamless user experience. From the user's perspective, they remain in your domain during the sign-in process rather than redirecting to the Azure AD B2C default domain *<tenant-name>.b2clogin.com*. +This article describes how to enable custom domains in your redirect URLs for Azure Active Directory B2C (Azure AD B2C). Using a custom domain with your application provides a more seamless user experience. From the user's perspective, they remain in your domain during the sign in process rather than redirecting to the Azure AD B2C default domain *<tenant-name>.b2clogin.com*. ![Screenshot demonstrates an Azure AD B2C custom domain user experience.](./media/custom-domain/custom-domain-user-experience.png) @@ -34,7 +34,7 @@ Watch this video to learn about Azure AD B2C custom domain. The following diagram illustrates Azure Front Door integration: -1. From an application, a user selects the sign-in button, which takes them to the Azure AD B2C sign-in page. This page specifies a custom domain name. +1. From an application, a user selects the sign in button, which takes them to the Azure AD B2C sign in page. This page specifies a custom domain name. 1. The web browser resolves the custom domain name to the Azure Front Door IP address. During DNS resolution, a canonical name (CNAME) record with a custom domain name points to your Front Door default front-end host (for example, `contoso-frontend.azurefd.net`). 1. The traffic addressed to the custom domain (for example, `login.contoso.com`) is routed to the specified Front Door default front-end host (`contoso-frontend.azurefd.net`). 1. Azure Front Door invokes Azure AD B2C content using the Azure AD B2C `.b2clogin.com` default domain. The request to the Azure AD B2C endpoint includes the original custom domain name. @@ -49,8 +49,7 @@ When using custom domains, consider the following: - You can set up multiple custom domains. For the maximum number of supported custom domains, see [Azure AD service limits and restrictions](../active-directory/enterprise-users/directory-service-limits-restrictions.md) for Azure AD B2C and [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-front-door-classic-limits) for Azure Front Door. - Azure Front Door is a separate Azure service, so extra charges will be incurred. For more information, see [Front Door pricing](https://azure.microsoft.com/pricing/details/frontdoor). -- To use Azure Front Door [Web Application Firewall](../web-application-firewall/afds/afds-overview.md), you need to confirm your firewall configuration and rules work correctly with your Azure AD B2C user flows. 
-- After you configure custom domains, users will still be able to access the Azure AD B2C default domain name *<tenant-name>.b2clogin.com* (unless you're using a custom policy and you [block access](#block-access-to-the-default-domain-name). +- After you configure custom domains, users will still be able to access the Azure AD B2C default domain name *<tenant-name>.b2clogin.com* (unless you're using a custom policy and you [block access](#optional-block-access-to-the-default-domain-name). - If you have multiple applications, migrate them all to the custom domain because the browser stores the Azure AD B2C session under the domain name currently being used. ## Prerequisites @@ -78,7 +77,7 @@ Follow these steps to add a custom domain to your Azure AD B2C tenant: |login | TXT | MS=ms12345678 | |account | TXT | MS=ms87654321 | - The TXT record must be associated with the subdomain, or hostname of the domain. For example, the *login* part of the *contoso.com* domain. If the hostname is empty or `@`, Azure AD will not be able to verify the custom domain you added. In the following examples, both records are configured incorrectly. + The TXT record must be associated with the subdomain, or hostname of the domain. For example, the *login* part of the *contoso.com* domain. If the hostname is empty or `@`, Azure AD won't be able to verify the custom domain you added. In the following examples, both records are configured incorrectly. |Name (hostname) |Type |Data | |---------|---------|---------| @@ -88,7 +87,7 @@ Follow these steps to add a custom domain to your Azure AD B2C tenant: > [!TIP] > You can manage your custom domain with any publicly available DNS service, such as GoDaddy. If you don't have a DNS server, you can use [Azure DNS zone](../dns/dns-getstarted-portal.md), or [App Service domains](../app-service/manage-custom-dns-buy-domain.md). -1. [Verify your custom domain name](../active-directory/fundamentals/add-custom-domain.md#verify-your-custom-domain-name). Verify each subdomain, or hostname you plan to use. For example, to be able to sign-in with *login.contoso.com* and *account.contoso.com*, you need to verify both subdomains and not the top-level domain *contoso.com*. +1. [Verify your custom domain name](../active-directory/fundamentals/add-custom-domain.md#verify-your-custom-domain-name). Verify each subdomain, or hostname you plan to use. For example, to be able to sign in with *login.contoso.com* and *account.contoso.com*, you need to verify both subdomains and not just the top-level domain *contoso.com*. > [!IMPORTANT] > After the domain is verified, **delete** the DNS TXT record you created. @@ -96,87 +95,44 @@ Follow these steps to add a custom domain to your Azure AD B2C tenant: ## Step 2. Create a new Azure Front Door instance -Follow these steps to create a Front Door for your Azure AD B2C tenant. For more information, see [creating a Front Door for your application](../frontdoor/quickstart-create-front-door.md#create-a-front-door-for-your-application). - +Follow these steps to create an Azure Front Door: 1. Sign in to the [Azure portal](https://portal.azure.com). -1. To choose the directory that contains the Azure subscription that you’d like to use for Azure Front Door and *not* the directory containing your Azure AD B2C tenant, select the **Directories + subscriptions** icon in the portal toolbar. -1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD directory in the **Directory name** list, and then select **Switch**. -1. 
From the home page or the Azure menu, select **Create a resource**. Select **Networking** > **See All** > **Front Door**. -1. In the **Basics** tab of **Create a Front Door** page, enter or select the following information, and then select **Next: Configuration**. - - | Setting | Value | - | --- | --- | - | **Subscription** | Select your Azure subscription. | - | **Resource group** | Select an existing resource group, or select **Create new** to create a new one.| - | **Resource group location** | Select the location of the resource group. For example, **Central US**. | - -### 2.1 Add frontend host - -The frontend host is the domain name used by your application. When you create a Front Door, the default frontend host is a subdomain of `azurefd.net`. - -Azure Front Door provides the option of associating a custom domain with the frontend host. With this option, you associate the Azure AD B2C user interface with a custom domain in your URL instead of a Front Door owned domain name. For example, `https://login.contoso.com`. - -To add a frontend host, follow these steps: - -1. In **Frontends/domains**, select **+** to open **Add a frontend host**. -1. For **Host name**, enter a globally unique hostname. The host name is not your custom domain. This example uses *contoso-frontend*. Select **Add**. - - ![Screenshot demonstrates how to add a frontend host.](./media/custom-domain/add-frontend-host-azure-front-door.png) - -### 2.2 Add backend and backend pool - -A backend refers to your [Azure AD B2C tenant name](tenant-management.md#get-your-tenant-name), `tenant-name.b2clogin.com`. To add a backend pool, follow these steps: -1. Still in **Create a Front Door**, in **Backend pools**, select **+** to open **Add a backend pool**. - -1. Enter a **Name**. For example, *myBackendPool*. Select **Add a backend**. +1. To choose the directory that contains the Azure subscription that you’d like to use for Azure Front Door and *not* the directory containing your Azure AD B2C tenant: - The following screenshot demonstrates how to create a backend pool: + 1. Select the **Directories + subscriptions** icon in the portal toolbar. - ![Screenshot demonstrates how to add a frontend backend pool.](./media/custom-domain/front-door-add-backend-pool.png) - -1. In the **Add a backend** blade, select the following information, and then select **Add**. - - | Setting | Value | - | --- | --- | - | **Backend host type**| Select **Custom host**.| - | **Backend host name**| Select the name of your [Azure AD B2C](tenant-management.md#get-your-tenant-name), `.b2clogin.com`. For example, contoso.b2clogin.com.| - | **Backend host header**| Select the same value you selected for **Backend host name**.| + 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD directory in the **Directory name** list, and then select **Switch** button next to the directory. - **Leave all other fields default.* +1. 
Follow the steps in [Create Front Door profile - Quick Create](../frontdoor/create-front-door-portal.md#create-front-door-profile---quick-create) to create a Front Door for your Azure AD B2C tenant using the following settings: + - The following screenshot demonstrates how to create a custom host backend that is associated with an Azure AD B2C tenant: + |Key |Value | + |---------|---------| + |Subscription|Select your Azure subscription.| + |Resource group| Select an existing resource group, or create a new one.| + |Name| Give your profile a name such as `b2cazurefrontdoor`.| + |Tier| Select either Standard or Premium tier. Standard tier is content delivery optimized. Premium tier builds on Standard tier and is focused on security. See [Tier Comparison](../frontdoor/standard-premium/tier-comparison.md).| + |Endpoint name| Enter a globally unique name for your endpoint, such as `b2cazurefrontdoor`. The **Endpoint hostname** is generated automatically. | + |Origin type| Select `Custom`.| + |Origin host name| Enter `.b2clogin.com`. Replace `` with the [name of your Azure AD B2C tenant](tenant-management.md#get-your-tenant-name).| - ![Screenshot demonstrates how to add a custom host backend.](./media/custom-domain/add-a-backend.png) - -1. To complete the configuration of the backend pool, on the **Add a backend pool** blade, select **Add**. + Leave the **Caching** and **WAF policy** empty. -1. After you add the **backend** to the **backend pool**, disable the **Health probes**. - - ![Screenshot demonstrates how to add a backend pool and disable the health probes.](./media/custom-domain/add-a-backend-pool.png) - -### 2.3 Add a routing rule - -Finally, add a routing rule. The routing rule maps your frontend host to the backend pool. The rule forwards a request for the [frontend host](#21-add-frontend-host) to the Azure AD B2C [backend](#22-add-backend-and-backend-pool). To add a routing rule, follow these steps: - -1. In **Add a rule**, for **Name**, enter *LocationRule*. Accept all the default values, then select **Add** to add the routing rule. -1. Select **Review + Create**, and then **Create**. - - ![Screenshot demonstrates how to create Azure Front Door.](./media/custom-domain/configuration-azure-front-door.png) + +1. Once the Azure Front Door resource is created, select **Overview**, and copy the **Endpoint hostname**. It looks something like `b2cazurefrontdoor-ab123e.z01.azurefd.net`. ## Step 3. Set up your custom domain on Azure Front Door -In this step, you add the custom domain you registered in [Step 1](#step-1-add-a-custom-domain-name-to-your-azure-ad-b2c-tenant) to your Front Door. +In this step, you add the custom domain you registered in [Step 1](#step-1-add-a-custom-domain-name-to-your-azure-ad-b2c-tenant) to your Azure Front Door. -### 3.1 Create a CNAME DNS record +### 3.1. Create a CNAME DNS record -Before you can use a custom domain with your Front Door, you must first create a canonical name (CNAME) record with your domain provider to point to your Front Door's default frontend host (say contoso-frontend.azurefd.net). +To add the custom domain, create a canonical name (CNAME) record with your domain provider. A CNAME record is a type of DNS record that maps a source domain name to a destination domain name (alias). For Azure Front Door, the source domain name is your custom domain name, and the destination domain name is your Front Door default hostname that you configured in [Step 2. Create a new Azure Front Door instance](#step-2-create-a-new-azure-front-door-instance). 
For example, `b2cazurefrontdoor-ab123e.z01.azurefd.net`. -A CNAME record is a type of DNS record that maps a source domain name to a destination domain name (alias). For Azure Front Door, the source domain name is your custom domain name, and the destination domain name is your Front Door default hostname you configure in [step 2.1](#21-add-frontend-host). - -After Front Door verifies the CNAME record that you created, traffic addressed to the source custom domain (such as login.contoso.com) is routed to the specified destination Front Door default frontend host, such as `contoso-frontend.azurefd.net`. For more information, see [add a custom domain to your Front Door](../frontdoor/front-door-custom-domain.md). +After Front Door verifies the CNAME record that you created, traffic addressed to the source custom domain (such as `login.contoso.com`) is routed to the specified destination Front Door default frontend host, such as `contoso-frontend.azurefd.net`. For more information, see [add a custom domain to your Front Door](../frontdoor/front-door-custom-domain.md). To create a CNAME record for your custom domain: @@ -194,47 +150,56 @@ To create a CNAME record for your custom domain: - Type: Enter *CNAME*. - - Destination: Enter your default Front Door frontend host you create in [step 2.1](#21-add-frontend-host). It must be in the following format:_<hostname>_.azurefd.net. For example, `contoso-frontend.azurefd.net`. + - Destination: Enter your default Front Door frontend host you create in [step 2](#step-2-create-a-new-azure-front-door-instance). It must be in the following format:_<hostname>_.azurefd.net. For example, `contoso-frontend.azurefd.net`. 1. Save your changes. -### 3.2 Associate the custom domain with your Front Door +### 3.2. Associate the custom domain with your Front Door -After you've registered your custom domain, you can then add it to your Front Door. - -1. On the **Front Door designer** page, under the **Frontends/domains**, select **+** to add a custom domain. +1. In the Azure portal home, search for and select the `myb2cazurefrontdoor` Azure Front Door resource to open it. - ![Screenshot demonstrates how to add a custom domain.](./media/custom-domain/azure-front-door-add-custom-domain.png) - -1. For **Frontend host**, the frontend host to use as the destination domain of your CNAME record is pre-filled and is derived from your Front Door: *<default hostname>*.azurefd.net. It cannot be changed. +1. In the left menu, under **Settings**, select **Domains**. + +1. Select **Add a domain**. + +1. For **DNS management**, select **All other DNS services**. + +1. For **Custom domain**, enter your custom domain, such as `login.contoso.com`. -1. For **Custom hostname**, enter your custom domain, including the subdomain, to use as the source domain of your CNAME record. For example, login.contoso.com. +1. Keep the other values as defaults, and then select **Add**. Your custom domain is added to the list. - ![Screenshot demonstrates how to verify a custom domain.](./media/custom-domain/azure-front-door-add-custom-domain-verification.png) +1. Under **Validation state** of the domain that you just added, select **Pending**. A pane with a TXT record info opens. - Azure verifies that the CNAME record exists for the custom domain name you entered. If the CNAME is correct, your custom domain will be validated. + 1. Sign in to the web site of the domain provider for your custom domain. + 1. 
Find the page for managing DNS records by consulting the provider's documentation or searching for areas of the web site labeled **Domain Name**, **DNS**, or **Name Server Management**. + + 1. Create a new TXT DNS record and complete the fields as shown below: + 1. Name: `_dnsauth.contoso.com`, but you need to enter just `_dnsauth`. + 1. Type: `TXT` + 1. Value: Something like `75abc123t48y2qrtsz2bvk......`. -1. After the custom domain name is verified, under the **Custom domain name HTTPS**, select **Enabled**. + After you add the TXT DNS record, the **Validation state** in the Front Door resource will eventually change from **Pending** to **Approved**. You may need to reload your page for the change to happen. - ![Screenshot shows how to enable HTTPS using an Azure Front Door certificate.](./media/custom-domain/azure-front-door-add-custom-domain-https-settings.png) +1. Go back to your Azure portal. Under **Endpoint association** of the domain that you just added, select **Unassociated**. -1. For the **Certificate management type**, select [Front Door management](../frontdoor/front-door-custom-domain-https.md#option-1-default-use-a-certificate-managed-by-front-door), or [Use my own certificate](../frontdoor/front-door-custom-domain-https.md#option-2-use-your-own-certificate). If you choose the *Front Door managed* option, wait until the certificate is fully provisioned. +1. For **Select endpoint**, select the hostname endpoint from the dropdown. -1. Select **Add**. +1. For **Select routes** list, select **default-route**, and then select **Associate**. -### 3.3 Update the routing rule +### 3.3. Enable the route -1. In the **Routing rules**, select the routing rule you created in [step 2.3](#23-add-a-routing-rule). +The **default-route** routes the traffic from the client to Azure Front Door. Then, Azure Front Door uses your configuration to send the traffic to Azure AD B2C. Follow these steps to enable the default-route. - ![Screenshot demonstrates how to select a routing rule.](./media/custom-domain/select-routing-rule.png) +1. Select **Front Door manager**. +1. To add enable the **default-route**, first expand an endpoint from the list of endpoints in the Front Door manager. Then, select the **default-route**. -1. Under the **Frontends/domains**, select your custom domain name. - - ![Screenshot demonstrates how to update the Azure Front Door routing rule.](./media/custom-domain/update-routing-rule.png) + The following screenshot shows how to select the default-route. + + ![Screenshot of selecting the default route.](./media/custom-domain/enable-the-route.png) -1. Select **Update**. -1. From the main window, select **Save**. +1. Select the **Enable route** checkbox. +1. Select **Update** to save the changes. ## Step 4. Configure CORS @@ -254,17 +219,18 @@ Configure Azure Blob storage for Cross-Origin Resource Sharing with the followin ## Test your custom domain 1. Sign in to the [Azure portal](https://portal.azure.com). -1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar. -1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**. +1. Make sure you're using the directory that contains your Azure AD B2C tenant: + 1. Select the **Directories + subscriptions** icon in the portal toolbar. + 2. 
On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch** button next to it. 1. In the Azure portal, search for and select **Azure AD B2C**. 1. Under **Policies**, select **User flows (policies)**. 1. Select a user flow, and then select **Run user flow**. 1. For **Application**, select the web application named *webapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`. 1. Copy the URL under **Run user flow endpoint**. - ![Screenshot demonstrates how to copy the authorization request URI.](./media/custom-domain/user-flow-run-now.png) + ![Screenshot of how to copy the authorization request U R I.](./media/custom-domain/user-flow-run-now.png) -1. To simulate a sign-in with your custom domain, open a web browser and use the URL you copied. Replace the Azure AD B2C domain (_<tenant-name>_.b2clogin.com) with your custom domain. +1. To simulate a sign in with your custom domain, open a web browser and use the URL you copied. Replace the Azure AD B2C domain (_<tenant-name>_.b2clogin.com) with your custom domain. For example, instead of: @@ -278,18 +244,18 @@ Configure Azure Blob storage for Cross-Origin Resource Sharing with the followin https://login.contoso.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize?p=B2C_1_susi&client_id=63ba0d17-c4ba-47fd-89e9-31b3c2734339&nonce=defaultNonce&redirect_uri=https%3A%2F%2Fjwt.ms&scope=openid&response_type=id_token&prompt=login ``` -1. Verify that the Azure AD B2C is loaded correctly. Then, sign-in with a local account. +1. Verify that the Azure AD B2C is loaded correctly. Then, sign in with a local account. 1. Repeat the test with the rest of your policies. ## Configure your identity provider -When a user chooses to sign in with a social identity provider, Azure AD B2C initiates an authorization request and takes the user to the selected identity provider to complete the sign-in process. The authorization request specifies the `redirect_uri` with the Azure AD B2C default domain name: +When a user chooses to sign in with a social identity provider, Azure AD B2C initiates an authorization request and takes the user to the selected identity provider to complete the sign in process. The authorization request specifies the `redirect_uri` with the Azure AD B2C default domain name: ```http https://.b2clogin.com//oauth2/authresp ``` -If you configured your policy to allow sign-in with an external identity provider, update the OAuth redirect URIs with the custom domain. Most identity providers allow you to register multiple redirect URIs. We recommend adding redirect URIs instead of replacing them so you can test your custom policy without affecting applications that use the Azure AD B2C default domain name. +If you configured your policy to allow sign in with an external identity provider, update the OAuth redirect URIs with the custom domain. Most identity providers allow you to register multiple redirect URIs. We recommend adding redirect URIs instead of replacing them so you can test your custom policy without affecting applications that use the Azure AD B2C default domain name. 
In the following redirect URI: @@ -361,7 +327,7 @@ https:///11111111-1111-1111-1111-111111111111/v2.0/ ::: zone pivot="b2c-custom-policy" -## Block access to the default domain name +## (Optional) Block access to the default domain name After you add the custom domain and configure your application, users will still be able to access the <tenant-name>.b2clogin.com domain. To prevent access, you can configure the policy to check the authorization request "host name" against an allowed list of domains. The host name is the domain name that appears in the URL. The host name is available through `{Context:HostName}` [claim resolvers](claim-resolver-overview.md). Then you can present a custom error message. @@ -371,36 +337,48 @@ After you add the custom domain and configure your application, users will still ::: zone-end + +## (Optional) Azure Front Door advanced configuration + +You can use Azure Front Door advanced configuration, such as [Azure Web Application Firewall (WAF)](partner-azure-web-application-firewall.md). Azure WAF provides centralized protection of your web applications from common exploits and vulnerabilities. + +When using custom domains, consider the following points: + +- The WAF policy must be the same tier as the Azure Front Door profile. For more information about how to create a WAF policy to use with Azure Front Door, see [Configure WAF policy](../frontdoor/how-to-configure-endpoints.md). +- The WAF managed rules feature isn't officially supported as it can cause false positives and prevent legitimate requests from passing through, so only use WAF custom rules if they meet your needs. + ## Troubleshooting ### Azure AD B2C returns a page not found error -- **Symptom** - After you configure a custom domain, when you try to sign in with the custom domain, you get an HTTP 404 error message. +- **Symptom** - You configure a custom domain, but when you try to sign in with the custom domain, you get an HTTP 404 error message. - **Possible causes** - This issue could be related to the DNS configuration or the Azure Front Door backend configuration. - **Resolution**: - 1. Make sure the custom domain is [registered and successfully verified](#step-1-add-a-custom-domain-name-to-your-azure-ad-b2c-tenant) in your Azure AD B2C tenant. - 1. Make sure the [custom domain](../frontdoor/front-door-custom-domain.md) is configured properly. The `CNAME` record for your custom domain must point to your Azure Front Door default frontend host (for example, contoso-frontend.azurefd.net). - 1. Make sure the [Azure Front Door backend pool configuration](#22-add-backend-and-backend-pool) points to the tenant where you set up the custom domain name, and where your user flow or custom policies are stored. + - Make sure the custom domain is [registered and successfully verified](#step-1-add-a-custom-domain-name-to-your-azure-ad-b2c-tenant) in your Azure AD B2C tenant. + - Make sure the [custom domain](../frontdoor/front-door-custom-domain.md) is configured properly. The `CNAME` record for your custom domain must point to your Azure Front Door default frontend host (for example, contoso-frontend.azurefd.net). + +### Our services aren't available right now + +- **Symptom** - You configure a custom domain, but when you try to sign in with the custom domain, you get the following error message: *Our services aren't available right now. We're working to restore all services as soon as possible. 
Please check back soon.* +- **Possible causes** - This issue could be related to the Azure Front Door route configuration. +- **Resolution**: Check the status of the **default-route**. If it's disabled, [Enable the route](#33-enable-the-route). The following screenshot shows how the default-route should look like: + ![Screenshot of the status of the default-route.](./media/custom-domain/azure-front-door-route-status.png) -### Azure AD B2C returns the resource you are looking for has been removed, had its name changed, or is temporarily unavailable. +### Azure AD B2C returns the resource you're looking for has been removed, had its name changed, or is temporarily unavailable. -- **Symptom** - After you configure a custom domain, when you try to sign in with the custom domain, you get *the resource you are looking for has been removed, had its name changed, or is temporarily unavailable* error message. +- **Symptom** - You configure a custom domain, but when you try to sign in with the custom domain, you get *the resource you are looking for has been removed, had its name changed, or is temporarily unavailable* error message. - **Possible causes** - This issue could be related to the Azure AD custom domain verification. - **Resolution**: Make sure the custom domain is [registered and **successfully verified**](#step-1-add-a-custom-domain-name-to-your-azure-ad-b2c-tenant) in your Azure AD B2C tenant. ### Identify provider returns an error - **Symptom** - After you configure a custom domain, you're able to sign in with local accounts. But when you sign in with credentials from external [social or enterprise identity providers](add-identity-provider.md), the identity provider presents an error message. -- **Possible causes** - When Azure AD B2C takes the user to sign in with a federated identity provider, it specifies the redirect URI. The redirect URI is the endpoint to where the identity provider returns the token. The redirect URI is the same domain your application uses with the authorization request. If the redirect URI is not yet registered in the identity provider, it may not trust the new redirect URI, which results in an error message. +- **Possible causes** - When Azure AD B2C takes the user to sign in with a federated identity provider, it specifies the redirect URI. The redirect URI is the endpoint to where the identity provider returns the token. The redirect URI is the same domain your application uses with the authorization request. If the redirect URI isn't yet registered in the identity provider, it may not trust the new redirect URI, which results in an error message. - **Resolution** - Follow the steps in [Configure your identity provider](#configure-your-identity-provider) to add the new redirect URI. ## Frequently asked questions -### Can I use Azure Front Door advanced configuration, such as *Web application firewall Rules*? - -While Azure Front Door advanced configuration settings are not officially supported, you can use them at your own risk. - ### When I use Run Now to try to run my policy, why I can't see the custom domain? Copy the URL, change the domain name manually, and then paste it back to your browser. @@ -409,10 +387,10 @@ Copy the URL, change the domain name manually, and then paste it back to your br Azure Front Door passes the user's original IP address. It's the IP address that you'll see in the audit reporting or your custom policy. -### Can I use a third-party web application firewall (WAF) with B2C? 
- -To use your own web application firewall in front of Azure Front Door, you need to configure and validate that everything works correctly with your Azure AD B2C user flows, or custom policies. +### Can I use a third-party Web Application Firewall (WAF) with B2C? +Yes, Azure AD B2C supports BYO-WAF (Bring Your Own Web Application Firewall). However, you must test WAF to ensure that it doesn't block or alert legitimate requests to Azure AD B2C user flows or custom policies. Learn how to configure [Akamai WAF](partner-akamai.md) and [Cloudflare WAF](partner-cloudflare.md) with Azure AD B2C. + ### Can my Azure Front Door instance be hosted in a different subscription than my Azure AD B2C tenant? Yes, Azure Front Door can be in a different subscription. diff --git a/articles/active-directory-b2c/identity-provider-google.md b/articles/active-directory-b2c/identity-provider-google.md index fd65ace143791..2ffdb9f7a5826 100644 --- a/articles/active-directory-b2c/identity-provider-google.md +++ b/articles/active-directory-b2c/identity-provider-google.md @@ -43,9 +43,10 @@ To enable sign-in for users with a Google account in Azure Active Directory B2C 1. In the upper-left corner of the page, select the project list, and then select **New Project**. 1. Enter a **Project Name**, select **Create**. 1. Make sure you are using the new project by selecting the project drop-down in the top-left of the screen. Select your project by name, then select **Open**. -1. In the left menu, select **OAuth consent screen**, select **External**, and then select **Create**. +1. In the left menu, select **APIs and services** and then **OAuth consent screen**. Select **External** and then select **Create**. 1. Enter a **Name** for your application. 1. Select a **User support email**. + 1. In the **App domain** section, enter a link to your **Application home page**, a link to your **Application privacy policy**, and a link to your **Application terms of service**. 1. In the **Authorized domains** section, enter *b2clogin.com*. 1. In the **Developer contact information** section, enter comma separated emails for Google to notify you about any changes to your project. 1. Select **Save**. 
@@ -199,4 +200,4 @@ If the sign-in process is successful, your browser is redirected to `https://jwt - Check out the Google federation [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#google), and how to pass Google access token [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#google-with-access-token) -::: zone-end \ No newline at end of file +::: zone-end diff --git a/articles/active-directory-b2c/media/custom-domain/azure-front-door-route-status.png b/articles/active-directory-b2c/media/custom-domain/azure-front-door-route-status.png new file mode 100644 index 0000000000000..26af3ab987900 Binary files /dev/null and b/articles/active-directory-b2c/media/custom-domain/azure-front-door-route-status.png differ diff --git a/articles/active-directory-b2c/media/custom-domain/enable-the-route.png b/articles/active-directory-b2c/media/custom-domain/enable-the-route.png new file mode 100644 index 0000000000000..61ee9b0fc0225 Binary files /dev/null and b/articles/active-directory-b2c/media/custom-domain/enable-the-route.png differ diff --git a/articles/active-directory/develop/workload-identities-overview.md b/articles/active-directory/develop/workload-identities-overview.md index 1f11066c84766..d0dc0d1f727c7 100644 --- a/articles/active-directory/develop/workload-identities-overview.md +++ b/articles/active-directory/develop/workload-identities-overview.md @@ -45,6 +45,7 @@ Here are some ways you can use workload identities: - Review service principals and applications that are assigned to privileged directory roles in Azure AD using [access reviews for service principals](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md). - Access Azure AD protected resources without needing to manage secrets (for supported scenarios) using [workload identity federation](workload-identity-federation.md). - Apply Conditional Access policies to service principals owned by your organization using [Conditional Access for workload identities](../conditional-access/workload-identity.md). +- Secure workload identities with [Identity Protection](../identity-protection/concept-workload-identity-risk.md). ## Next steps diff --git a/articles/active-directory/manage-apps/ways-users-get-assigned-to-applications.md b/articles/active-directory/manage-apps/ways-users-get-assigned-to-applications.md index 420dc938df20e..e27608712b976 100644 --- a/articles/active-directory/manage-apps/ways-users-get-assigned-to-applications.md +++ b/articles/active-directory/manage-apps/ways-users-get-assigned-to-applications.md @@ -16,7 +16,7 @@ ms.reviewer: davidmu # Understand how users are assigned to apps -This article help you to understand how users get assigned to an application in your tenant. +This article helps you to understand how users get assigned to an application in your tenant. ## How do users get assigned an application in Azure AD? @@ -34,6 +34,7 @@ There are several ways a user can be assigned an application. 
Assignment can be * An administrator enables [Self-service Application Access](./manage-self-service-access.md) to allow a user to add an application using [My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510) **Add App** feature, but only **with prior approval from a selected set of business approvers** * An administrator enables [Self-service Group Management](../enterprise-users/groups-self-service-management.md) to allow a user to join a group that an application is assigned to **without business approval** * An administrator enables [Self-service Group Management](../enterprise-users/groups-self-service-management.md) to allow a user to join a group that an application is assigned to, but only **with prior approval from a selected set of business approvers** +* One of the application's roles is included in an [entitlement management access package](../governance/entitlement-management-access-package-resources.md), and a user requests or is assigned to that access package * An administrator assigns a license to a user directly, for a Microsoft service such as [Microsoft 365](https://products.office.com/) * An administrator assigns a license to a group that the user is a member of, for a Microsoft service such as [Microsoft 365](https://products.office.com/) * A user [consents to an application](consent-and-permissions-overview.md#user-consent) on behalf of themselves. diff --git a/articles/app-service/resources-kudu.md b/articles/app-service/resources-kudu.md index d66ab8c39391f..e71c8bd7c177a 100644 --- a/articles/app-service/resources-kudu.md +++ b/articles/app-service/resources-kudu.md @@ -32,7 +32,7 @@ It also provides other features, such as: - Run commands in the [Kudu console](https://github.com/projectkudu/kudu/wiki/Kudu-console). - Download IIS diagnostic dumps or Docker logs. - Manage IIS processes and site extensions. -- Add deployment webhooks for Windows aps. +- Add deployment webhooks for Windows apps. - Allow ZIP deployment UI with `/ZipDeploy`. - Generates [custom deployment scripts](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script). - Allows access with [REST API](https://github.com/projectkudu/kudu/wiki/REST-API). diff --git a/articles/application-gateway/private-link-configure.md b/articles/application-gateway/private-link-configure.md index 6ef645c8f3bc5..65413721cf425 100644 --- a/articles/application-gateway/private-link-configure.md +++ b/articles/application-gateway/private-link-configure.md @@ -1,8 +1,8 @@ --- -title: Configure Azure Application Gateway Private Link +title: Configure Azure Application Gateway Private Link (preview) description: This article shows you how to configure Application Gateway Private Link. services: application-gateway -author: greglin +author: greg-lindsay ms.service: application-gateway ms.topic: how-to ms.date: 05/09/2022 @@ -10,12 +10,14 @@ ms.author: greglin --- -# Configure Azure Application Gateway Private Link +# Configure Azure Application Gateway Private Link (preview) Application Gateway Private Link allows you to connect your workloads over a private connection spanning across VNets and subscriptions. For more information, see [Application Gateway Private Link](private-link.md). 
:::image type="content" source="media/private-link/private-link.png" alt-text="Diagram showing Application Gateway Private Link"::: +> [!IMPORTANT] +> Azure Application Gateway Private Link is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ## Configuration options diff --git a/articles/application-gateway/private-link.md b/articles/application-gateway/private-link.md index c99c5acd0e916..57f2bc3f3556c 100644 --- a/articles/application-gateway/private-link.md +++ b/articles/application-gateway/private-link.md @@ -1,8 +1,8 @@ --- -title: Azure Application Gateway Private Link +title: Azure Application Gateway Private Link (preview) description: This article is an overview of Application Gateway Private Link. services: application-gateway -author: greglin +author: greg-lindsay ms.service: application-gateway ms.topic: conceptual ms.date: 05/09/2022 @@ -10,7 +10,7 @@ ms.author: greglin --- -# Application Gateway Private Link +# Application Gateway Private Link (preview) Today, you can deploy your critical workloads securely behind Application Gateway, gaining the flexibility of Layer 7 load balancing features. Access to the backend workloads is possible in two ways: @@ -21,6 +21,8 @@ Private Link for Application Gateway allows you to connect workloads over a priv :::image type="content" source="media/private-link/private-link.png" alt-text="Diagram showing Application Gateway Private Link"::: +> [!IMPORTANT] +> Azure Application Gateway Private Link is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ## Features and capabilities diff --git a/articles/automanage/automanage-arc.md b/articles/automanage/automanage-arc.md index b0e62e3307b97..dd92688df7b4a 100644 --- a/articles/automanage/automanage-arc.md +++ b/articles/automanage/automanage-arc.md @@ -5,7 +5,7 @@ ms.service: automanage ms.collection: linux ms.workload: infrastructure ms.topic: conceptual -ms.date: 03/22/2022 +ms.date: 05/12/2022 --- # Azure Automanage for Machines Best Practices - Azure Arc-enabled servers @@ -18,12 +18,10 @@ For all of these services, we will auto-onboard, auto-configure, monitor for dri Automanage supports the following operating systems for Azure Arc-enabled servers -- Windows Server 2012/R2 -- Windows Server 2016 -- Windows Server 2019 +- Windows Server 2012 R2, 2016, 2019, 2022 - CentOS 7.3+, 8 - RHEL 7.4+, 8 -- Ubuntu 16.04 and 18.04 +- Ubuntu 16.04, 18.04, 20.04 - SLES 12 (SP3-SP5 only) ## Participating services @@ -32,6 +30,7 @@ Automanage supports the following operating systems for Azure Arc-enabled server |-----------|---------------|----------------------| |[Machines Insights Monitoring](../azure-monitor/vm/vminsights-overview.md) |Azure Monitor for machines monitors the performance and health of your virtual machines, including their running processes and dependencies on other resources. |Production | |[Update Management](../automation/update-management/overview.md) |You can use Update Management in Azure Automation to manage operating system updates for your machines. You can quickly assess the status of available updates on all agent machines and manage the process of installing required updates for servers. |Production, Dev/Test | +|[Microsoft Antimalware](../security/fundamentals/antimalware.md) |Microsoft Antimalware for Azure is a free real-time protection that helps identify and remove viruses, spyware, and other malicious software. 
It generates alerts when known malicious or unwanted software tries to install itself or run on your Azure systems. **Note:** Microsoft Antimalware requires that there be no other antimalware software installed, or it may fail to work. This is also only supported for Windows Server 2016 and above. |Production, Dev/Test | |[Change Tracking & Inventory](../automation/change-tracking/overview.md) |Change Tracking and Inventory combines change tracking and inventory functions to allow you to track virtual machine and server infrastructure changes. The service supports change tracking across services, daemons software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. |Production, Dev/Test | |[Azure Guest Configuration](../governance/policy/concepts/guest-configuration.md) | Guest Configuration policy is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the Azure security baseline using the Guest Configuration extension. For Arc machines, the guest configuration service will install the baseline in audit-only mode. You will be able to see where your VM is out of compliance with the baseline, but noncompliance won't be automatically remediated. |Production, Dev/Test | |[Azure Automation Account](../automation/automation-create-standalone-account.md) |Azure Automation supports management throughout the lifecycle of your infrastructure and applications. |Production, Dev/Test | diff --git a/articles/automanage/automanage-virtual-machines.md b/articles/automanage/automanage-virtual-machines.md index 000612d929009..fdf1d72f0e482 100644 --- a/articles/automanage/automanage-virtual-machines.md +++ b/articles/automanage/automanage-virtual-machines.md @@ -5,7 +5,7 @@ author: mmccrory ms.service: automanage ms.workload: infrastructure ms.topic: conceptual -ms.date: 10/19/2021 +ms.date: 5/12/2022 ms.author: memccror ms.custom: references_regions --- @@ -109,7 +109,7 @@ The only time you might need to interact with this machine to manage these servi ## Enabling Automanage for VMs using Azure Policy You can also enable Automanage on VMs at scale using the built-in Azure Policy. The policy has a DeployIfNotExists effect, which means that all eligible VMs located within the scope of the policy will be automatically onboarded to Automanage VM Best Practices. -A direct link to the policy is [here](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F270610db-8c04-438a-a739-e8e6745b22d3). +A direct link to the policy is [here](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff889cab7-da27-4c41-a3b0-de1f6f87c55). For more information, check out how to enable the [Automanage built-in policy](virtual-machines-policy-enable.md). 
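Reviewer note on the Automanage policy link above: because the built-in definition is a DeployIfNotExists policy, assigning it at scale needs a managed identity plus a location for that identity. A hedged Azure CLI sketch — the assignment name, scope, region, and `<policy-definition-guid>` placeholder are illustrative only; use the definition ID from the portal link in the text:

```azurecli
# Assign the Automanage built-in policy at subscription scope (illustrative values).
# Older Azure CLI versions use --assign-identity instead of --mi-system-assigned.
az policy assignment create \
  --name "enable-automanage-best-practices" \
  --scope "/subscriptions/<subscription-id>" \
  --policy "<policy-definition-guid>" \
  --mi-system-assigned \
  --location "eastus"
```

The system-assigned identity still needs the role assignments called for by the policy's deployment (for example, via `--role` and `--identity-scope`, or a manual role assignment) before remediation can run.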
diff --git a/articles/automanage/automanage-windows-server.md b/articles/automanage/automanage-windows-server.md index a791d73ebe76c..ed883bd8da8b9 100644 --- a/articles/automanage/automanage-windows-server.md +++ b/articles/automanage/automanage-windows-server.md @@ -9,7 +9,7 @@ ms.date: 03/22/2022 ms.author: memccror --- -# Azure Automanage for Machines Best Practices - Windows Server +# Azure Automanage for Machines Best Practices - Windows These Azure services are automatically onboarded for you when you use Automanage Machine Best Practices on a Windows Server VM. They are essential to our best practices white paper, which you can find in our [Cloud Adoption Framework](/azure/cloud-adoption-framework/manage/azure-server-management). @@ -17,13 +17,14 @@ For all of these services, we will auto-onboard, auto-configure, monitor for dri ## Supported Windows Server versions -Automanage supports the following Windows Server versions: +Automanage supports the following Windows versions: -- Windows Server 2012/R2 +- Windows Server 2012 R2 - Windows Server 2016 - Windows Server 2019 - Windows Server 2022 - Windows Server 2022 Azure Edition +- Windows 10 ## Participating services diff --git a/articles/azure-functions/functions-concurrency.md b/articles/azure-functions/functions-concurrency.md index 2de562d620837..3bb3acb3700f3 100644 --- a/articles/azure-functions/functions-concurrency.md +++ b/articles/azure-functions/functions-concurrency.md @@ -11,9 +11,6 @@ ms.author: cachai This article describes the concurrency behaviors of event-driven triggers in Azure Functions. It also describes a new dynamic model for optimizing concurrency behaviors. ->[!NOTE] ->The dynamic concurrency model is currently in preview. Support for dynamic concurrency is limited to specific binding extensions. - The hosting model for Functions allows multiple function invocations to run concurrently on a single compute instance. For example, consider a case where you have three different functions in your function app, which is scaled out and running on multiple instances. In this scenario, each function processes invocations on each VM instance on which your function app is running. The function invocations on a single instance share the same VM compute resources, such as memory, CPU, and connections. When your app is hosted in a dynamic plan (Consumption or Premium), the platform scales the number of function app instances up or down based on the number of incoming events. To learn more, see [Event Driven Scaling](./Event-Driven-Scaling.md)). When you host your functions in a Dedicated (App Service) plan, you manually configure your instances or [set up an autoscale scheme](dedicated-plan.md#scaling). Because multiple function invocations can run on each instance concurrently, each function needs to have a way to throttle how many concurrent invocations it's processing at any given time. @@ -28,7 +25,7 @@ While such concurrency configurations give you control of certain trigger behavi Ideally, we want the system to allow instances to process as much work as they can while keeping each instance healthy and latencies low, which is what dynamic concurrency is designed to do. -## Dynamic concurrency (preview) +## Dynamic concurrency Functions now provides a dynamic concurrency model that simplifies configuring concurrency for all function apps running in the same plan. 
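Reviewer note on the dynamic concurrency hunk above: the feature is turned on per function app in *host.json*. A minimal sketch, assuming the `concurrency` section documented for the Functions host — the two flags shown are typical values, not settings taken from this PR:

```json
{
  "version": "2.0",
  "concurrency": {
    "dynamicConcurrencyEnabled": true,
    "snapshotPersistenceEnabled": true
  }
}
```

`snapshotPersistenceEnabled` keeps learned concurrency values across host restarts; leave it out to accept the host default.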
@@ -74,7 +71,7 @@ When dynamic concurrency is enabled, you'll see dynamic concurrency decisions in ### Extension support -Dynamic concurrency is enabled for a function app at the host level, and any extensions that support dynamic concurrency run in that mode. Dynamic concurrency requires collaboration between the host and individual trigger extensions. For preview, only the listed versions of the following extensions support dynamic concurrency. +Dynamic concurrency is enabled for a function app at the host level, and any extensions that support dynamic concurrency run in that mode. Dynamic concurrency requires collaboration between the host and individual trigger extensions. Only the listed versions of the following extensions support dynamic concurrency. #### Azure Queues diff --git a/articles/azure-maps/authentication-best-practices.md b/articles/azure-maps/authentication-best-practices.md index b80fc960bed90..bfc8d8c450b0c 100644 --- a/articles/azure-maps/authentication-best-practices.md +++ b/articles/azure-maps/authentication-best-practices.md @@ -1,7 +1,7 @@ --- -title: Authentication and authorization best practices in Azure Maps +title: Authentication best practices in Azure Maps titleSuffix: Microsoft Azure Maps -description: Learn tips & tricks to optimize the use of Authentication and Authorization in your Azure Maps applications. +description: Learn tips & tricks to optimize the use of Authentication in your Azure Maps applications. author: stevemunk ms.author: v-munksteve ms.date: 05/11/2022 @@ -10,7 +10,7 @@ ms.service: azure-maps services: azure-maps --- -# Authentication and authorization best practices +# Authentication best practices The single most important part of your application is its security. No matter how good the user experience might be, if your application isn't secure a hacker can ruin it. diff --git a/articles/azure-monitor/agents/resource-manager-data-collection-rules.md b/articles/azure-monitor/agents/resource-manager-data-collection-rules.md index f567c09c7babc..13916337f983e 100644 --- a/articles/azure-monitor/agents/resource-manager-data-collection-rules.md +++ b/articles/azure-monitor/agents/resource-manager-data-collection-rules.md @@ -78,7 +78,7 @@ The following sample creates an association between an Azure virtual machine and "contentVersion": "1.0.0.0", "parameters": { "vmName": { - "value": "my-windows-vm" + "value": "my-azure-vm" }, "associationName": { "value": "my-windows-vm-my-dcr" @@ -142,7 +142,7 @@ The following sample creates an association between an Azure Arc-enabled server "contentVersion": "1.0.0.0", "parameters": { "vmName": { - "value": "my-windows-vm" + "value": "my-hybrid-vm" }, "associationName": { "value": "my-windows-vm-my-dcr" diff --git a/articles/azure-monitor/essentials/resource-logs-schema.md b/articles/azure-monitor/essentials/resource-logs-schema.md index 9b6576cf8e8f6..7d6d3da399924 100644 --- a/articles/azure-monitor/essentials/resource-logs-schema.md +++ b/articles/azure-monitor/essentials/resource-logs-schema.md @@ -90,6 +90,7 @@ The schema for resource logs varies depending on the resource and log category. 
| Azure Storage | [Blobs](../../storage/blobs/monitor-blob-storage-reference.md#resource-logs-preview), [Files](../../storage/files/storage-files-monitoring-reference.md#resource-logs-preview), [Queues](../../storage/queues/monitor-queue-storage-reference.md#resource-logs-preview), [Tables](../../storage/tables/monitor-table-storage-reference.md#resource-logs-preview) | | Azure Stream Analytics |[Job logs](../../stream-analytics/stream-analytics-job-diagnostic-logs.md) | | Azure Traffic Manager | [Traffic Manager log schema](../../traffic-manager/traffic-manager-diagnostic-logs.md) | +| Azure Video Indexer|[Monitor Azure Video Indexer data reference](/azure/azure-video-indexer/monitor-video-indexer-data-reference)| | Azure Virtual Network | Schema not available | | Virtual network gateways | [Logging for Virtual Network Gateways](../../vpn-gateway/troubleshoot-vpn-with-azure-diagnostics.md)| diff --git a/articles/azure-monitor/logs/custom-fields.md b/articles/azure-monitor/logs/custom-fields.md index 2cbf057b5a9d1..997866e2cd12c 100644 --- a/articles/azure-monitor/logs/custom-fields.md +++ b/articles/azure-monitor/logs/custom-fields.md @@ -25,7 +25,7 @@ For example, the sample record below has useful data buried in the event descrip ![Sample extract](media/custom-fields/sample-extract.png) > [!NOTE] -> In the Preview, you are limited to 100 custom fields in your workspace. This limit will be expanded when this feature reaches general availability. +> In the Preview, you are limited to 500 custom fields in your workspace. This limit will be expanded when this feature reaches general availability. ## Creating a custom field When you create a custom field, Log Analytics must understand which data to use to populate its value. It uses a technology from Microsoft Research called FlashExtract to quickly identify this data. Rather than requiring you to provide explicit instructions, Azure Monitor learns about the data you want to extract from examples that you provide. 
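Reviewer note on the custom-fields hunk above: extracted custom fields are appended to matching records with a `_CF` suffix and can then be queried like any other column. A small Kusto sketch — the `Event` table is a standard Log Analytics table, but `Service_CF` is a hypothetical field name used only for illustration:

```kusto
// Summarize the last week of events by a custom field (custom field names end in _CF).
Event
| where TimeGenerated > ago(7d)
| where isnotempty(Service_CF)
| summarize Events = count() by Service_CF
| order by Events desc
```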
diff --git a/articles/azure-monitor/logs/powershell-workspace-configuration.md b/articles/azure-monitor/logs/powershell-workspace-configuration.md index 1a2ea161deee6..6111f1075bd3d 100644 --- a/articles/azure-monitor/logs/powershell-workspace-configuration.md +++ b/articles/azure-monitor/logs/powershell-workspace-configuration.md @@ -36,7 +36,7 @@ try { } # Create the workspace -New-AzOperationalInsightsWorkspace -Location $Location -Name $WorkspaceName -Sku Standard -ResourceGroupName $ResourceGroup +New-AzOperationalInsightsWorkspace -Location $Location -Name $WorkspaceName -Sku PerGB2018 -ResourceGroupName $ResourceGroup ``` ## Create workspace and configure data sources @@ -71,7 +71,7 @@ try { } # Create the workspace -New-AzOperationalInsightsWorkspace -Location $Location -Name $WorkspaceName -Sku Standard -ResourceGroupName $ResourceGroup +New-AzOperationalInsightsWorkspace -Location $Location -Name $WorkspaceName -Sku PerGB2018 -ResourceGroupName $ResourceGroup # List of solutions to enable $Solutions = "Security", "Updates", "SQLAssessment" diff --git a/articles/azure-portal/TOC.yml b/articles/azure-portal/TOC.yml index 548b776a9eda0..0ca05cc4b41b7 100644 --- a/articles/azure-portal/TOC.yml +++ b/articles/azure-portal/TOC.yml @@ -52,9 +52,11 @@ href: supportability/how-to-manage-azure-support-request.md - name: View and increase quotas items: + - name: Quotas overview + href: supportability/quotas-overview.md - name: View quotas href: supportability/view-quotas.md - - name: Increase vCPU quotas + - name: Increase compute quotas items: - name: Increase VM-family vCPU quotas href: supportability/per-vm-quota-requests.md diff --git a/articles/azure-portal/supportability/media/per-vm-quota-requests/new-per-vm-quota-request.png b/articles/azure-portal/supportability/media/per-vm-quota-requests/new-per-vm-quota-request.png new file mode 100644 index 0000000000000..26734eaed1ce5 Binary files /dev/null and b/articles/azure-portal/supportability/media/per-vm-quota-requests/new-per-vm-quota-request.png differ diff --git a/articles/azure-portal/supportability/media/per-vm-quota-requests/per-vm-quota-not-available.png b/articles/azure-portal/supportability/media/per-vm-quota-requests/per-vm-quota-not-available.png new file mode 100644 index 0000000000000..ca0099353ea90 Binary files /dev/null and b/articles/azure-portal/supportability/media/per-vm-quota-requests/per-vm-quota-not-available.png differ diff --git a/articles/azure-portal/supportability/media/per-vm-quota-requests/per-vm-request-quota-increase-adjust-usage.png b/articles/azure-portal/supportability/media/per-vm-quota-requests/per-vm-request-quota-increase-adjust-usage.png new file mode 100644 index 0000000000000..60438eb356ed4 Binary files /dev/null and b/articles/azure-portal/supportability/media/per-vm-quota-requests/per-vm-request-quota-increase-adjust-usage.png differ diff --git a/articles/azure-portal/supportability/media/per-vm-quota-requests/per-vm-request-quota-increase-new-limit.png b/articles/azure-portal/supportability/media/per-vm-quota-requests/per-vm-request-quota-increase-new-limit.png new file mode 100644 index 0000000000000..f6670c5bbde2b Binary files /dev/null and b/articles/azure-portal/supportability/media/per-vm-quota-requests/per-vm-request-quota-increase-new-limit.png differ diff --git a/articles/azure-portal/supportability/media/per-vm-quota-requests/quota-details.png b/articles/azure-portal/supportability/media/per-vm-quota-requests/quota-details.png new file mode 100644 index 0000000000000..6001237c9be38 
Binary files /dev/null and b/articles/azure-portal/supportability/media/per-vm-quota-requests/quota-details.png differ diff --git a/articles/azure-portal/supportability/media/per-vm-quota-requests/quota-request-problem-details.png b/articles/azure-portal/supportability/media/per-vm-quota-requests/quota-request-problem-details.png new file mode 100644 index 0000000000000..996e966476785 Binary files /dev/null and b/articles/azure-portal/supportability/media/per-vm-quota-requests/quota-request-problem-details.png differ diff --git a/articles/azure-portal/supportability/media/per-vm-quota-requests/request-quota-increase-options.png b/articles/azure-portal/supportability/media/per-vm-quota-requests/request-quota-increase-options.png new file mode 100644 index 0000000000000..87e636e81a347 Binary files /dev/null and b/articles/azure-portal/supportability/media/per-vm-quota-requests/request-quota-increase-options.png differ diff --git a/articles/azure-portal/supportability/media/per-vm-quota-requests/select-per-vm-quotas.png b/articles/azure-portal/supportability/media/per-vm-quota-requests/select-per-vm-quotas.png new file mode 100644 index 0000000000000..44679b52c8a7a Binary files /dev/null and b/articles/azure-portal/supportability/media/per-vm-quota-requests/select-per-vm-quotas.png differ diff --git a/articles/azure-portal/supportability/media/per-vm-quota-requests/support-icon.png b/articles/azure-portal/supportability/media/per-vm-quota-requests/support-icon.png new file mode 100644 index 0000000000000..1d2532f81a029 Binary files /dev/null and b/articles/azure-portal/supportability/media/per-vm-quota-requests/support-icon.png differ diff --git a/articles/azure-portal/supportability/media/regional-quota-requests/regional-request-quota-increase-adjust-usage.png b/articles/azure-portal/supportability/media/regional-quota-requests/regional-request-quota-increase-adjust-usage.png new file mode 100644 index 0000000000000..d8e1c369d16e6 Binary files /dev/null and b/articles/azure-portal/supportability/media/regional-quota-requests/regional-request-quota-increase-adjust-usage.png differ diff --git a/articles/azure-portal/supportability/media/regional-quota-requests/regional-request-quota-increase-new-limit.png b/articles/azure-portal/supportability/media/regional-quota-requests/regional-request-quota-increase-new-limit.png new file mode 100644 index 0000000000000..f6670c5bbde2b Binary files /dev/null and b/articles/azure-portal/supportability/media/regional-quota-requests/regional-request-quota-increase-new-limit.png differ diff --git a/articles/azure-portal/supportability/media/regional-quota-requests/request-quota-increase-options.png b/articles/azure-portal/supportability/media/regional-quota-requests/request-quota-increase-options.png new file mode 100644 index 0000000000000..87e636e81a347 Binary files /dev/null and b/articles/azure-portal/supportability/media/regional-quota-requests/request-quota-increase-options.png differ diff --git a/articles/azure-portal/supportability/media/regional-quota-requests/select-regional-quotas.png b/articles/azure-portal/supportability/media/regional-quota-requests/select-regional-quotas.png new file mode 100644 index 0000000000000..44679b52c8a7a Binary files /dev/null and b/articles/azure-portal/supportability/media/regional-quota-requests/select-regional-quotas.png differ diff --git a/articles/azure-portal/supportability/media/resource-manager-core-quotas-request/regional-quota-total.png 
b/articles/azure-portal/supportability/media/resource-manager-core-quotas-request/regional-quota-total.png deleted file mode 100644 index 6da998bf2d4c8..0000000000000 Binary files a/articles/azure-portal/supportability/media/resource-manager-core-quotas-request/regional-quota-total.png and /dev/null differ diff --git a/articles/azure-portal/supportability/media/spot-quota/request-quota-increase-options.png b/articles/azure-portal/supportability/media/spot-quota/request-quota-increase-options.png new file mode 100644 index 0000000000000..87e636e81a347 Binary files /dev/null and b/articles/azure-portal/supportability/media/spot-quota/request-quota-increase-options.png differ diff --git a/articles/azure-portal/supportability/media/spot-quota/select-spot-quotas.png b/articles/azure-portal/supportability/media/spot-quota/select-spot-quotas.png new file mode 100644 index 0000000000000..715c86625eabb Binary files /dev/null and b/articles/azure-portal/supportability/media/spot-quota/select-spot-quotas.png differ diff --git a/articles/azure-portal/supportability/media/spot-quota/spot-request-quota-increase-adjust-usage.png b/articles/azure-portal/supportability/media/spot-quota/spot-request-quota-increase-adjust-usage.png new file mode 100644 index 0000000000000..16450a5dd46ad Binary files /dev/null and b/articles/azure-portal/supportability/media/spot-quota/spot-request-quota-increase-adjust-usage.png differ diff --git a/articles/azure-portal/supportability/media/spot-quota/spot-request-quota-increase-new-limit.png b/articles/azure-portal/supportability/media/spot-quota/spot-request-quota-increase-new-limit.png new file mode 100644 index 0000000000000..c93ef44ce3ecc Binary files /dev/null and b/articles/azure-portal/supportability/media/spot-quota/spot-request-quota-increase-new-limit.png differ diff --git a/articles/azure-portal/supportability/media/view-quotas/my-quotas.png b/articles/azure-portal/supportability/media/view-quotas/my-quotas.png index 6187843ce9ec2..cc8e6af33207c 100644 Binary files a/articles/azure-portal/supportability/media/view-quotas/my-quotas.png and b/articles/azure-portal/supportability/media/view-quotas/my-quotas.png differ diff --git a/articles/azure-portal/supportability/per-vm-quota-requests.md b/articles/azure-portal/supportability/per-vm-quota-requests.md index 3fc5bbe7659a9..7c9c707d5c125 100644 --- a/articles/azure-portal/supportability/per-vm-quota-requests.md +++ b/articles/azure-portal/supportability/per-vm-quota-requests.md @@ -1,7 +1,7 @@ --- title: Increase VM-family vCPU quotas description: Learn how to request an increase in the vCPU quota limit for a VM family in the Azure portal, which increases the total regional vCPU limit by the same amount. -ms.date: 1/26/2022 +ms.date: 05/11/2022 ms.topic: how-to --- @@ -19,19 +19,43 @@ Standard vCPU quotas apply to pay-as-you-go VMs and reserved VM instances. They This article shows how to request increases for VM-family vCPU quotas. You can also request increases for [vCPU quotas by region](regional-quota-requests.md) or [spot vCPU quotas](spot-quota.md). -## Increase a VM-family vCPU quota +## Adjustable and non-adjustable quotas -To request a standard vCPU quota increase per VM-family from **Usage + quotas**: +When requesting a quota increase, the steps differ depending on whether the quota is adjustable or non-adjustable. -1. In the Azure portal, search for and select **Subscriptions**. -1. Select the subscription whose quota you want to increase. -1. In the left pane, select **Usage + quotas**. -1. 
In the main pane, find the VM-family vCPU quota you want to increase, then select the pencil icon. The example below shows Standard DSv3 Family vCPUs deployed in the East US region. The **Usage** column displays the current quota usage and the current quota limit. -1. In **Quota details**, enter your new quota limit, then select **Save and continue**. +- **Adjustable quotas**: Quotas for which you can request quota increases fall into this category. Each subscription has a default quota value for each quota. You can request an increase for an adjustable quota from the [Azure Home](https://ms.portal.azure.com/#home) **My quotas** page, providing an amount or usage percentage and submitting it directly. This is the quickest way to increase quotas. +- **Non-adjustable quotas**: These are quotas which have a hard limit, usually determined by the scope of the subscription. To make changes, you must submit a support request, and the Azure support team will help provide solutions. - :::image type="content" source="media/resource-manager-core-quotas-request/quota-increase-example.png" alt-text="Screenshot of the Usage + quotas pane." lightbox="media/resource-manager-core-quotas-request/quota-increase-example.png"::: +## Request an increase for adjustable quotas -Your request will be reviewed, and you'll be notified whether the request is approved or rejected. This usually happens within a few minutes. If your request is rejected, you'll see a link where you can open a support request so that a support engineer can assist you with the increase. +You can submit a request for a standard vCPU quota increase per VM-family from **My quotas**, quickly accessed from [Azure Home](https://ms.portal.azure.com/#home). + +1. To view the **Quotas** page, sign in to the [Azure portal](https://portal.azure.com) and enter "quotas" into the search box, then select **Quotas**. + + > [!TIP] + > After you've accessed **Quotas**, the service will appear at the top of [Azure Home](https://ms.portal.azure.com/#home) in the Azure portal. You can also [add **Quotas** to your **Favorites** list](../azure-portal-add-remove-sort-favorites.md) so that you can quickly go back to it. + +1. On the **Overview** page, select **Compute**. +1. On the **My quotas** page, select the quota or quotas you want to increase. + + :::image type="content" source="media/per-vm-quota-requests/select-per-vm-quotas.png" alt-text="Screenshot showing per-VM quota selection in the Azure portal."::: + +1. Near the top of the page, select **Request quota increase**, then select the way you'd like to increase the quota(s). + + :::image type="content" source="media/per-vm-quota-requests/request-quota-increase-options.png" alt-text="Screenshot showing the options to request a quota increase in the Azure portal."::: + + > [!TIP] + > Choosing **Adjust the usage %** allows you to select one usage percentage to apply to all the selected quotas without requiring you to calculate an absolute number (limit) for each quota. This option is recommended when the selected quotas have very high usage. + +1. If you selected **Enter a new limit**, in the **Request quota increase** pane, enter a numerical value for your new quota limit(s), then select **Submit**. + + :::image type="content" source="media/per-vm-quota-requests/per-vm-request-quota-increase-new-limit.png" alt-text="Screenshot showing the Enter a new limit option for a per-VM quota increase request."::: + +1. 
If you selected **Adjust the usage %**, in the **Request quota increase** pane, adjust the slider to a new usage percent. Adjusting the percentage automatically calculates the new limit for each quota to be increased. This option is recommended when the selected quotas have very high usage. When you're finished, select **Submit**. + + :::image type="content" source="media/per-vm-quota-requests/per-vm-request-quota-increase-adjust-usage.png" alt-text="Screenshot showing the Adjust the usage % option for a per-VM quota increase request."::: + +Your request will be reviewed, and you'll be notified if the request can be fulfilled. This usually happens within a few minutes. If your request is not fulfilled, you'll see a link where you can [open a support request](how-to-create-azure-support-request.md) so that a support engineer can assist you with the increase. > [!NOTE] > If your request to increase your VM-family quota is approved, Azure will automatically increase the regional vCPU quota for the region where your VM is deployed. @@ -39,40 +63,48 @@ Your request will be reviewed, and you'll be notified whether the request is app > [!TIP] > When creating or resizing a virtual machine and selecting your VM size, you may see some options listed under **Insufficient quota - family limit**. If so, you can request a quota increase directly from the VM creation page by selecting the **Request quota** link. -## Increase a VM-family vCPU quota from Help + support +## Request an increase when a quota isn't available + +At times you may see a message that a selected quota isn’t available for an increase. To see which quotas are not available, look for the Information icon next to the quota name. + +:::image type="content" source="media/per-vm-quota-requests/per-vm-quota-not-available.png" alt-text="Screenshot showing a quota that is not available in the Azure portal."::: + +If a quota you want to increase isn't currently available, the quickest solution may be to consider other series or regions. If you want to continue and receive assistance for your specified quota, you can submit a support request for the increase. + +1. When following the steps above, if a quota isn't available, select the Information icon next to the quota. Then select **Create a support request**. +1. In the **Quota details** pane, confirm the pre-filled information is correct, then enter the desired new vCPU limit(s). + + :::image type="content" source="media/per-vm-quota-requests/quota-details.png" alt-text="Screenshot of the Quota details pane in the Azure portal."::: + +1. Select **Save and continue** to open the **New support request** form. Continue to enter the required information, then select **Next**. +1. Review your request information and select **Previous** to make changes, or **Create** to submit the request. -To request a standard vCPU quota increase per VM family from **Help + support**, create a new support request in the Azure portal. +## Request an increase for non-adjustable quotas -1. For **Issue type**, select **Service and subscription limits (quotas)**. -1. For **Subscription**, select the subscription whose quota you want to increase. -1. For **Quota type**, select **Compute-VM (cores-vCPUs) subscription limit increases**. +To request an increase for a non-adjustable quota, such as Virtual Machines or Virtual Machine Scale Sets, you must submit a support request so that a support engineer can assist you. 
- :::image type="content" source="media/resource-manager-core-quotas-request/new-per-vm-quota-request.png" alt-text="Screenshot showing a support request to increase a VM-family vCPU quota in the Azure portal."::: +1. To view the **Quotas** page, sign in to the [Azure portal](https://portal.azure.com) and enter "quotas" into the search box, then select **Quotas**. +1. From the **Overview** page, select **Compute**. +1. Find the quota you want to increase, then select the support icon. -From there, follow the steps described in [Create a support request](how-to-create-azure-support-request.md#create-a-support-request). + :::image type="content" source="media/per-vm-quota-requests/support-icon.png" alt-text="Screenshot showing the support icon in the Azure portal."::: -## Increase multiple VM-family CPU quotas in one request +1. In the **New support request form**, on the first page, confirm that the pre-filled information is correct. +1. For **Quota type**, select **Other Requests**, then select **Next**. -You can also request multiple increases at the same time (bulk request). Doing a bulk request quota increase may take longer than requesting to increase a single quota. + :::image type="content" source="media/per-vm-quota-requests/new-per-vm-quota-request.png" alt-text="Screenshot showing a new quota increase support request in the Azure portal."::: -To request multiple increases together, first go to the **Usage + quotas** page as described above. Then do the following: +1. On the **Additional details** page, under **Problem details**, enter the information required for your quota increase, including the new limit requested. -1. Select **Request Increase** near the top of the screen. -1. For **Quota type**, select **Compute-VM (cores-vCPUs) subscription limit increases**. -1. Select **Next** to go to the **Additional details** screen, then select **Enter details**. -1. In the **Quota details** screen: + :::image type="content" source="media/per-vm-quota-requests/quota-request-problem-details.png" alt-text="Screenshot showing the Problem details step of a quota increase request in the Azure portal."::: - :::image type="content" source="media/resource-manager-core-quotas-request/quota-details-standard-set-vcpu-limit.png" alt-text="Screenshot showing the Quota details screen and selections."::: +1. Scroll down and complete the form. When finished, select **Next**. +1. Review your request information and select **Previous** to make changes, or **Create** to submit the request. - 1. For **Deployment model**, ensure **Resource Manager** is selected. - 1. For **Locations**, select all regions in which you want to increase quotas. - 1. For each region you selected, select one or more VM series from the **Quotas** drop-down list. - 1. For each **VM Series** you selected, enter the new vCPU limit that you want for this subscription. - 1. When you're finished, select **Save and continue**. -1. Enter or confirm your contact details, then select **Next**. -1. Finally, ensure that everything looks correct on the **Review + create** page, then select **Create** to submit your request. +For more information, see [Create a support request](how-to-create-azure-support-request.md). ## Next steps - Learn more about [vCPU quotas](../../virtual-machines/windows/quotas.md). +- Learn more in [Quotas overview](quotas-overview.md). - Learn about [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md). 
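Whichever path applies, it can help to check your current usage against the existing limits before you submit a request. One way to do this outside the portal is the Azure CLI; the region and VM family below are placeholders:

```azurecli
# List current vCPU usage and limits for every VM family in a region.
az vm list-usage --location eastus --output table

# Narrow the output to a single VM family, for example Standard DSv3.
az vm list-usage --location eastus \
  --query "[?contains(name.value, 'standardDSv3Family')]" --output table
```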
\ No newline at end of file diff --git a/articles/azure-portal/supportability/quotas-overview.md b/articles/azure-portal/supportability/quotas-overview.md new file mode 100644 index 0000000000000..ec6855f7d2faa --- /dev/null +++ b/articles/azure-portal/supportability/quotas-overview.md @@ -0,0 +1,44 @@ +--- +title: Quotas overview +description: Learn about to view quotas and request increases in the Azure portal. +ms.date: 05/11/2022 +ms.topic: how-to +--- + +# Quotas overview + +Many Azure services have quotas, which are the assigned number of resources for your Azure subscription. Each quota represents a specific countable resource, such as the number of virtual machines you can create, the number of storage accounts you can use concurrently, the number of networking resources you can consume, or the number of API calls to a particular service you can make. + +The concept of quotas is designed to help protect customers from things like inaccurately resourced deployments and mistaken consumption. For Azure, it helps minimize risks from deceptive or inappropriate consumption and unexpected demand. Quotas are set and enforced in the scope of the [subscription](/microsoft-365/enterprise/subscriptions-licenses-accounts-and-tenants-for-microsoft-cloud-offerings?view=o365-worldwide). + +## Quotas or limits? + +Quotas were previously referred to as limits. Quotas do have limits, but the limits are variable and dependent on many factors. Each subscription has a default value for each quota. + +> [!NOTE] +> There is no cost associated with requesting a quota increase. Costs are incurred based on resource usage, not the quotas themselves. + +## Adjustable and non-adjustable quotas + +Quotas can be adjustable or non-adjustable. + +- **Adjustable quotas**: Quotas for which you can request quota increases fall into this category. Each subscription has a default quota value for each quota. You can request an increase for an adjustable quota from the [Azure Home](https://ms.portal.azure.com/#home) **My quotas** page, providing an amount or usage percentage and submitting it directly. This is the quickest way to increase quotas. +- **Non-adjustable quotas**: These are quotas which have a hard limit, usually determined by the scope of the subscription. To make changes, you must submit a support request, and the Azure support team will help provide solutions. + +## Work with quotas + +Different entry points, data views, actions, and programming options are available, depending on your organization and administrator preferences. + +| Option | Azure portal | Quota APIs | Support API | +|---------|---------|---------|---------| +| Summary | The portal provides a customer-friendly user interface for accessing quota information.

From [Azure Home](https://ms.portal.azure.com/#home), **Quotas** is a centralized location where you can view quotas and quota usage and request quota increases directly.

From the Subscriptions page, **Quotas + usage** offers quick access to requesting quota increases for a given subscription.| The [Azure Quota API](/rest/api/reserved-vm-instances/quotaapi) programmatically provides the ability to get current quota limits, find current usage, and request quota increases by subscription, resource provider, and location. | The [Azure Support REST API](/rest/api/support/) enables customers to create service quota support tickets programmatically. | +| Availability | All customers | All customers | All customers with unified, premier, professional direct support plans | +| Which to choose? | Useful for customers desiring a central location and an efficient visual interface for viewing and managing quotas. Provides quick access to requesting quota increases. | Useful for customers who want granular and programmatic control of quota management for adjustable quotas. Intended for end to end automation of quota usage validation and quota increase requests through APIs. | Customers who want end to end automation of support request creation and management. Provides an alternative path to Azure portal for requests. | +| Providers supported | All providers | Compute, Machine Learning | All providers | + +## Next steps + +- Learn more about [viewing quotas in the Azure portal](view-quotas.md). +- Learn how to increase Compute quotas. +- Learn how to request increases for [VM-family vCPU quotas](per-vm-quota-requests.md), [vCPU quotas by region](regional-quota-requests.md), and [spot vCPU quotas](spot-quota.md). +- Learn about [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md). \ No newline at end of file diff --git a/articles/azure-portal/supportability/regional-quota-requests.md b/articles/azure-portal/supportability/regional-quota-requests.md index 8f93d25661675..a71eaee8c0b2e 100644 --- a/articles/azure-portal/supportability/regional-quota-requests.md +++ b/articles/azure-portal/supportability/regional-quota-requests.md @@ -28,42 +28,43 @@ When considering your vCPU needs across regions, keep in mind the following: - When you request an increase in the vCPU quota for a VM series, Azure increases the regional vCPU quota limit by the same amount. -- When you create a new subscription, the default value for the total number of vCPUs in a region might not be equal to the total default vCPU quota for all individual VM series. This discrepancy can result in a subscription with enough quota for each individual VM series that you want to deploy. However, there might not be enough quota to accommodate the total regional vCPUs for all deployments. In this case, you must submit a request to explicitly increase the quota limit of the regional vCPU quotas. +- When you create a new subscription, the default value for the total number of vCPUs in a region might not be equal to the total default vCPU quota for all individual VM series. This can result in a subscription without enough quota for each individual VM series that you want to deploy. However, there might not be enough quota to accommodate the total regional vCPUs for all deployments. In this case, you must submit a request to explicitly increase the quota limit of the regional vCPU quotas. -## Increase a regional vCPU quota +## Request an increase for regional vCPU quotas -To request a regional vCPU quota from **Usage + quotas**: +1. 
To view the **Quotas** page, sign in to the [Azure portal](https://portal.azure.com) and enter "quotas" into the search box, then select **Quotas**. -1. In the Azure portal, search for and select **Subscriptions**. + > [!TIP] + > After you've accessed **Quotas**, the service will appear at the top of [Azure Home](https://ms.portal.azure.com/#home) in the Azure portal. You can also [add **Quotas** to your **Favorites** list](../azure-portal-add-remove-sort-favorites.md) so that you can quickly go back to it. -1. Select the subscription whose quota you want to increase. +1. On the **Overview** page, select **Compute**. +1. On the **My quotas** page, select **Region** and then unselect **All**. +1. In the **Region** list, select the regions you want to include for the quota increase request. +1. Filter for any other requirements, such as **Usage**, as needed. +1. Select the quota(s) that you want to increase. -1. In the left pane, select **Usage + quotas**. Use the filters to view your quota by usage. + :::image type="content" source="media/regional-quota-requests/select-regional-quotas.png" alt-text="Screenshot showing regional quota selection in the Azure portal"::: -1. In the main pane, select **Total Regional vCPUs**, then select the pencil icon. The example below shows the regional vCPU quota for the NorthEast US region. +1. Near the top of the page, select **Request quota increase**, then select the way you'd like to increase the quota(s). - :::image type="content" source="media/resource-manager-core-quotas-request/regional-quota-total.png" alt-text="Screenshot of the Usage + quotas screen showing Total Regional vCPUs in the Azure portal." lightbox="media/resource-manager-core-quotas-request/regional-quota-total.png"::: + :::image type="content" source="media/regional-quota-requests/request-quota-increase-options.png" alt-text="Screenshot showing the options to request a quota increase in the Azure portal."::: -1. In **Quota details**, enter your new quota limit, then select **Save and continue**. + > [!TIP] + > Choosing **Adjust the usage %** allows you to select one usage percentage to apply to all the selected quotas without requiring you to calculate an absolute number (limit) for each quota. This option is recommended when the selected quotas have very high usage. - Your request will be reviewed, and you'll be notified whether the request is approved or rejected. This usually happens within a few minutes. If your request is rejected, you'll see a link where you can open a support request so that a support engineer can assist you with the increase. +1. If you selected **Enter a new limit**, in the **Request quota increase** pane, enter a numerical value for your new quota limit(s), then select **Submit**. -> [!TIP] -> You can also request multiple increases at the same time. For more information, see [Increase multiple VM-family CPU quotas in one request](per-vm-quota-requests.md#increase-multiple-vm-family-cpu-quotas-in-one-request). + :::image type="content" source="media/regional-quota-requests/regional-request-quota-increase-new-limit.png" alt-text="Screenshot showing the Enter a new limit option for a regional quota increase request."::: -## Increase a regional quota from Help + support +1. If you selected **Adjust the usage %**, in the **Request quota increase** pane, adjust the slider to a new usage percent. Adjusting the percentage automatically calculates the new limit for each quota to be increased. This option is recommended when the selected quotas have very high usage. 
When you're finished, select **Submit**. -To request a standard vCPU quota increase per VM family from **Help + support**, create a new support request in the Azure portal. + :::image type="content" source="media/regional-quota-requests/regional-request-quota-increase-adjust-usage.png" alt-text="Screenshot showing the Adjust the usage % option for a regional quota increase request."::: -1. For **Issue type**, select **Service and subscription limits (quotas)**. -1. For **Subscription**, select the subscription whose quota you want to increase. -1. For **Quota type**, select **Compute-VM (cores-vCPUs) subscription limit increases**. - - :::image type="content" source="media/resource-manager-core-quotas-request/new-per-vm-quota-request.png" alt-text="Screenshot showing a support request to increase a VM-family vCPU quota in the Azure portal."::: - -From there, follow the steps described in [Create a support request](how-to-create-azure-support-request.md#create-a-support-request). +Your request will be reviewed, and you'll be notified if the request can be fulfilled. This usually happens within a few minutes. If your request is not fulfilled, you'll see a link where you can [open a support request](how-to-create-azure-support-request.md) so that a support engineer can assist you with the increase. ## Next steps +- Learn more about [vCPU quotas](../../virtual-machines/windows/quotas.md). +- Learn more in [Quotas overview](quotas-overview.md). +- Learn about [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md). - Review the [list of Azure regions and their locations](https://azure.microsoft.com/regions/). -- Get an overview of [Azure regions for virtual machines](../../virtual-machines/regions.md) and how to to maximize a VM performance, availability, and redundancy in a given region. diff --git a/articles/azure-portal/supportability/spot-quota.md b/articles/azure-portal/supportability/spot-quota.md index 224b01896eb34..ad53b51926ffa 100644 --- a/articles/azure-portal/supportability/spot-quota.md +++ b/articles/azure-portal/supportability/spot-quota.md @@ -1,11 +1,11 @@ --- -title: Increase spot vCPU quotas +title: Request an increase for spot vCPU quotas description: Learn how to request increases for spot vCPU quotas in the Azure portal. -ms.date: 1/26/2022 +ms.date: 05/11/2022 ms.topic: how-to --- -# Increase spot vCPU quotas +# Request an increase for spot vCPU quotas Azure Resource Manager enforces two types of vCPU quotas for virtual machines: @@ -29,37 +29,39 @@ When considering your spot vCPU needs, keep in mind the following: - At any point in time when Azure needs the capacity back, the Azure infrastructure will evict spot VMs. -## Increase a spot vCPU quota +## Request an increase for spot vCPU quotas -To request a quota increase for a spot vCPU quota: +1. To view the **Quotas** page, sign in to the [Azure portal](https://portal.azure.com) and enter "quotas" into the search box, then select **Quotas**. -1. In the Azure portal, search for and select **Subscriptions**. -1. Select the subscription whose quota you want to increase. -1. In the left pane, select **Usage + quotas**. -1. In the main pane, search for spot and select **Total Regional Spot vCPUs** for the region you want to increase. -1. In **Quota details**, enter your new quota limit. 
+ > [!TIP] + > After you've accessed **Quotas**, the service will appear at the top of [Azure Home](https://ms.portal.azure.com/#home) in the Azure portal. You can also [add **Quotas** to your **Favorites** list](../azure-portal-add-remove-sort-favorites.md) so that you can quickly go back to it. - The example below requests a new quota limit of 103 for the Spot vCPUs across all VM-family vCPUs in the West US region. +1. On the **Overview** page, select **Compute**. +1. On the **My quotas** page, enter "spot" in the **Search** box. +1. Filter for any other requirements, such as **Usage**, as needed. +1. Find the quota or quotas you want to increase, and select them. - :::image type="content" source="media/resource-manager-core-quotas-request/spot-quota.png" alt-text="Screenshot of a spot vCPU quota increase request in the Azure portal." lightbox="media/resource-manager-core-quotas-request/spot-quota.png"::: + :::image type="content" source="media/spot-quota/select-spot-quotas.png" alt-text="Screenshot showing spot quota selection in the Azure portal"::: -1. Select **Save and continue**. +1. Near the top of the page, select **Request quota increase**, then select the way you'd like to increase the quota(s). -Your request will be reviewed, and you'll be notified whether the request is approved or rejected. This usually happens within a few minutes. If your request is rejected, you'll see a link where you can open a support request so that a support engineer can assist you with the increase. + :::image type="content" source="media/spot-quota/request-quota-increase-options.png" alt-text="Screenshot showing the options to request a quota increase in the Azure portal."::: -## Increase a spot quota from Help + support + > [!TIP] + > Choosing **Adjust the usage %** allows you to select one usage percentage to apply to all the selected quotas without requiring you to calculate an absolute number (limit) for each quota. This option is recommended when the selected quotas have very high usage. -To request a spot vCPU quota increase from **Help + support**, create a new support request in the Azure portal. +1. If you selected **Enter a new limit**, in the **Request quota increase** pane, enter a numerical value for your new quota limit(s), then select **Submit**. -1. For **Issue type**, select **Service and subscription limits (quotas)**. -1. For **Subscription**, select the subscription whose quota you want to increase. -1. For **Quota type**, select **Compute-VM (cores-vCPUs) subscription limit increases**. + :::image type="content" source="media/spot-quota/spot-request-quota-increase-new-limit.png" alt-text="Screenshot showing the Enter a new limit option for a regional quota increase request."::: - :::image type="content" source="media/resource-manager-core-quotas-request/new-per-vm-quota-request.png" alt-text="Screenshot showing a support request to increase a VM-family vCPU quota in the Azure portal."::: +1. If you selected **Adjust the usage %**, in the **Request quota increase** pane, adjust the slider to a new usage percent. Adjusting the percentage automatically calculates the new limit for each quota to be increased. This option is recommended when the selected quotas have very high usage. When you're finished, select **Submit**. -From there, follow the steps described in [Create a support request](how-to-create-azure-support-request.md#create-a-support-request). 
+ :::image type="content" source="media/spot-quota/spot-request-quota-increase-adjust-usage.png" alt-text="Screenshot showing the Adjust the usage % option for a regional quota increase request."::: + +Your request will be reviewed, and you'll be notified if the request can be fulfilled. This usually happens within a few minutes. If your request is not fulfilled, you'll see a link where you can [open a support request](how-to-create-azure-support-request.md) so that a support engineer can assist you with the increase. ## Next steps -- Learn more about [Azure spot virtual machines](../../virtual-machines/spot-vms.md). +- Learn more about [Azure virtual machines](../../virtual-machines/spot-vms.md). +- Learn more in [Quotas overview](quotas-overview.md). - Learn about [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md). \ No newline at end of file diff --git a/articles/azure-portal/supportability/view-quotas.md b/articles/azure-portal/supportability/view-quotas.md index b03f480e9dc8b..7d8ce718fa019 100644 --- a/articles/azure-portal/supportability/view-quotas.md +++ b/articles/azure-portal/supportability/view-quotas.md @@ -1,6 +1,6 @@ --- title: View quotas -description: Learn how to view quotas and request increases in the Azure portal. +description: Learn how to view quotas in the Azure portal. ms.date: 02/14/2022 ms.topic: how-to --- @@ -9,52 +9,26 @@ ms.topic: how-to The **Quotas** page in the Azure portal is the centralized location where you can view your quotas. **My quotas** provides a comprehensive, customizable view of usage and other quota information so that you can assess quota usage. You can also request quota increases directly from **My quotas**. -To view the **Quotas** page, sign in to the [Azure portal](https://portal.azure.com) and enter "quotas" into the search box. +To view the **Quotas** page, sign in to the [Azure portal](https://portal.azure.com) and enter "quotas" into the search box, then select **Quotas**. > [!TIP] -> After you've accessed **Quotas**, the service will appear at the top of the Home page in the Azure portal. You can also [add **Quotas** to your **Favorites** list](../azure-portal-add-remove-sort-favorites.md) so that you can quickly go back to it. +> After you've accessed **Quotas**, the service will appear at the top of [Azure Home](https://ms.portal.azure.com/#home) in the Azure portal. You can also [add **Quotas** to your **Favorites** list](../azure-portal-add-remove-sort-favorites.md) so that you can quickly go back to it. ## View quota details -To view detailed information about your quotas, select **My quotas** in the left menu on the **Quotas** page. +To view detailed information about your quotas, select **My quotas** in the left pane on the **Quotas** page. > [!NOTE] > You can also select a specific Azure provider from the **Quotas** overview page to view quotas and usage for that provider. If you don't see a provider, check the [Azure subscription and service limits page](../..//azure-resource-manager/management/azure-subscription-service-limits.md) for more information. -On the **My quotas** page, you can choose which quotas and usage data to display. The filter options at the top of the page let you filter by location, provider, subscription, and usage. You can also use the search box to look for a specific quota. +On the **My quotas** page, you can choose which quotas and usage data to display. 
The filter options at the top of the page let you filter by location, provider, subscription, and usage. You can also use the search box to look for a specific quota. Depending on the provider you select, you may see some differences in filters and columns. :::image type="content" source="media/view-quotas/my-quotas.png" alt-text="Screenshot of the My quotas screen in the Azure portal."::: In the list of quotas, you can toggle the arrow shown next to **Quota** to expand and close categories. You can do the same next to each category to drill down and create a view of the information you need. -## Request quota increases - -You can request quota increases directly from **My quotas**. The process for requesting an increase will depend on the type of quota. - -> [!NOTE] -> There is no cost associated with requesting a quota increase. Costs are incurred based on resource usage, not the quotas themselves. - -### Request a quota increase - -Some quotas display a pencil icon. Select this icon to quickly request an increase for that quota. - -:::image type="content" source="media/view-quotas/quota-pencil-icon.png" alt-text="Screenshot of the pencil icon to request a quota increase in the Azure portal."::: - -After you select the pencil icon, enter the new limit for your request in the **Quota Details** pane, then select **Save and Continue**. After a few minutes, you'll see a status update confirming whether the increase was fulfilled. If you close **Quota details** before the update appears, you can check it later in the Azure Activity Log. - -If your request wasn't fulfilled, you can select **Create a support request** so that your request can be evaluated by our support team. - -### Create a support request - -If the quota displays a support icon rather than a pencil, you'll need to create a support request in order to request the increase. - -:::image type="content" source="media/view-quotas/quota-support-icon.png" alt-text="Screenshot of the support icon to request a quota increase in the Azure portal."::: - -Selecting the support icon will take you to the **New support request** page, where you can enter details about your new request. A support engineer will then assist you with the quota increase request. - -For more information about opening a support request, see [Create an Azure support request](how-to-create-azure-support-request.md). - ## Next steps -- Learn about [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md). -- Learn about other ways to request increases for [VM-family vCPU quotas](per-vm-quota-requests.md), [vCPU quotas by region](regional-quota-requests.md), and [spot vCPU quotas](spot-quota.md). +- Learn more in [Quota overview](quotas-overview.md). +- about [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md). +- Learn how to request increases for [VM-family vCPU quotas](per-vm-quota-requests.md), [vCPU quotas by region](regional-quota-requests.md), and [spot vCPU quotas](spot-quota.md). diff --git a/articles/azure-resource-manager/management/tag-support.md b/articles/azure-resource-manager/management/tag-support.md index 8ba32a994b7d4..83fb46f8d032b 100644 --- a/articles/azure-resource-manager/management/tag-support.md +++ b/articles/azure-resource-manager/management/tag-support.md @@ -2,7 +2,7 @@ title: Tag support for resources description: Shows which Azure resource types support tags. 
Provides details for all Azure services. ms.topic: conceptual -ms.date: 04/20/2022 +ms.date: 05/13/2022 --- # Tag support for Azure resources @@ -2934,7 +2934,7 @@ To get the same data as a file of comma-separated values, download [tag-support. > | servers / usages | No | No | > | servers / virtualNetworkRules | No | No | > | servers / vulnerabilityAssessments | No | No | -> | virtualClusters | Yes | Yes | +> | virtualClusters | No | No | diff --git a/articles/azure-resource-manager/templates/deploy-github-actions.md b/articles/azure-resource-manager/templates/deploy-github-actions.md index 4a48fa3983828..539e9d0b71222 100644 --- a/articles/azure-resource-manager/templates/deploy-github-actions.md +++ b/articles/azure-resource-manager/templates/deploy-github-actions.md @@ -2,7 +2,7 @@ title: Deploy Resource Manager templates by using GitHub Actions description: Describes how to deploy Azure Resource Manager templates (ARM templates) by using GitHub Actions. ms.topic: conceptual -ms.date: 02/07/2022 +ms.date: 05/10/2022 ms.custom: github-actions-azure --- @@ -27,11 +27,13 @@ The file has two sections: |Section |Tasks | |---------|---------| -|**Authentication** | 1. Define a service principal.
2. Create a GitHub secret. | +|**Authentication** | 1. Generate deployment credentials. | |**Deploy** | 1. Deploy the Resource Manager template. | ## Generate deployment credentials +# [Service principal](#tab/userlevel) + You can create a [service principal](../../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) with the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command in the [Azure CLI](/cli/azure/). Run this command with [Azure Cloud Shell](https://shell.azure.com/) in the Azure portal or by selecting the **Try it** button. Create a resource group if you do not already have one. @@ -61,8 +63,29 @@ In the example above, replace the placeholders with your subscription ID and res > [!IMPORTANT] > It is always a good practice to grant minimum access. The scope in the previous example is limited to the resource group. +# [OpenID Connect](#tab/openid) + +You need to provide your application's **Client ID**, **Tenant ID**, and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option. + +1. Open your GitHub repository and go to **Settings**. + +1. Select **Settings > Secrets > New secret**. + +1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets: + + |GitHub Secret | Active Directory Application | + |---------|---------| + |AZURE_CLIENT_ID | Application (client) ID | + |AZURE_TENANT_ID | Directory (tenant) ID | + |AZURE_SUBSCRIPTION_ID | Subscription ID | + +1. Save each secret by selecting **Add secret**. + +--- ## Configure the GitHub secrets +# [Service principal](#tab/userlevel) + You need to create secrets for your Azure credentials, resource group, and subscriptions. 1. In [GitHub](https://github.com/), browse your repository. @@ -75,6 +98,25 @@ You need to create secrets for your Azure credentials, resource group, and subsc 1. Create an additional secret named `AZURE_SUBSCRIPTION`. Add your subscription ID to the secret's value field (example: `90fd3f9d-4c61-432d-99ba-1273f236afa2`). +# [OpenID Connect](#tab/openid) + +You need to provide your application's **Client ID**, **Tenant ID**, and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option. + +1. Open your GitHub repository and go to **Settings**. + +1. Select **Settings > Secrets > New secret**. + +1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets: + + |GitHub Secret | Active Directory Application | + |---------|---------| + |AZURE_CLIENT_ID | Application (client) ID | + |AZURE_TENANT_ID | Directory (tenant) ID | + |AZURE_SUBSCRIPTION_ID | Subscription ID | + +1. Save each secret by selecting **Add secret**. + +--- ## Add Resource Manager template Add a Resource Manager template to your GitHub repository. This template creates a storage account. @@ -94,8 +136,9 @@ The workflow file must be stored in the **.github/workflows** folder at the root 1. Select **set up a workflow yourself**. 1. Rename the workflow file if you prefer a different name other than **main.yml**. 
For example: **deployStorageAccount.yml**. 1. Replace the content of the yml file with the following: + # [Service principal](#tab/userlevel) - ```yml + ```yml on: [push] name: Azure ARM jobs: @@ -122,15 +165,57 @@ The workflow file must be stored in the **.github/workflows** folder at the root # output containerName variable from template - run: echo ${{ steps.deploy.outputs.containerName }} - ``` + ``` + + > [!NOTE] + > You can specify a JSON format parameters file instead in the ARM Deploy action (example: `.azuredeploy.parameters.json`). + + The first section of the workflow file includes: + + - **name**: The name of the workflow. + - **on**: The name of the GitHub events that triggers the workflow. The workflow is trigger when there is a push event on the main branch, which modifies at least one of the two files specified. The two files are the workflow file and the template file. + + # [OpenID Connect](#tab/openid) + + ```yml + on: [push] + name: Azure ARM + jobs: + build-and-deploy: + runs-on: ubuntu-latest + steps: + + # Checkout code + - uses: actions/checkout@main + + # Log into Azure + - uses: azure/login@v1 + with: + client-id: ${{ secrets.AZURE_CLIENT_ID }} + tenant-id: ${{ secrets.AZURE_TENANT_ID }} + subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }} + + # Deploy ARM template + - name: Run ARM deploy + uses: azure/arm-deploy@v1 + with: + subscriptionId: ${{ secrets.AZURE_SUBSCRIPTION }} + resourceGroupName: ${{ secrets.AZURE_RG }} + template: ./azuredeploy.json + parameters: storageAccountType=Standard_LRS + + # output containerName variable from template + - run: echo ${{ steps.deploy.outputs.containerName }} + ``` - > [!NOTE] - > You can specify a JSON format parameters file instead in the ARM Deploy action (example: `.azuredeploy.parameters.json`). + > [!NOTE] + > You can specify a JSON format parameters file instead in the ARM Deploy action (example: `.azuredeploy.parameters.json`). - The first section of the workflow file includes: + The first section of the workflow file includes: - - **name**: The name of the workflow. - - **on**: The name of the GitHub events that triggers the workflow. The workflow is trigger when there is a push event on the main branch, which modifies at least one of the two files specified. The two files are the workflow file and the template file. + - **name**: The name of the workflow. + - **on**: The name of the GitHub events that triggers the workflow. The workflow is trigger when there is a push event on the main branch, which modifies at least one of the two files specified. The two files are the workflow file and the template file. + --- 1. Select **Start commit**. 1. Select **Commit directly to the main branch**. diff --git a/articles/azure-signalr/signalr-concept-authenticate-oauth.md b/articles/azure-signalr/signalr-concept-authenticate-oauth.md index a312d5b2d942a..c7099af8f2d5c 100644 --- a/articles/azure-signalr/signalr-concept-authenticate-oauth.md +++ b/articles/azure-signalr/signalr-concept-authenticate-oauth.md @@ -57,7 +57,7 @@ To complete this tutorial, you must have the following prerequisites: | Setting Name | Suggested Value | Description | | ------------ | --------------- | ----------- | | Application name | *Azure SignalR Chat* | The GitHub user should be able to recognize and trust the app they are authenticating with. 
| - | Homepage URL | `http://localhost:5000/home` | | + | Homepage URL | `http://localhost:5000` | | | Application description | *A chat room sample using the Azure SignalR Service with GitHub authentication* | A useful description of the application that will help your application users understand the context of the authentication being used. | | Authorization callback URL | `http://localhost:5000/signin-github` | This setting is the most important setting for your OAuth application. It's the callback URL that GitHub returns the user to after successful authentication. In this tutorial, you must use the default callback URL for the *AspNet.Security.OAuth.GitHub* package, */signin-github*. | @@ -547,7 +547,7 @@ The last thing you need to do is update the **Homepage URL** and **Authorization | Setting | Example | | ------- | ------- | - | Homepage URL | `https://signalrtestwebapp22665120.azurewebsites.net/home` | + | Homepage URL | `https://signalrtestwebapp22665120.azurewebsites.net` | | Authorization callback URL | `https://signalrtestwebapp22665120.azurewebsites.net/signin-github` | 3. Navigate to your web app URL and test the application. diff --git a/articles/azure-video-indexer/media/monitor/diagnostics-settings-destination.png b/articles/azure-video-indexer/media/monitor/diagnostics-settings-destination.png new file mode 100644 index 0000000000000..47e92536f0b1d Binary files /dev/null and b/articles/azure-video-indexer/media/monitor/diagnostics-settings-destination.png differ diff --git a/articles/azure-video-indexer/media/monitor/toc-diagnostics-save.png b/articles/azure-video-indexer/media/monitor/toc-diagnostics-save.png new file mode 100644 index 0000000000000..4391227b529a1 Binary files /dev/null and b/articles/azure-video-indexer/media/monitor/toc-diagnostics-save.png differ diff --git a/articles/azure-video-indexer/monitor-video-indexer-data-reference.md b/articles/azure-video-indexer/monitor-video-indexer-data-reference.md new file mode 100644 index 0000000000000..b5bc077244d93 --- /dev/null +++ b/articles/azure-video-indexer/monitor-video-indexer-data-reference.md @@ -0,0 +1,272 @@ +--- +title: Monitoring Azure Video Indexer data reference #Required; *your official service name* +description: Important reference material needed when you monitor Azure Video Indexer +author: itnorman #Required; your GitHub user alias, with correct capitalization. +ms.topic: reference +ms.author: itnorman #Required; Microsoft alias of author; optional team alias. +ms.service: azure-video-indexer #Required; service you are monitoring +ms.custom: subject-monitoring +ms.date: 05/10/2022 #Required; mm/dd/yyyy format. +--- + + + + +# Monitor Azure Video Indexer data reference + +See [Monitoring Azure Video Indexer](monitor-video-indexer.md) for details on collecting and analyzing monitoring data for Azure Video Indexer. + +## Metrics + +Azure Video Indexer currently does not support any monitoring on metrics. + + + + + + + + + + + + + + + + + +For more information, see a list of [all platform metrics supported in Azure Monitor](/azure/azure-monitor/platform/metrics-supported). + +## Metric dimensions + +Azure Video Indexer currently does not support any monitoring on metrics. + + + + + + + +## Resource logs + + +This section lists the types of resource logs you can collect for Azure Video Indexer. + + + +For reference, see a list of [all resource logs category types supported in Azure Monitor](/azure/azure-monitor/platform/resource-logs-schema). 
+ + + + + + + + + + + +### Azure Video Indexer + +Resource Provider and Type: [Microsoft.VideoIndexer/accounts](/azure/azure-monitor/platform/resource-logs-categories#microsoftvideoindexeraccounts) + +| Category | Display Name | Additional information | +|:---------|:-------------|------------------| +| VIAudit | Azure Video Indexer Audit Logs | Logs are produced from both the Video Indexer portal and the REST API. | + + + +## Azure Monitor Logs tables + + +This section refers to all of the Azure Monitor Logs Kusto tables relevant to Azure Video Indexer and available for query by Log Analytics. + + + + + +|Resource Type | Notes | +|-------|-----| +| [Azure Video Indexer](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-video-indexer) | | + + + +### Azure Video Indexer + +| Table | Description | Additional information | +|:---------|:-------------|------------------| +| [VIAudit](/azure/azure-monitor/reference/tables/tables-resourcetype#azure-video-indexer) | Events produced using Azure Video Indexer [portal](https://aka.ms/VIportal) or [REST API](https://aka.ms/vi-dev-portal). | | + + + + + +For a reference of all Azure Monitor Logs / Log Analytics tables, see the [Azure Monitor Log Table Reference](/azure/azure-monitor/reference/tables/tables-resourcetype). + + + + + + +## Activity log + + +The following table lists the operations related to Azure Video Indexer that may be created in the Activity log. + + +| Operation | Description | +|:---|:---| +|Generate_AccessToken | | +|Accounts_Update | | +|Write tags | | +|Create or update resource diagnostic setting| | +|Delete resource diagnostic setting| + + + +For more information on the schema of Activity Log entries, see [Activity Log schema](/azure/azure-monitor/essentials/activity-log-schema). + +## Schemas + + +The following schemas are in use by Azure Video Indexer + + + +```json +{ + "time": "2022-03-22T10:59:39.5596929Z", + "resourceId": "/SUBSCRIPTIONS/602a61eb-c111-43c0-8323-74825230a47d/RESOURCEGROUPS/VI-RESOURCEGROUP/PROVIDERS/MICROSOFT.VIDEOINDEXER/ACCOUNTS/VIDEOINDEXERACCOUNT", + "operationName": "Get-Video-Thumbnail", + "category": "Audit", + "location": "westus2", + "durationMs": "192", + "resultSignature": "200", + "resultType": "Success", + "resultDescription": "Get Video Thumbnail", + "correlationId": "33473fc3-bcbc-4d47-84cc-9fba2f3e9faa", + "callerIpAddress": "46.*****", + "operationVersion": "Operations", + "identity": { + "externalUserId": "4704F34286364F2*****", + "upn": "alias@outlook.com", + "claims": { "permission": "Reader", "scope": "Account" } + }, + "properties": { + "accountName": "videoIndexerAccoount", + "accountId": "8878b584-d8a0-4752-908c-00d6e5597f55", + "videoId": "1e2ddfdd77" + } + } + ``` + +## Next steps + + +- See [Monitoring Azure Azure Video Indexer](monitor-video-indexer.md) for a description of monitoring Azure Video Indexer. +- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources. 
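The `VIAudit` table described above can also be queried outside the portal. As a rough sketch (an illustration rather than part of the original reference), the Azure CLI can run the same Kusto against the Log Analytics workspace that receives the logs; the workspace GUID is a placeholder:

```azurecli
# Query the VIAudit table from the Azure CLI.
# The GUID is a placeholder for the workspace (customer) ID that receives the logs.
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "VIAudit | where Status == 'Failure' | project TimeGenerated, OperationName, Upn | take 20" \
  --output table
```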
diff --git a/articles/azure-video-indexer/monitor-video-indexer.md b/articles/azure-video-indexer/monitor-video-indexer.md new file mode 100644 index 0000000000000..6147e8934bbeb --- /dev/null +++ b/articles/azure-video-indexer/monitor-video-indexer.md @@ -0,0 +1,179 @@ +--- +title: Monitoring Azure Video Indexer #Required; Must be "Monitoring *Azure Video Indexer* +description: Start here to learn how to monitor Azure Video Indexer #Required; +ms.topic: how-to +author: itnorman #Required; your GitHub user alias, with correct capitalization. +ms.author: itnorman #Required; Microsoft alias of author; optional team alias. +ms.service: azure-video-indexer #Required; The service you are monitoring +ms.custom: subject-monitoring +ms.date: 05/10/2022 #Required; mm/dd/yyyy format. +--- + + + + + +# Monitoring Azure Video Indexer + + + +When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. + +This article describes the monitoring data generated by Azure Video Indexer. Azure Video Indexer uses [Azure Monitor](/azure/azure-monitor/overview). If you are unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource). + + + + + + + + + +Some services in Azure have a special focused pre-built monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called "insights". + + + +## Monitoring data + + +Azure Video Indexer collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data-from-Azure-resources). + +See [Monitoring *Azure Video Indexer* data reference](monitor-video-indexer-data-reference.md) for detailed information on the metrics and logs metrics created by Azure Video Indexer. + + + +## Collection and routing + + + +Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting. + +Resource Logs are not collected and stored until you create a diagnostic setting and route them to one or more locations. + + + +See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for *Azure Video Indexer* are listed in [Azure Video Indexer monitoring data reference](monitor-video-indexer-data-reference.md#resource-logs). + +| Category | Description | +|:---|:---| +|Audit | Read/Write operations| + +:::image type="content" source="./media/monitor/toc-diagnostics-save.png" alt-text="Screenshot of diagnostic settings." lightbox="./media/monitor/toc-diagnostics-save.png"::: + +:::image type="content" source="./media/monitor/diagnostics-settings-destination.png" alt-text="Screenshot of where to send lots." lightbox="./media/monitor/diagnostics-settings-destination.png"::: + + +The metrics and logs you can collect are discussed in the following sections. + +## Analyzing metrics + +Currently Azure Video Indexer does not support monitoring of metrics. 
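As a complement to the portal steps shown earlier for creating a diagnostic setting, the same routing can be configured from the Azure CLI. This is a minimal sketch rather than part of the original walkthrough; the subscription ID, resource group, account, and workspace values are placeholders, and it assumes the `Audit` category listed in the table above:

```azurecli
# Route Azure Video Indexer audit logs to a Log Analytics workspace.
# All IDs and names below are placeholders.
az monitor diagnostic-settings create \
  --name "vi-audit-logs" \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.VideoIndexer/accounts/<account-name>" \
  --workspace "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
  --logs '[{"category":"Audit","enabled":true}]'
```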
+ + + + + + + +## Analyzing logs + + + +Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties. + +All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema) The schema for Azure Video Indexer resource logs is found in the [Azure Video Indexer Data Reference](monitor-video-indexer-data-reference.md#schemas) + +The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of platform sign-in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics. + +For a list of the types of resource logs collected for Azure Video Indexer, see [Monitoring Azure Video Indexer data reference](monitor-video-indexer-data-reference.md#resource-logs) + +For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring Azure Video Indexer data reference](monitor-video-indexer-data-reference.md#azure-monitor-logs-tables) + + + +### Sample Kusto queries + + + + +> [!IMPORTANT] +> When you select **Logs** from the Azure Video Indexer account menu, Log Analytics is opened with the query scope set to the current Azure Video Indexer account. This means that log queries will only include data from that resource. If you want to run a query that includes data from other Azure Video Indexer account or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](/azure/azure-monitor/logs/scope) for details. + + + +Following are queries that you can use to help you monitor your Azure Video Indexer account. + + +```kusto +// Project failures summarized by operationName and Upn, aggregated in 30m windows. +VIAudit +| where Status == "Failure" +| summarize count() by OperationName, bin(TimeGenerated, 30m), Upn +| render timechart +``` + +```kusto +// Project failures with detailed error message. +VIAudit +| where Status == "Failure" +| parse Description with "ErrorType: " ErrorType ". Message: " ErrorMessage ". Trace" * +| project TimeGenerated, OperationName, ErrorMessage, ErrorType, CorrelationId, _ResourceId +``` + +## Alerts + + + +Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/alerts/alerts-metric-overview), [logs](/azure/azure-monitor/alerts/alerts-unified-log), and the [activity log](/azure/azure-monitor/alerts/activity-log-alerts). Different types of alerts have benefits and drawbacks. + + + + +The following table lists common and recommended alert rules for Azure Video Indexer. + + +| Alert type | Condition | Description | +|:---|:---|:---| +| Log Alert|Failed operation |Send an alert when an upload failed | + +```kusto +//All failed uploads, aggregated in one hour window. +VIAudit +| where OperationName == "Upload-Video" and Status == "Failure" +| summarize count() by bin(TimeGenerated, 1h) +``` + +## Next steps + + + +- See [Monitoring Azure Video Indexer data reference](monitor-video-indexer-data-reference.md) for a reference of the metrics, logs, and other important values created by Azure Video Indexer account. 
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources. diff --git a/articles/azure-video-indexer/toc.yml b/articles/azure-video-indexer/toc.yml index 32448f9be5949..48ff401bb0e59 100644 --- a/articles/azure-video-indexer/toc.yml +++ b/articles/azure-video-indexer/toc.yml @@ -114,12 +114,16 @@ href: customize-language-model-with-website.md - name: using the API href: customize-language-model-with-api.md + - name: Monitor Video Indexer + href: monitor-video-indexer.md - name: Reference items: - name: Azure Video Indexer API href: https://api-portal.videoindexer.ai/ - name: Azure Video Indexer ARM REST API href: /rest/api/videoindexer/accounts?branch=videoindex + - name: Monitor Video Indexer data reference + href: monitor-video-indexer-data-reference.md - name: Resources items: - name: Azure Roadmap diff --git a/articles/backup/backup-azure-vms-troubleshoot.md b/articles/backup/backup-azure-vms-troubleshoot.md index c56c85080c15b..c5ab69960ee42 100644 --- a/articles/backup/backup-azure-vms-troubleshoot.md +++ b/articles/backup/backup-azure-vms-troubleshoot.md @@ -3,7 +3,7 @@ title: Troubleshoot backup errors with Azure VMs description: In this article, learn how to troubleshoot errors encountered with backup and restore of Azure virtual machines. ms.reviewer: srinathv ms.topic: troubleshooting -ms.date: 06/02/2021 +ms.date: 05/13/2022 --- # Troubleshooting backup failures on Azure virtual machines @@ -23,12 +23,12 @@ This section covers backup operation failure of Azure Virtual machine. * Verify that the VM has internet connectivity. * Make sure another backup service isn't running. * From `Services.msc`, ensure the **Windows Azure Guest Agent** service is **Running**. If the **Windows Azure Guest Agent** service is missing, install it from [Back up Azure VMs in a Recovery Services vault](./backup-azure-arm-vms-prepare.md#install-the-vm-agent). -* The **Event log** may show backup failures that are from other backup products, for example, Windows Server backup, and aren't due to Azure Backup. Use the following steps to determine whether the issue is with Azure Backup: +* The **Event log** may show backup failures that are from other backup products, for example, Windows Server backup aren't happening due to Azure Backup. Use the following steps to determine whether the issue is with Azure Backup: * If there's an error with the entry **Backup** in the event source or message, check whether Azure IaaS VM Backup backups were successful, and whether a Restore Point was created with the desired snapshot type. * If Azure Backup is working, then the issue is likely with another backup solution. * Here is an example of an Event Viewer error 517 where Azure Backup was working fine but "Windows Server Backup" was failing: ![Windows Server Backup failing](media/backup-azure-vms-troubleshoot/windows-server-backup-failing.png) - * If Azure Backup is failing, then look for the corresponding Error Code in the section Common VM backup errors in this article. + * If Azure Backup is failing, then look for the corresponding error code in the [Common issues](#common-issues) section. * If you see Azure Backup option greyed out on an Azure VM, hover over the disabled menu to find the reason. The reasons could be "Not available with EphemeralDisk" or "Not available with Ultra Disk". 
![Reasons for the disablement of Azure Backup option](media/backup-azure-vms-troubleshoot/azure-backup-disable-reasons.png) diff --git a/articles/backup/backup-support-matrix.md b/articles/backup/backup-support-matrix.md index c44ff8bb036ae..8ee684c133ab1 100644 --- a/articles/backup/backup-support-matrix.md +++ b/articles/backup/backup-support-matrix.md @@ -146,8 +146,8 @@ Azure Backup has added the Cross Region Restore feature to strengthen data avail | Backup Management type | Supported | Supported Regions | | ---------------------- | ------------------------------------------------------------ | ----------------- | -| Azure VM | Supported for Azure VMs (including encrypted Azure VMs) with both managed and unmanaged disks. Not supported for classic VMs. | Available in all Azure public regions and sovereign regions, except for UG IOWA and UG Virginia. | -| SQL /SAP HANA | Available | Available in all Azure public regions and sovereign regions, except for France Central, UG IOWA, and UG Virginia. | +| Azure VM | Supported for Azure VMs (including encrypted Azure VMs) with both managed and unmanaged disks. Not supported for classic VMs. | Available in all Azure public regions and sovereign regions, except for UG IOWA. | +| SQL /SAP HANA | Available | Available in all Azure public regions and sovereign regions, except for France Central and UG IOWA. | | MARS Agent/On premises | No | N/A | | AFS (Azure file shares) | No | N/A | diff --git a/articles/backup/scripts/delete-recovery-services-vault.md b/articles/backup/scripts/delete-recovery-services-vault.md index 8c9fff3feeaba..4b689eac5a0df 100644 --- a/articles/backup/scripts/delete-recovery-services-vault.md +++ b/articles/backup/scripts/delete-recovery-services-vault.md @@ -63,19 +63,10 @@ foreach ($softitem in $containerSoftDelete) { Undo-AzRecoveryServicesBackupItemDeletion -Item $softitem -VaultId $VaultToDelete.ID -Force #undelete items in soft delete state } -#Invoking API to disable enhanced security -$azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile -$profileClient = New-Object -TypeName Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient -ArgumentList ($azProfile) -$accesstoken = Get-AzAccessToken -$token = $accesstoken.Token -$authHeader = @{ - 'Content-Type'='application/json' - 'Authorization'='Bearer ' + $token -} -$body = @{properties=@{enhancedSecurityState= "Disabled"}} -$restUri = 'https://management.azure.com/subscriptions/'+$SubscriptionId+'/resourcegroups/'+$ResourceGroup+'/providers/Microsoft.RecoveryServices/vaults/'+$VaultName+'/backupconfig/vaultconfig?api-version=2019-05-13' #Replace "management.azure.com" with "management.usgovcloudapi.net" if your subscription is in USGov. -$response = Invoke-RestMethod -Uri $restUri -Headers $authHeader -Body ($body | ConvertTo-JSON -Depth 9) -Method PATCH +#Invoking API to disable Security features (Enhanced Security) to remove MARS/MAB/DPM servers. +Set-AzRecoveryServicesVaultProperty -VaultId $VaultToDelete.ID -DisableHybridBackupSecurityFeature $true +Write-Host "Disabled Security features for the vault" #Fetch all protected items and servers $backupItemsVM = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureVM -WorkloadType AzureVM -VaultId $VaultToDelete.ID @@ -255,4 +246,4 @@ Remove-AzRecoveryServicesVault -Vault $VaultToDelete ## Next steps -[Learn more](../backup-azure-delete-vault.md) about vault deletion process. 
\ No newline at end of file +[Learn more](../backup-azure-delete-vault.md) about vault deletion process. diff --git a/articles/cognitive-services/QnAMaker/includes/quickstart-sdk-ruby.md b/articles/cognitive-services/QnAMaker/includes/quickstart-sdk-ruby.md index 1301395f13e0a..2803be6674e06 100644 --- a/articles/cognitive-services/QnAMaker/includes/quickstart-sdk-ruby.md +++ b/articles/cognitive-services/QnAMaker/includes/quickstart-sdk-ruby.md @@ -170,7 +170,7 @@ Here is the main method for the application. :::code language="ruby" source="~/cognitive-services-quickstart-code/ruby/qnamaker/sdk/quickstart.rb" id="Main"::: -Run the application with the ruby command on your quickstart file. +Run the application with the Ruby command on your quickstart file. ```console ruby quickstart.rb diff --git a/articles/communication-services/how-tos/calling-sdk/teams-interoperability.md b/articles/communication-services/how-tos/calling-sdk/teams-interoperability.md index 8e6feae28902a..4a5775764ed5b 100644 --- a/articles/communication-services/how-tos/calling-sdk/teams-interoperability.md +++ b/articles/communication-services/how-tos/calling-sdk/teams-interoperability.md @@ -45,13 +45,6 @@ const locator = { const call = callAgent.join(locator); ``` -Join by using meeting id (this is currently in limited preview): - -```js -const locator = { meetingId: ''} -const call = callAgent.join(locator); -``` - ## Next steps - [Learn how to manage calls](./manage-calls.md) - [Learn how to manage video](./manage-video.md) diff --git a/articles/communication-services/how-tos/cte-calling-sdk/includes/manage-calls/manage-calls-web.md b/articles/communication-services/how-tos/cte-calling-sdk/includes/manage-calls/manage-calls-web.md index 7664177325df8..0e8f2542220c5 100644 --- a/articles/communication-services/how-tos/cte-calling-sdk/includes/manage-calls/manage-calls-web.md +++ b/articles/communication-services/how-tos/cte-calling-sdk/includes/manage-calls/manage-calls-web.md @@ -58,11 +58,6 @@ To join a Teams meeting, use the `join` method on `callAgent` and pass either on 2. `meetingLink` 3. Combination of `threadId`, `organizerId`, `tenantId`, `messageId` -#### Join using `meetingId` -```js -const meetingCall = callAgent.join({ meetingId: '' }); -``` - #### Join using `meetingLink` ```js const meetingCall = callAgent.join({ meetingLink: '' }); diff --git a/articles/container-apps/disaster-recovery.md b/articles/container-apps/disaster-recovery.md index 75a395bc37ec3..347a1c7d3ceb8 100644 --- a/articles/container-apps/disaster-recovery.md +++ b/articles/container-apps/disaster-recovery.md @@ -11,7 +11,7 @@ ms.date: 5/10/2022 # Disaster recovery guidance for Azure Container Apps -Azure Container Apps uses [availability zones](../availability-zones/az-overview.md#availability-zones) to offer high-availability protection for your applications and data from data center failures. +Azure Container Apps uses [availability zones](../availability-zones/az-overview.md#availability-zones) where offered to provide high-availability protection for your applications and data from data center failures. Availability zones are unique physical locations within an Azure region. Each zone is made up of one or more data centers equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all enabled regions. 
You can build high availability into your application architecture by co-locating your compute, storage, networking, and data resources within a zone and replicating in other zones. diff --git a/articles/container-apps/samples.md b/articles/container-apps/samples.md index 9bac13db43ab1..448f406245fd2 100644 --- a/articles/container-apps/samples.md +++ b/articles/container-apps/samples.md @@ -15,6 +15,8 @@ Refer to the following samples to learn how to use Azure Container Apps in diffe | Name | Description | |--|--| -| [Deploy an Orleans Cluster to Container Apps](https://github.com/Azure-Samples/Orleans-Cluster-on-Azure-Container-Apps) | An end-to-end sample and tutorial for getting a Microsoft Orleans cluster running on Azure Container Apps. Worker microservices rapidly transmit data to a back-end Orleans cluster for monitoring and storage, emulating thousands of physical devices in the field. | -| [ASP.NET Core front-end with two back-end APIs on Azure Container Apps](https://github.com/Azure-Samples/dotNET-FrontEnd-to-BackEnd-on-Azure-Container-Apps ) | This sample demonstrates ASP.NET Core 6.0 can be used to build a cloud-native application hosted in Azure Container Apps. | -| [ASP.NET Core front-end with two back-end APIs on Azure Container Apps (with Dapr)](https://github.com/Azure-Samples/dotNET-FrontEnd-to-BackEnd-with-DAPR-on-Azure-Container-Apps ) | Demonstrates how ASP.NET Core 6.0 is used to build a cloud-native application hosted in Azure Container Apps using Dapr. | +| [A/B Testing your ASP.NET Core apps using Azure Container Apps](https://github.com/Azure-Samples/dotNET-Frontend-AB-Testing-on-Azure-Container-Apps)
| Shows how to use Azure App Configuration, ASP.NET Core Feature Flags, and Azure Container Apps revisions together to gradually release features or perform A/B tests. |
+| [gRPC with ASP.NET Core on Azure Container Apps](https://github.com/Azure-Samples/dotNET-Workers-with-gRPC-messaging-on-Azure-Container-Apps) | This repository contains a simple scenario built to demonstrate how ASP.NET Core 6.0 can be used to build a cloud-native application hosted in Azure Container Apps that uses gRPC request/response transmission from Worker microservices. The gRPC service simultaneously streams sensor data to a Blazor server frontend, so you can watch the data be charted in real time. |
+| [Deploy an Orleans Cluster to Container Apps](https://github.com/Azure-Samples/Orleans-Cluster-on-Azure-Container-Apps) | An end-to-end sample and tutorial for getting a Microsoft Orleans cluster running on Azure Container Apps. Worker microservices rapidly transmit data to a back-end Orleans cluster for monitoring and storage, emulating thousands of physical devices in the field. |
+| [ASP.NET Core front-end with two back-end APIs on Azure Container Apps](https://github.com/Azure-Samples/dotNET-FrontEnd-to-BackEnd-on-Azure-Container-Apps) | This sample demonstrates how ASP.NET Core 6.0 can be used to build a cloud-native application hosted in Azure Container Apps. |
+| [ASP.NET Core front-end with two back-end APIs on Azure Container Apps (with Dapr)](https://github.com/Azure-Samples/dotNET-FrontEnd-to-BackEnd-with-DAPR-on-Azure-Container-Apps)
| Demonstrates how ASP.NET Core 6.0 is used to build a cloud-native application hosted in Azure Container Apps using Dapr. | diff --git a/articles/cosmos-db/mongodb/find-request-unit-charge-mongodb.md b/articles/cosmos-db/mongodb/find-request-unit-charge-mongodb.md index 927990c91fe50..f8b6964b48bc1 100644 --- a/articles/cosmos-db/mongodb/find-request-unit-charge-mongodb.md +++ b/articles/cosmos-db/mongodb/find-request-unit-charge-mongodb.md @@ -6,7 +6,7 @@ ms.author: gahllevy ms.service: cosmos-db ms.subservice: cosmosdb-mongo ms.topic: how-to -ms.date: 08/26/2021 +ms.date: 05/12/2022 ms.devlang: csharp, java, javascript ms.custom: devx-track-csharp, devx-track-java, devx-track-js --- @@ -18,7 +18,7 @@ Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, an The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). Request charge is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and it's considerations](../request-units.md) article. -This article presents the different ways you can find the [request unit](../request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB API for MongoDB. If you are using a different API, see [SQL API](../find-request-unit-charge.md), [Cassandra API](../cassandra/find-request-unit-charge-cassandra.md), [Gremlin API](../find-request-unit-charge-gremlin.md), and [Table API](../table/find-request-unit-charge.md) articles to find the RU/s charge. +This article presents the different ways you can find the [request unit](../request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB API for MongoDB. If you're using a different API, see [SQL API](../find-request-unit-charge.md), [Cassandra API](../cassandra/find-request-unit-charge-cassandra.md), [Gremlin API](../find-request-unit-charge-gremlin.md), and [Table API](../table/find-request-unit-charge.md) articles to find the RU/s charge. The RU charge is exposed by a custom [database command](https://docs.mongodb.com/manual/reference/command/) named `getLastRequestStatistics`. The command returns a document that contains the name of the last operation executed, its request charge, and its duration. If you use the Azure Cosmos DB API for MongoDB, you have multiple options for retrieving the RU charge. @@ -42,7 +42,9 @@ The RU charge is exposed by a custom [database command](https://docs.mongodb.com `db.runCommand({getLastRequestStatistics: 1})` -## Use the MongoDB .NET driver +## Use a MongoDB driver + +### [.NET driver](#tab/dotnet-driver) When you use the [official MongoDB .NET driver](https://docs.mongodb.com/ecosystem/drivers/csharp/), you can execute commands by calling the `RunCommand` method on a `IMongoDatabase` object. This method requires an implementation of the `Command<>` abstract class: @@ -61,8 +63,7 @@ double requestCharge = (double)stats["RequestCharge"]; For more information, see [Quickstart: Build a .NET web app by using an Azure Cosmos DB API for MongoDB](create-mongodb-dotnet.md). 
-## Use the MongoDB Java driver - +### [Java driver](#tab/java-driver) When you use the [official MongoDB Java driver](https://mongodb.github.io/mongo-java-driver/), you can execute commands by calling the `runCommand` method on a `MongoDatabase` object: @@ -73,7 +74,7 @@ Double requestCharge = stats.getDouble("RequestCharge"); For more information, see [Quickstart: Build a web app by using the Azure Cosmos DB API for MongoDB and the Java SDK](create-mongodb-java.md). -## Use the MongoDB Node.js driver +### [Node.js driver](#tab/node-driver) When you use the [official MongoDB Node.js driver](https://mongodb.github.io/node-mongodb-native/), you can execute commands by calling the `command` method on a `db` object: @@ -86,6 +87,15 @@ db.command({ getLastRequestStatistics: 1 }, function(err, result) { For more information, see [Quickstart: Migrate an existing MongoDB Node.js web app to Azure Cosmos DB](create-mongodb-nodejs.md). +### [Python driver](#tab/python-driver) + +```python +response = db.command('getLastRequestStatistics') +requestCharge = response['RequestCharge'] +``` + +--- + ## Next steps To learn about optimizing your RU consumption, see these articles: @@ -94,5 +104,5 @@ To learn about optimizing your RU consumption, see these articles: * [Optimize provisioned throughput cost in Azure Cosmos DB](../optimize-cost-throughput.md) * [Optimize query cost in Azure Cosmos DB](../optimize-cost-reads-writes.md) * Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. - * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) + * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md) \ No newline at end of file diff --git a/articles/cosmos-db/sql/how-to-time-to-live.md b/articles/cosmos-db/sql/how-to-time-to-live.md index 927a66b928baa..454dac1ae8f49 100644 --- a/articles/cosmos-db/sql/how-to-time-to-live.md +++ b/articles/cosmos-db/sql/how-to-time-to-live.md @@ -1,13 +1,13 @@ --- title: Configure and manage Time to Live in Azure Cosmos DB description: Learn how to configure and manage time to live on a container and an item in Azure Cosmos DB -author: rothja +author: seesharprun ms.service: cosmos-db ms.subservice: cosmosdb-sql ms.topic: how-to -ms.date: 12/09/2021 -ms.author: jroth -ms.devlang: csharp +ms.date: 05/12/2022 +ms.author: sidandrews +ms.reviewer: mjbrown ms.custom: devx-track-js, devx-track-csharp, devx-track-azurecli --- @@ -16,11 +16,11 @@ ms.custom: devx-track-js, devx-track-csharp, devx-track-azurecli In Azure Cosmos DB, you can choose to configure Time to Live (TTL) at the container level, or you can override it at an item level after setting for the container. You can configure TTL for a container by using Azure portal or the language-specific SDKs. Item level TTL overrides can be configured by using the SDKs. -> This content is related to Azure Cosmos DB transactional store TTL. 
If you are looking for analitycal store TTL, that enables NoETL HTAP scenarios through [Azure Synapse Link](../synapse-link.md), please click [here](../analytical-store-introduction.md#analytical-ttl). +> This article's content is related to Azure Cosmos DB transactional store TTL. If you are looking for analitycal store TTL, that enables NoETL HTAP scenarios through [Azure Synapse Link](../synapse-link.md), please click [here](../analytical-store-introduction.md#analytical-ttl). -## Enable time to live on a container using Azure portal +## Enable time to live on a container using the Azure portal -Use the following steps to enable time to live on a container with no expiration. Enable this to allow TTL to be overridden at the item level. You can also set the TTL by entering a non-zero value for seconds. +Use the following steps to enable time to live on a container with no expiration. Enabling TTL at the container level to allow the same value to be overridden at an individual item's level. You can also set the TTL by entering a non-zero value for seconds. 1. Sign in to the [Azure portal](https://portal.azure.com/). @@ -36,177 +36,169 @@ Use the following steps to enable time to live on a container with no expiration * Set it to **On (no default)** or * Turn **On** with a TTL value specified in seconds. - * Click **Save** to save the changes. + * Select **Save** to save the changes. :::image type="content" source="./media/how-to-time-to-live/how-to-time-to-live-portal.png" alt-text="Configure Time to live in Azure portal"::: -* When DefaultTimeToLive is null then your Time to Live is Off -* When DefaultTimeToLive is -1 then your Time to Live setting is On (No default) -* When DefaultTimeToLive has any other Int value (except 0) your Time to Live setting is On. The server will automatically delete items based on the configured value. +* When DefaultTimeToLive is null, then your Time to Live is Off +* When DefaultTimeToLive is -1 then, your Time to Live setting is On (No default) +* When DefaultTimeToLive has any other Int value (except 0), then your Time to Live setting is On. The server will automatically delete items based on the configured value. 
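+As a quick illustration of the same container-level setting outside the portal, here's a minimal Azure CLI sketch. The account, resource group, database, and container names are placeholders, and the partition key path simply mirrors the SDK examples later in this article; the articles linked in the next section cover the CLI and PowerShell options in full.
+
+```azurecli
+# Create a container with TTL enabled; items expire one hour after their last write.
+# Use a TTL of -1 instead to match the portal's "On (no default)" option.
+az cosmosdb sql container create \
+  --account-name <cosmos-account> \
+  --resource-group <resource-group> \
+  --database-name <database> \
+  --name <container> \
+  --partition-key-path "/customerId" \
+  --ttl 3600
+```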
-## Enable time to live on a container using Azure CLI or PowerShell +## Enable time to live on a container using Azure CLI or Azure PowerShell To create or enable TTL on a container see, * [Create a container with TTL using Azure CLI](manage-with-cli.md#create-a-container-with-ttl) * [Create a container with TTL using PowerShell](manage-with-powershell.md#create-container-unique-key-ttl) -## Enable time to live on a container using SDK +## Enable time to live on a container using an SDK -### .NET SDK +### [.NET SDK v3](#tab/dotnet-sdk-v3) -# [.NET SDK V2](#tab/dotnetv2) +```csharp +Database database = client.GetDatabase("database"); -.NET SDK V2 (Microsoft.Azure.DocumentDB) +ContainerProperties properties = new () +{ + Id = "container", + PartitionKeyPath = "/customerId", + // Never expire by default + DefaultTimeToLive = -1 +}; -```csharp // Create a new container with TTL enabled and without any expiration value -DocumentCollection collectionDefinition = new DocumentCollection(); -collectionDefinition.Id = "myContainer"; -collectionDefinition.PartitionKey.Paths.Add("/myPartitionKey"); -collectionDefinition.DefaultTimeToLive = -1; //(never expire by default) - -DocumentCollection ttlEnabledCollection = await client.CreateDocumentCollectionAsync( - UriFactory.CreateDatabaseUri("myDatabaseName"), - collectionDefinition); +Container container = await database + .CreateContainerAsync(properties); ``` -# [.NET SDK V3](#tab/dotnetv3) +### [Java SDK v4](#tab/javav4) -.NET SDK V3 (Microsoft.Azure.Cosmos) +```java +CosmosDatabase database = client.getDatabase("database"); + +CosmosContainerProperties properties = new CosmosContainerProperties( + "container", + "/customerId" +); +// Never expire by default +properties.setDefaultTimeToLiveInSeconds(-1); -```csharp // Create a new container with TTL enabled and without any expiration value -await client.GetDatabase("database").CreateContainerAsync(new ContainerProperties -{ - Id = "container", - PartitionKeyPath = "/myPartitionKey", - DefaultTimeToLive = -1 //(never expire by default) -}); +CosmosContainerResponse response = database + .createContainerIfNotExists(properties); ``` ---- -### Java SDK +### [Node SDK](#tab/node-sdk) -# [Java SDK V4](#tab/javav4) +```javascript +const database = await client.database("database"); -Java SDK V4 (Maven com.azure::azure-cosmos) +const properties = { + id: "container", + partitionKey: "/customerId", + // Never expire by default + defaultTtl: -1 +}; -```java -CosmosAsyncContainer container; +const { container } = await database.containers + .createIfNotExists(properties); -// Create a new container with TTL enabled and without any expiration value -CosmosContainerProperties containerProperties = new CosmosContainerProperties("myContainer", "/myPartitionKey"); -containerProperties.setDefaultTimeToLiveInSeconds(-1); -container = database.createContainerIfNotExists(containerProperties, 400).block().getContainer(); ``` -# [Java SDK V3](#tab/javav3) - -Java SDK V3 (Maven com.microsoft.azure::azure-cosmos) +### [Python SDK](#tab/python-sdk) -```java -CosmosContainer container; +```python +database = client.get_database_client('database') -// Create a new container with TTL enabled and without any expiration value -CosmosContainerProperties containerProperties = new CosmosContainerProperties("myContainer", "/myPartitionKey"); -containerProperties.defaultTimeToLive(-1); -container = database.createContainerIfNotExists(containerProperties, 400).block().container(); +database.create_container( + id='container', + 
partition_key=PartitionKey(path='/customerId'), + # Never expire by default + default_ttl=-1 +) ``` + --- -## Set time to live on a container using SDK +## Set time to live on a container using an SDK To set the time to live on a container, you need to provide a non-zero positive number that indicates the time period in seconds. Based on the configured TTL value, all items in the container after the last modified timestamp of the item `_ts` are deleted. -### .NET SDK - -# [.NET SDK V2](#tab/dotnetv2) - -.NET SDK V2 (Microsoft.Azure.DocumentDB) +### [.NET SDK v3](#tab/dotnet-sdk-v3) ```csharp -// Create a new container with TTL enabled and a 90 day expiration -DocumentCollection collectionDefinition = new DocumentCollection(); -collectionDefinition.Id = "myContainer"; -collectionDefinition.PartitionKey.Paths.Add("/myPartitionKey"); -collectionDefinition.DefaultTimeToLive = 90 * 60 * 60 * 24 // expire all documents after 90 days - -DocumentCollection ttlEnabledCollection = await client.CreateDocumentCollectionAsync( - UriFactory.CreateDatabaseUri("myDatabaseName"), - collectionDefinition; -``` +Database database = client.GetDatabase("database"); -# [.NET SDK V3](#tab/dotnetv3) - -.NET SDK V3 (Microsoft.Azure.Cosmos) - -```csharp -// Create a new container with TTL enabled and a 90 day expiration -await client.GetDatabase("database").CreateContainerAsync(new ContainerProperties +ContainerProperties properties = new () { Id = "container", - PartitionKeyPath = "/myPartitionKey", - DefaultTimeToLive = 90 * 60 * 60 * 24 // expire all documents after 90 days -}); -``` ---- - -### Java SDK + PartitionKeyPath = "/customerId", + // Expire all documents after 90 days + DefaultTimeToLive = 90 * 60 * 60 * 24 +}; -# [Java SDK V4](#tab/javav4) +// Create a new container with TTL enabled and without any expiration value +Container container = await database + .CreateContainerAsync(properties); +``` -Java SDK V4 (Maven com.azure::azure-cosmos) +### [Java SDK v4](#tab/javav4) ```java -CosmosAsyncContainer container; +CosmosDatabase database = client.getDatabase("database"); + +CosmosContainerProperties properties = new CosmosContainerProperties( + "container", + "/customerId" +); +// Expire all documents after 90 days +properties.setDefaultTimeToLiveInSeconds(90 * 60 * 60 * 24); -// Create a new container with TTL enabled with default expiration value -CosmosContainerProperties containerProperties = new CosmosContainerProperties("myContainer", "/myPartitionKey"); -containerProperties.setDefaultTimeToLiveInSeconds(90 * 60 * 60 * 24); -container = database.createContainerIfNotExists(containerProperties, 400).block().getContainer(); +CosmosContainerResponse response = database + .createContainerIfNotExists(properties); ``` -# [Java SDK V3](#tab/javav3) +### [Node SDK](#tab/node-sdk) -Java SDK V3 (Maven com.microsoft.azure::azure-cosmos) +```javascript +const database = await client.database("database"); -```java -CosmosContainer container; +const properties = { + id: "container", + partitionKey: "/customerId", + // Expire all documents after 90 days + defaultTtl: 90 * 60 * 60 * 24 +}; -// Create a new container with TTL enabled with default expiration value -CosmosContainerProperties containerProperties = new CosmosContainerProperties("myContainer", "/myPartitionKey"); -containerProperties.defaultTimeToLive(90 * 60 * 60 * 24); -container = database.createContainerIfNotExists(containerProperties, 400).block().container(); +const { container } = await database.containers + .createIfNotExists(properties); ``` ---- 
-### NodeJS SDK +### [Python SDK](#tab/python-sdk) -```javascript -const containerDefinition = { - id: "sample container1", - }; - -async function createcontainerWithTTL(db: Database, containerDefinition: ContainerDefinition, collId: any, defaultTtl: number) { - containerDefinition.id = collId; - containerDefinition.defaultTtl = defaultTtl; - await db.containers.create(containerDefinition); -} +```python +database = client.get_database_client('database') + +database.create_container( + id='container', + partition_key=PartitionKey(path='/customerId'), + # Expire all documents after 90 days + default_ttl=90 * 60 * 60 * 24 +) ``` -## Set time to live on an item +--- + +## Set time to live on an item using the Portal In addition to setting a default time to live on a container, you can set a time to live for an item. Setting time to live at the item level will override the default TTL of the item in that container. -* To set the TTL on an item, you need to provide a non-zero positive number, which indicates the period, in seconds, to expire the item after the last modified timestamp of the item `_ts`. You can provide a `-1` as well when the item should not expire. +* To set the TTL on an item, you need to provide a non-zero positive number, which indicates the period, in seconds, to expire the item after the last modified timestamp of the item `_ts`. You can provide a `-1` as well when the item shouldn't expire. * If the item doesn't have a TTL field, then by default, the TTL set to the container will apply to the item. * If TTL is disabled at the container level, the TTL field on the item will be ignored until TTL is re-enabled on the container. -### Azure portal - Use the following steps to enable time to live on an item: 1. Sign in to the [Azure portal](https://portal.azure.com/). @@ -217,346 +209,237 @@ Use the following steps to enable time to live on an item: 4. Select an existing container, expand it and modify the following values: - * Open the **Scale & Settings** window. - * Under **Setting** find, **Time to Live**. - * Select **On (no default)** or select **On** and set a TTL value. - * Click **Save** to save the changes. + * Open the **Scale & Settings** window. + * Under **Setting** find, **Time to Live**. + * Select **On (no default)** or select **On** and set a TTL value. + * Select **Save** to save the changes. 5. Next navigate to the item for which you want to set time to live, add the `ttl` property and select **Update**. - ```json - { - "id": "1", - "_rid": "Jic9ANWdO-EFAAAAAAAAAA==", - "_self": "dbs/Jic9AA==/colls/Jic9ANWdO-E=/docs/Jic9ANWdO-EFAAAAAAAAAA==/", - "_etag": "\"0d00b23f-0000-0000-0000-5c7712e80000\"", - "_attachments": "attachments/", - "ttl": 10, - "_ts": 1551307496 - } - ``` - -### .NET SDK (any) + ```json + { + "id": "1", + "_rid": "Jic9ANWdO-EFAAAAAAAAAA==", + "_self": "dbs/Jic9AA==/colls/Jic9ANWdO-E=/docs/Jic9ANWdO-EFAAAAAAAAAA==/", + "_etag": "\"0d00b23f-0000-0000-0000-5c7712e80000\"", + "_attachments": "attachments/", + "ttl": 10, + "_ts": 1551307496 + } + ``` -```csharp -// Include a property that serializes to "ttl" in JSON -public class SalesOrder -{ - [JsonProperty(PropertyName = "id")] - public string Id { get; set; } - [JsonProperty(PropertyName="cid")] - public string CustomerId { get; set; } - // used to set expiration policy - [JsonProperty(PropertyName = "ttl", NullValueHandling = NullValueHandling.Ignore)] - public int? ttl { get; set; } - - //... 
-} -// Set the value to the expiration in seconds -SalesOrder salesOrder = new SalesOrder -{ - Id = "SO05", - CustomerId = "CO18009186470", - ttl = 60 * 60 * 24 * 30; // Expire sales orders in 30 days -}; -``` +## Set time to live on an item using an SDK -### NodeJS SDK +### [.NET SDK v3](#tab/dotnet-sdk-v3) -```javascript -const itemDefinition = { - id: "doc", - name: "sample Item", - key: "value", - ttl: 2 - }; +```csharp +public record SalesOrder(string id, string customerId, int? ttl); ``` -### Java SDK - -# [Java SDK V4](#tab/javav4) - -Java SDK V4 (Maven com.azure::azure-cosmos) - -```java -// Include a property that serializes to "ttl" in JSON -public class SalesOrder -{ - private String id; - private String customerId; - private Integer ttl; - - public SalesOrder(String id, String customerId, Integer ttl) { - this.id = id; - this.customerId = customerId; - this.ttl = ttl; - } - - public String getId() {return this.id;} - public void setId(String new_id) {this.id = new_id;} - public String getCustomerId() {return this.customerId;} - public void setCustomerId(String new_cid) {this.customerId = new_cid;} - public Integer getTtl() {return this.ttl;} - public void setTtl(Integer new_ttl) {this.ttl = new_ttl;} - - //... -} +```csharp +Container container = database.GetContainer("container"); -// Set the value to the expiration in seconds -SalesOrder salesOrder = new SalesOrder( - "SO05", - "CO18009186470", - 60 * 60 * 24 * 30 // Expire sales orders in 30 days +SalesOrder item = new ( + "SO05", + "CO18009186470" + // Expire sales order in 30 days using "ttl" property + ttl: 60 * 60 * 24 * 30 ); +await container.CreateItemAsync(item); ``` -# [Java SDK V3](#tab/javav3) - -Java SDK V3 (Maven com.microsoft.azure::azure-cosmos) +### [Java SDK v4](#tab/javav4) ```java -// Include a property that serializes to "ttl" in JSON -public class SalesOrder -{ - private String id; - private String customerId; - private Integer ttl; - - public SalesOrder(String id, String customerId, Integer ttl) { - this.id = id; - this.customerId = customerId; - this.ttl = ttl; - } +public class SalesOrder { - public String id() {return this.id;} - public void id(String new_id) {this.id = new_id;} - public String customerId() {return this.customerId;} - public void customerId(String new_cid) {this.customerId = new_cid;} - public Integer ttl() {return this.ttl;} - public void ttl(Integer new_ttl) {this.ttl = new_ttl;} + public String id; - //... -} + public String customerId; -// Set the value to the expiration in seconds -SalesOrder salesOrder = new SalesOrder( - "SO05", - "CO18009186470", - 60 * 60 * 24 * 30 // Expire sales orders in 30 days -); + // Include a property that serializes to "ttl" in JSON + public Integer ttl; +} ``` ---- - -## Reset time to live - -You can reset the time to live on an item by performing a write or update operation on the item. The write or update operation will set the `_ts` to the current time, and the TTL for the item to expire will begin again. If you wish to change the TTL of an item, you can update the field just as you update any other field. -### .NET SDK - -# [.NET SDK V2](#tab/dotnetv2) +```java +CosmosContainer container = database.getContainer("container"); -.NET SDK V2 (Microsoft.Azure.DocumentDB) +SalesOrder item = new SalesOrder(); +item.id = "SO05"; +item.customerId = "CO18009186470"; +// Expire sales order in 30 days using "ttl" property +item.ttl = 60 * 60 * 24 * 30; -```csharp -// This examples leverages the Sales Order class above. 
-// Read a document, update its TTL, save it. -response = await client.ReadDocumentAsync( - "/dbs/salesdb/colls/orders/docs/SO05"), - new RequestOptions { PartitionKey = new PartitionKey("CO18009186470") }); - -Document readDocument = response.Resource; -readDocument.ttl = 60 * 30 * 30; // update time to live -response = await client.ReplaceDocumentAsync(readDocument); +container.createItem(item); ``` -# [.NET SDK V3](#tab/dotnetv3) +### [Node SDK](#tab/node-sdk) -.NET SDK V3 (Microsoft.Azure.Cosmos) +```javascript +const container = await database.container("container"); -```csharp -// This examples leverages the Sales Order class above. -// Read a document, update its TTL, save it. -ItemResponse itemResponse = await client.GetContainer("database", "container").ReadItemAsync("SO05", new PartitionKey("CO18009186470")); +const item = { + id: 'SO05', + customerId: 'CO18009186470', + // Expire sales order in 30 days using "ttl" property + ttl: 60 * 60 * 24 * 30 +}; -itemResponse.Resource.ttl = 60 * 30 * 30; // update time to live -await client.GetContainer("database", "container").ReplaceItemAsync(itemResponse.Resource, "SO05"); +await container.items.create(item); ``` ---- -### Java SDK +### [Python SDK](#tab/python-sdk) -# [Java SDK V4](#tab/javav4) +```python +container = database.get_container_client('container') -Java SDK V4 (Maven com.azure::azure-cosmos) +item = { + 'id': 'SO05', + 'customerId': 'CO18009186470', + # Expire sales order in 30 days using "ttl" property + 'ttl': 60 * 60 * 24 * 30 +} -```java -// This examples leverages the Sales Order class above. -// Read a document, update its TTL, save it. -CosmosAsyncItemResponse itemResponse = container.readItem("SO05", new PartitionKey("CO18009186470"), SalesOrder.class) - .flatMap(readResponse -> { - SalesOrder salesOrder = readResponse.getItem(); - salesOrder.setTtl(60 * 30 * 30); - return container.createItem(salesOrder); -}).block(); +container.create_item(body=item) ``` -# [Java SDK V3](#tab/javav3) - -SDK V3 (Maven com.microsoft.azure::azure-cosmos) - -```java -// This examples leverages the Sales Order class above. -// Read a document, update its TTL, save it. -container.getItem("SO05", new PartitionKey("CO18009186470")).read() - .flatMap(readResponse -> { - SalesOrder salesOrder = null; - try { - salesOrder = readResponse.properties().getObject(SalesOrder.class); - } catch (Exception err) { - - } - salesOrder.ttl(60 * 30 * 30); - return container.createItem(salesOrder); -}).block(); -``` --- -## Turn off time to live - -If time to live has been set on an item and you no longer want that item to expire, then you can get the item, remove the TTL field, and replace the item on the server. When the TTL field is removed from the item, the default TTL value assigned to the container is applied to the item. Set the TTL value to -1 to prevent an item from expiring and to not inherit the TTL value from the container. - -### .NET SDK +## Reset time to live using an SDK -# [.NET SDK V2](#tab/dotnetv2) +You can reset the time to live on an item by performing a write or update operation on the item. The write or update operation will set the `_ts` to the current time, and the TTL for the item to expire will begin again. If you wish to change the TTL of an item, you can update the field just as you update any other field. -.NET SDK V2 (Microsoft.Azure.DocumentDB) +### [.NET SDK v3](#tab/dotnet-sdk-v3) ```csharp -// This examples leverages the Sales Order class above. -// Read a document, turn off its override TTL, save it. 
-response = await client.ReadDocumentAsync( - "/dbs/salesdb/colls/orders/docs/SO05"), - new RequestOptions { PartitionKey = new PartitionKey("CO18009186470") }); +SalesOrder item = await container.ReadItemAsync( + "SO05", + new PartitionKey("CO18009186470") +); -Document readDocument = response.Resource; -readDocument.ttl = null; // inherit the default TTL of the container +// Update ttl to 2 hours +SalesOrder modifiedItem = item with { + ttl = 60 * 60 * 2 +}; -response = await client.ReplaceDocumentAsync(readDocument); +await container.ReplaceItemAsync( + modifiedItem, + "SO05", + new PartitionKey("CO18009186470") +); ``` -# [.NET SDK V3](#tab/dotnetv3) +### [Java SDK v4](#tab/javav4) -.NET SDK V3 (Microsoft.Azure.Cosmos) +```java +CosmosItemResponse response = container.readItem( + "SO05", + new PartitionKey("CO18009186470"), + SalesOrder.class +); -```csharp -// This examples leverages the Sales Order class above. -// Read a document, turn off its override TTL, save it. -ItemResponse itemResponse = await client.GetContainer("database", "container").ReadItemAsync("SO05", new PartitionKey("CO18009186470")); +SalesOrder item = response.getItem(); -itemResponse.Resource.ttl = null; // inherit the default TTL of the container -await client.GetContainer("database", "container").ReplaceItemAsync(itemResponse.Resource, "SO05"); -``` ---- - -### Java SDK +// Update ttl to 2 hours +item.ttl = 60 * 60 * 2; -# [Java SDK V4](#tab/javav4) +CosmosItemRequestOptions options = new CosmosItemRequestOptions(); +container.replaceItem( + item, + "SO05", + new PartitionKey("CO18009186470"), + options +); +``` -Java SDK V4 (Maven com.azure::azure-cosmos) +### [Node SDK](#tab/node-sdk) -```java -// This examples leverages the Sales Order class above. -// Read a document, update its TTL, save it. -CosmosAsyncItemResponse itemResponse = container.readItem("SO05", new PartitionKey("CO18009186470"), SalesOrder.class) - .flatMap(readResponse -> { - SalesOrder salesOrder = readResponse.getItem(); - salesOrder.setTtl(null); - return container.createItem(salesOrder); -}).block(); +```javascript +const { resource: item } = await container.item( + 'SO05', + 'CO18009186470' +).read(); + +// Update ttl to 2 hours +item.ttl = 60 * 60 * 2; + +await container.item( + 'SO05', + 'CO18009186470' +).replace(item); ``` -# [Java SDK V3](#tab/javav3) +### [Python SDK](#tab/python-sdk) -Java SDK V3 (Maven com.microsoft.azure::azure-cosmos) +```python +item = container.read_item( + item='SO05', + partition_key='CO18009186470' +) -```java -// This examples leverages the Sales Order class above. -// Read a document, update its TTL, save it. -container.getItem("SO05", new PartitionKey("CO18009186470")).read() - .flatMap(readResponse -> { - SalesOrder salesOrder = null; - try { - salesOrder = readResponse.properties().getObject(SalesOrder.class); - } catch (Exception err) { - - } - salesOrder.ttl(null); - return container.createItem(salesOrder); -}).block(); +# Update ttl to 2 hours +item['ttl'] = 60 * 60 * 2 + +container.replace_item( + item='SO05', + body=item +) ``` + --- -## Disable time to live +## Disable time to live using an SDK To disable time to live on a container and stop the background process from checking for expired items, the `DefaultTimeToLive` property on the container should be deleted. Deleting this property is different from setting it to -1. When you set it to -1, new items added to the container will live forever, however you can override this value on specific items in the container. 
When you remove the TTL property from the container the items will never expire, even if there are they have explicitly overridden the previous default TTL value. -### .NET SDK +### [.NET SDK v3](#tab/dotnet-sdk-v3) -# [.NET SDK V2](#tab/dotnetv2) +```csharp +ContainerProperties properties = await container.ReadContainerAsync(); -.NET SDK V2 (Microsoft.Azure.DocumentDB) +// Disable ttl at container-level +properties.DefaultTimeToLive = null; -```csharp -// Get the container, update DefaultTimeToLive to null -DocumentCollection collection = await client.ReadDocumentCollectionAsync("/dbs/salesdb/colls/orders"); -// Disable TTL -collection.DefaultTimeToLive = null; -await client.ReplaceDocumentCollectionAsync(collection); +await container.ReplaceContainerAsync(properties); ``` -# [.NET SDK V3](#tab/dotnetv3) +### [Java SDK v4](#tab/javav4) -.NET SDK V3 (Microsoft.Azure.Cosmos) +```java +CosmosContainerResponse response = container.read(); +CosmosContainerProperties properties = response.getProperties(); -```csharp -// Get the container, update DefaultTimeToLive to null -ContainerResponse containerResponse = await client.GetContainer("database", "container").ReadContainerAsync(); -// Disable TTL -containerResponse.Resource.DefaultTimeToLive = null; -await client.GetContainer("database", "container").ReplaceContainerAsync(containerResponse.Resource); +// Disable ttl at container-level +properties.setDefaultTimeToLiveInSeconds(null); + +container.replace(properties); ``` ---- -### Java SDK +### [Node SDK](#tab/node-sdk) -# [Java SDK V4](#tab/javav4) +```javascript +const { resource: definition } = await container.read(); -Java SDK V4 (Maven com.azure::azure-cosmos) +// Disable ttl at container-level +definition.defaultTtl = null; -```java -CosmosContainerProperties containerProperties = new CosmosContainerProperties("myContainer", "/myPartitionKey"); -// Disable TTL -containerProperties.setDefaultTimeToLiveInSeconds(null); -// Update container settings -container.replace(containerProperties).block(); +await container.replace(definition); ``` -# [Java SDK V3](#tab/javav3) +### [Python SDK](#tab/python-sdk) -Java SDK V3 (Maven com.microsoft.azure::azure-cosmos) - -```java -CosmosContainer container; - -// Create a new container with TTL enabled and without any expiration value -CosmosContainerProperties containerProperties = new CosmosContainerProperties("myContainer", "/myPartitionKey"); -// Disable TTL -containerProperties.defaultTimeToLive(null); -// Update container settings -container = database.createContainerIfNotExists(containerProperties, 400).block().container(); +```python +database.replace_container( + container, + partition_key=PartitionKey(path='/id'), + # Disable ttl at container-level + default_ttl=None +) ``` + --- ## Next steps diff --git a/articles/cosmos-db/sql/kafka-connector-sink.md b/articles/cosmos-db/sql/kafka-connector-sink.md index d4dcfb1751f7c..5c7dc2ae8ff3b 100644 --- a/articles/cosmos-db/sql/kafka-connector-sink.md +++ b/articles/cosmos-db/sql/kafka-connector-sink.md @@ -232,7 +232,7 @@ For more information on using this SMT, see the [InsertUUID repository](https:// ### Using SMTs to configure Time to live (TTL) -Using both the `InsertField` and `Cast` SMTs, you can configure TTL on each item created in Azure Cosmos DB. Enable TTL on the container before enabling TTL at an item level. For more information, see the [time-to-live](how-to-time-to-live.md#enable-time-to-live-on-a-container-using-azure-portal) doc. 
+Using both the `InsertField` and `Cast` SMTs, you can configure TTL on each item created in Azure Cosmos DB. Enable TTL on the container before enabling TTL at an item level. For more information, see the [time-to-live](how-to-time-to-live.md#enable-time-to-live-on-a-container-using-the-azure-portal) doc. Inside your Sink connector config, add the following properties to set the TTL in seconds. In this following example, the TTL is set to 100 seconds. If the message already contains the `TTL` field, the `TTL` value will be overwritten by these SMTs. diff --git a/articles/data-factory/connector-azure-database-for-mysql.md b/articles/data-factory/connector-azure-database-for-mysql.md index 273efcc5efbdc..f17fd39233a0b 100644 --- a/articles/data-factory/connector-azure-database-for-mysql.md +++ b/articles/data-factory/connector-azure-database-for-mysql.md @@ -8,7 +8,7 @@ ms.service: data-factory ms.subservice: data-movement ms.topic: conceptual ms.custom: synapse -ms.date: 12/20/2021 +ms.date: 05/12/2022 --- # Copy and transform data in Azure Database for MySQL using Azure Data Factory or Synapse Analytics @@ -294,6 +294,12 @@ The below table lists the properties supported by Azure Database for MySQL sink. > 1. It's recommended to break single batch scripts with multiple commands into multiple batches. > 2. Only Data Definition Language (DDL) and Data Manipulation Language (DML) statements that return a simple update count can be run as part of a batch. Learn more from [Performing batch operations](/sql/connect/jdbc/performing-batch-operations) +* Enable incremental extract: Use this option to tell ADF to only process rows that have changed since the last time that the pipeline executed. + +* Incremental date column: When using the incremental extract feature, you must choose the date/time column that you wish to use as the watermark in your source table. + +* Start reading from beginning: Setting this option with incremental extract will instruct ADF to read all rows on first execution of a pipeline with incremental extract turned on. + #### Azure Database for MySQL sink script example When you use Azure Database for MySQL as sink type, the associated data flow script is: diff --git a/articles/data-factory/connector-azure-database-for-postgresql.md b/articles/data-factory/connector-azure-database-for-postgresql.md index b353a3a601ae4..523cd51a2dbe4 100644 --- a/articles/data-factory/connector-azure-database-for-postgresql.md +++ b/articles/data-factory/connector-azure-database-for-postgresql.md @@ -336,6 +336,13 @@ The below table lists the properties supported by Azure Database for PostgreSQL > 1. It's recommended to break single batch scripts with multiple commands into multiple batches. > 2. Only Data Definition Language (DDL) and Data Manipulation Language (DML) statements that return a simple update count can be run as part of a batch. Learn more from [Performing batch operations](/sql/connect/jdbc/performing-batch-operations) + +* Enable incremental extract: Use this option to tell ADF to only process rows that have changed since the last time that the pipeline executed. + +* Incremental date column: When using the incremental extract feature, you must choose the date/time column that you wish to use as the watermark in your source table. + +* Start reading from beginning: Setting this option with incremental extract will instruct ADF to read all rows on first execution of a pipeline with incremental extract turned on. 
+ #### Azure Database for PostgreSQL sink script example When you use Azure Database for PostgreSQL as sink type, the associated data flow script is: diff --git a/articles/data-factory/data-flow-conditional-split.md b/articles/data-factory/data-flow-conditional-split.md index e259b3378bd85..0a1b34aeac783 100644 --- a/articles/data-factory/data-flow-conditional-split.md +++ b/articles/data-factory/data-flow-conditional-split.md @@ -9,7 +9,7 @@ ms.service: data-factory ms.subservice: data-flows ms.topic: conceptual ms.custom: synapse -ms.date: 01/20/2022 +ms.date: 05/12/2022 --- # Conditional split transformation in mapping data flow @@ -46,7 +46,7 @@ Use the data flow expression builder to enter an expression for the split condit ### Example -The below example is a conditional split transformation named `SplitByYear` that takes in incoming stream `CleanData`. This transformation has two split conditions `year < 1960` and `year > 1980`. `disjoint` is false because the data goes to the first matching condition. Every row matching the first condition goes to output stream `moviesBefore1960`. All remaining rows matching the second condition go to output stream `moviesAFter1980`. All other rows flow through the default stream `AllOtherMovies`. +The below example is a conditional split transformation named `SplitByYear` that takes in incoming stream `CleanData`. This transformation has two split conditions `year < 1960` and `year > 1980`. `disjoint` is false because the data goes to the first matching condition rather than all matching conditions. Every row matching the first condition goes to output stream `moviesBefore1960`. All remaining rows matching the second condition go to output stream `moviesAFter1980`. All other rows flow through the default stream `AllOtherMovies`. In the service UI, this transformation looks like the below image: diff --git a/articles/data-factory/tutorial-data-flow-delta-lake.md b/articles/data-factory/tutorial-data-flow-delta-lake.md index b56604fa3510e..9d2dfa81e6aa4 100644 --- a/articles/data-factory/tutorial-data-flow-delta-lake.md +++ b/articles/data-factory/tutorial-data-flow-delta-lake.md @@ -51,7 +51,7 @@ In this step, you'll create a pipeline that contains a data flow activity. 1. On the home page, select **Orchestrate**. - :::image type="content" source="./media/doc-common-process/get-started-page.png" alt-text="Screenshot that shows the ADF home page."::: + :::image type="content" source="./media/tutorial-data-flow/orchestrate.png" alt-text="Screenshot that shows the ADF home page."::: 1. In the **General** tab for the pipeline, enter **DeltaLake** for **Name** of the pipeline. 1. In the **Activities** pane, expand the **Move and Transform** accordion. Drag and drop the **Data Flow** activity from the pane to the pipeline canvas. diff --git a/articles/hdinsight/hdinsight-hadoop-customize-cluster-linux.md b/articles/hdinsight/hdinsight-hadoop-customize-cluster-linux.md index 92bfb6b41a698..1bc23324f2331 100644 --- a/articles/hdinsight/hdinsight-hadoop-customize-cluster-linux.md +++ b/articles/hdinsight/hdinsight-hadoop-customize-cluster-linux.md @@ -18,36 +18,42 @@ Script actions can also be published to the Azure Marketplace as an HDInsight ap A script action is Bash script that runs on the nodes in an HDInsight cluster. Characteristics and features of script actions are as follows: - The Bash script URI (the location to access the file) has to be accessible from the HDInsight resource provider and the cluster. 
+ - The following are possible storage locations: - - For regular (non-ESP) clusters: - - A blob in an Azure Storage account that's either the primary or additional storage account for the HDInsight cluster. HDInsight is granted access to both of these types of storage accounts during cluster creation. - - > [!IMPORTANT] - > Do not rotate the storage key on this Azure Storage account, as it will cause subsequent script actions with scripts stored there to fail. + - For regular (non-ESP) clusters: + + - A blob in an Azure Storage account that's either the primary or additional storage account for the HDInsight cluster. HDInsight is granted access to both of these types of storage accounts during cluster creation. + + > [!IMPORTANT] + > Do not rotate the storage key on this Azure Storage account, as it will cause subsequent script actions with scripts stored there to fail. - - Data Lake Storage Gen1: The service principal HDInsight uses to access Data Lake Storage must have read access to the script. The Bash script URI format is `adl://DATALAKESTOREACCOUNTNAME.azuredatalakestore.net/path_to_file`. + - Data Lake Storage Gen1: The service principal HDInsight uses to access Data Lake Storage must have read access to the script. The Bash script URI format is `adl://DATALAKESTOREACCOUNTNAME.azuredatalakestore.net/path_to_file`. - - Data Lake Storage Gen2 is not recommended to use for script actions. `abfs://` is not supported for the Bash script URI. `https://` URIs are possible, but those work for containers that have public access, and the firewall open for the HDInsight Resource Provider, and therefore is not recommended. + - Data Lake Storage Gen2 is not recommended to use for script actions. `abfs://` is not supported for the Bash script URI. `https://` URIs are possible, but those work for containers that have public access, and the firewall open for the HDInsight Resource Provider, and therefore is not recommended. - - A public file-sharing service accessible through `https://` paths. Examples are Azure Blob, GitHub, or OneDrive. For example URIs, see [Example script action scripts](#example-script-action-scripts). + - A public file-sharing service accessible through `https://` paths. Examples are Azure Blob, GitHub, or OneDrive. For example URIs, see [Example script action scripts](#example-script-action-scripts). - For clusters with ESP, the `wasb://` or `wasbs://` or `http[s]://` URIs are supported. - The script actions can be restricted to run on only certain node types. Examples are head nodes or worker nodes. + - The script actions can be persisted or *ad hoc*. - Persisted script actions must have a unique name. Persisted scripts are used to customize new worker nodes added to the cluster through scaling operations. A persisted script might also apply changes to another node type when scaling operations occur. An example is a head node. - *Ad hoc* scripts aren't persisted. Script actions used during cluster creation are automatically persisted. They aren't applied to worker nodes added to the cluster after the script has run. Then you can promote an *ad hoc* script to a persisted script or demote a persisted script to an *ad hoc* script. Scripts that fail aren't persisted, even if you specifically indicate that they should be. - Script actions can accept parameters that are used by the script during execution. + - Script actions run with root-level privileges on the cluster nodes. 
+ - Script actions can be used through the Azure portal, Azure PowerShell, Azure CLI, or HDInsight .NET SDK. + - Script actions that remove or modify service files on the VM may impact service health and availability. The cluster keeps a history of all scripts that have been run. The history helps when you need to find the ID of a script for promotion or demotion operations. -> [!IMPORTANT] +> [!IMPORTANT] > There's no automatic way to undo the changes made by a script action. Either manually reverse the changes or provide a script that reverses them. ## Permissions @@ -83,7 +89,6 @@ Script actions used during cluster creation are slightly different from script a The following diagram illustrates when script action runs during the creation process: - :::image type="content" source="./media/hdinsight-hadoop-customize-cluster-linux/cluster-provisioning-states.png" alt-text="Stages during cluster creation" border="false"::: The script runs while HDInsight is being configured. The script runs in parallel on all the specified nodes in the cluster. It runs with root privileges on the nodes. @@ -92,9 +97,12 @@ You can do operations like stopping and starting services, including Apache Hado During cluster creation, you can use many script actions at once. These scripts are invoked in the order in which they were specified. -> [!IMPORTANT] +> [!NOTE] +> If the script is present in any other storage account other than what is specified as cluster storage (at cluster create time), that will need a public access. + +> [!IMPORTANT] > Script actions must finish within 60 minutes, or they time out. During cluster provisioning, the script runs concurrently with other setup and configuration processes. Competition for resources such as CPU time or network bandwidth might cause the script to take longer to finish than it does in your development environment. -> +> > To minimize the time it takes to run the script, avoid tasks like downloading and compiling applications from the source. Precompile applications and store the binary in Azure Storage. ### Script action on a running cluster @@ -112,7 +120,7 @@ EndTime : 8/14/2017 7:41:05 PM Status : Succeeded ``` -> [!IMPORTANT] +> [!IMPORTANT] > If you change the cluster user, admin, password after the cluster is created, script actions run against this cluster might fail. If you have any persisted script actions that target worker nodes, these scripts might fail when you scale the cluster. ## Example script action scripts @@ -155,7 +163,7 @@ This section explains the different ways you can use script actions when you cre | Bash script URI |Specify the URI of the script. | | Head/Worker/ZooKeeper |Specify the nodes on which the script is run: **Head**, **Worker**, or **ZooKeeper**. | | Parameters |Specify the parameters, if required by the script. | - + Use the __Persist this script action__ entry to make sure that the script is applied during scaling operations. 1. Select __Create__ to save the script. Then you can use __+ Submit new__ to add another script. @@ -228,7 +236,7 @@ This section explains how to apply script actions on a running cluster. | Bash script URI |Specify the URI of the script. | | Head/Worker/Zookeeper |Specify the nodes on which the script is run: **Head**, **Worker**, or **ZooKeeper**. | | Parameters |Specify the parameters, if required by the script. | - + Use the __Persist this script action__ entry to make sure the script is applied during scaling operations. 1. Finally, select the **Create** button to apply the script to the cluster. 
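The same operation can also be scripted. For example, with the Azure CLI, a script action can be applied to a running cluster along these lines (a sketch: the resource group, cluster name, script name, and script URI are placeholder values to replace for your environment):

```azurecli
# Apply a script action to the head and worker nodes of a running cluster.
# --persist-on-success keeps the script so it also runs on worker nodes added later by scaling.
az hdinsight script-action execute \
    --resource-group MyResourceGroup \
    --cluster-name MyCluster \
    --name install-component \
    --script-uri https://<storage-account>.blob.core.windows.net/scripts/my-script.sh \
    --roles headnode workernode \
    --persist-on-success
```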
@@ -324,10 +332,10 @@ The following example script demonstrates using the cmdlets to promote and then ### HDInsight .NET SDK -For an example of using the .NET SDK to retrieve script history from a cluster, promote or demote scripts, see [ -Apply a Script Action against a running Linux-based HDInsight cluster](https://github.com/Azure-Samples/hdinsight-dotnet-script-action). +For an example of using the .NET SDK to retrieve script history from a cluster, promote or demote scripts, see +Apply a Script Action against a running Linux-based HDInsight cluster. -> [!NOTE] +> [!NOTE] > This example also demonstrates how to install an HDInsight application by using the .NET SDK. ## Next steps diff --git a/articles/hdinsight/hdinsight-sales-insights-etl.md b/articles/hdinsight/hdinsight-sales-insights-etl.md index 9469f794a337f..7b8fc544221b0 100644 --- a/articles/hdinsight/hdinsight-sales-insights-etl.md +++ b/articles/hdinsight/hdinsight-sales-insights-etl.md @@ -4,7 +4,7 @@ description: Learn how to use create ETL pipelines with Azure HDInsight to deriv ms.service: hdinsight ms.topic: tutorial ms.custom: hdinsightactive -ms.date: 04/15/2020 +ms.date: 05/13/2022 --- # Tutorial: Create an end-to-end data pipeline to derive sales insights in Azure HDInsight diff --git a/articles/hdinsight/kafka/tutorial-cli-rest-proxy.md b/articles/hdinsight/kafka/tutorial-cli-rest-proxy.md index ace01a642e64e..6822090a7e14d 100644 --- a/articles/hdinsight/kafka/tutorial-cli-rest-proxy.md +++ b/articles/hdinsight/kafka/tutorial-cli-rest-proxy.md @@ -3,7 +3,7 @@ title: 'Tutorial: Create an Apache Kafka REST proxy enabled cluster in HDInsight description: Learn how to perform Apache Kafka operations using a Kafka REST proxy on Azure HDInsight. ms.service: hdinsight ms.topic: tutorial -ms.date: 02/27/2020 +ms.date: 05/13/2022 ms.custom: devx-track-azurecli --- diff --git a/articles/hdinsight/spark/safely-manage-jar-dependency.md b/articles/hdinsight/spark/safely-manage-jar-dependency.md index 0708c6af149a3..e232506b1dd25 100644 --- a/articles/hdinsight/spark/safely-manage-jar-dependency.md +++ b/articles/hdinsight/spark/safely-manage-jar-dependency.md @@ -4,7 +4,7 @@ description: This article discusses best practices for managing Java Archive (JA ms.custom: hdinsightactive ms.service: hdinsight ms.topic: how-to -ms.date: 02/05/2020 +ms.date: 05/13/2022 --- # Safely manage jar dependencies @@ -73,4 +73,4 @@ Then you can run `sbt clean` and `sbt assembly` to build the shaded jar file. 
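For example, from the root of the project the build is produced with:

```bash
# Remove previous build output, then create the shaded (uber) jar with sbt-assembly
sbt clean
sbt assembly
```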
* [Use HDInsight IntelliJ Tools](../hadoop/apache-hadoop-visual-studio-tools-get-started.md) -* [Create a Scala Maven application for Spark in IntelliJ](./apache-spark-create-standalone-application.md) \ No newline at end of file +* [Create a Scala Maven application for Spark in IntelliJ](./apache-spark-create-standalone-application.md) diff --git a/articles/hdinsight/storm/apache-troubleshoot-storm.md b/articles/hdinsight/storm/apache-troubleshoot-storm.md index edd84151fcb39..cf3c5323946ca 100644 --- a/articles/hdinsight/storm/apache-troubleshoot-storm.md +++ b/articles/hdinsight/storm/apache-troubleshoot-storm.md @@ -4,7 +4,7 @@ description: Get answers to common questions about using Apache Storm with Azure keywords: Azure HDInsight, Storm, FAQ, troubleshooting guide, common problems ms.service: hdinsight ms.topic: troubleshooting -ms.date: 11/08/2019 +ms.date: 05/13/2022 ms.custom: seodec18 --- @@ -180,4 +180,4 @@ If you didn't see your problem or are unable to solve your issue, visit one of t - Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts. -- If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). \ No newline at end of file +- If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). diff --git a/articles/iot-edge/how-to-create-test-certificates.md b/articles/iot-edge/how-to-create-test-certificates.md index 0c625e5d84335..1408313aad9dc 100644 --- a/articles/iot-edge/how-to-create-test-certificates.md +++ b/articles/iot-edge/how-to-create-test-certificates.md @@ -76,7 +76,7 @@ In this section, you clone the IoT Edge repo and execute the scripts. mkdir wrkdir cd .\wrkdir\ cp ..\iotedge\tools\CACertificates\*.cnf . - cp ..\iotedge\tools\CACertificates\certGen.sh . + cp ..\iotedge\tools\CACertificates\ca-certs.ps1 . ``` If you downloaded the repo as a ZIP, then the folder name is `iotedge-master` and the rest of the path is the same. 
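For example, if you extracted the ZIP, the copy commands reference the `iotedge-master` folder instead:

```powershell
# Copy the certificate generation artifacts from the extracted ZIP folder (iotedge-master)
cp ..\iotedge-master\tools\CACertificates\*.cnf .
cp ..\iotedge-master\tools\CACertificates\ca-certs.ps1 .
```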
@@ -408,4 +408,4 @@ The certificates in this section are for the steps in the IoT Hub X.509 certific * `certs/iot-device--full-chain.cert.pem` * `private/iot-device-.key.pem` ---- \ No newline at end of file +--- diff --git a/articles/key-vault/secrets/overview-storage-keys-powershell.md b/articles/key-vault/secrets/overview-storage-keys-powershell.md index f76c4e943da60..710e197ef8a77 100644 --- a/articles/key-vault/secrets/overview-storage-keys-powershell.md +++ b/articles/key-vault/secrets/overview-storage-keys-powershell.md @@ -11,7 +11,10 @@ ms.custom: devx-track-azurepowershell # Customer intent: As a developer I want storage credentials and SAS tokens to be managed securely by Azure Key Vault. --- -# Manage storage account keys with Key Vault and Azure PowerShell +# Manage storage account keys with Key Vault and Azure PowerShell (legacy) +> [!IMPORTANT] +> Key Vault Managed Storage Account Keys (legacy) is supported as-is with no more updates planned. Only Account SAS are supported with SAS definitions signed storage service version no later than 2018-03-28. + > [!IMPORTANT] > We recommend using Azure Storage integration with Azure Active Directory (Azure AD), Microsoft's cloud-based identity and access management service. Azure AD integration is available for [Azure blobs and queues](../../storage/blobs/authorize-access-azure-active-directory.md), and provides OAuth2 token-based access to Azure Storage (just like Azure Key Vault). > Azure AD allows you to authenticate your client application by using an application or user identity, instead of storage account credentials. You can use an [Azure AD managed identity](../../active-directory/managed-identities-azure-resources/index.yml) when you run on Azure. Managed identities remove the need for client authentication and storing credentials in or with your application. Use below solution only when Azure AD authentication is not possible. @@ -159,10 +162,10 @@ Tags : ### Enable key regeneration -If you want Key Vault to regenerate your storage account keys periodically, you can use the Azure PowerShell [Add-AzKeyVaultManagedStorageAccount](/powershell/module/az.keyvault/add-azkeyvaultmanagedstorageaccount) cmdlet to set a regeneration period. In this example, we set a regeneration period of three days. When it is time to rotate, Key Vault regenerates the key that is not active, and then sets the newly created key as active. Only one of the keys are used to issue SAS tokens at any one time. This is the active key. +If you want Key Vault to regenerate your storage account keys periodically, you can use the Azure PowerShell [Add-AzKeyVaultManagedStorageAccount](/powershell/module/az.keyvault/add-azkeyvaultmanagedstorageaccount) cmdlet to set a regeneration period. In this example, we set a regeneration period of thirty days. When it is time to rotate, Key Vault regenerates the key that is not active, and then sets the newly created key as active. Only one of the keys are used to issue SAS tokens at any one time. This is the active key. 
```azurepowershell-interactive -$regenPeriod = [System.Timespan]::FromDays(3) +$regenPeriod = [System.Timespan]::FromDays(30) Add-AzKeyVaultManagedStorageAccount -VaultName $keyVaultName -AccountName $storageAccountName -AccountResourceId $storageAccount.Id -ActiveKeyName $storageAccountKey -RegenerationPeriod $regenPeriod ``` @@ -176,7 +179,7 @@ AccountName : sacontoso Account Resource Id : /subscriptions/03f0blll-ce69-483a-a092-d06ea46dfb8z/resourceGroups/rgContoso/providers/Microsoft.Storage/storageAccounts/sacontoso Active Key Name : key1 Auto Regenerate Key : True -Regeneration Period : 3.00:00:00 +Regeneration Period : 30.00:00:00 Enabled : True Created : 11/19/2018 11:54:47 PM Updated : 11/19/2018 11:54:47 PM @@ -190,7 +193,6 @@ You can also ask Key Vault to generate shared access signature tokens. A shared The commands in this section complete the following actions: - Set an account shared access signature definition. -- Create an account shared access signature token for Blob, File, Table, and Queue services. The token is created for resource types Service, Container, and Object. The token is created with all permissions, over https, and with the specified start and end dates. - Set a Key Vault managed storage shared access signature definition in the vault. The definition has the template URI of the shared access signature token that was created. The definition has the shared access signature type `account` and is valid for N days. - Verify that the shared access signature was saved in your key vault as a secret. @@ -207,28 +209,36 @@ $keyVaultName = $storageContext = New-AzStorageContext -StorageAccountName $storageAccountName -Protocol Https -StorageAccountKey Key1 #(or "Primary" for Classic Storage Account) ``` -### Create a shared access signature token +### Define a shared access signature definition template -Create a shared access signature definition using the Azure PowerShell [New-AzStorageAccountSASToken](/powershell/module/az.storage/new-azstorageaccountsastoken) cmdlets. +Key Vault uses SAS definition template to generate tokens for client applications. -```azurepowershell-interactive -$start = [System.DateTime]::Now.AddDays(-1) -$end = [System.DateTime]::Now.AddMonths(1) +#### Account SAS parameters required in SAS definition template for Key Vault +|SAS Query Parameter|Description| +|-------------------------|-----------------| +|`SignedVersion (sv)`|Required. Specifies the signed storage service version to use to authorize requests made with this account SAS. Must be set to version 2015-04-05 or later. **Key Vault supports versions no later than 2018-03-28**| +|`SignedServices (ss)`|Required. Specifies the signed services accessible with the account SAS. Possible values include:

- Blob (`b`)
- Queue (`q`)
- Table (`t`)
- File (`f`)

You can combine values to provide access to more than one service. For example, `ss=bf` specifies access to the Blob and File endpoints.| +|`SignedResourceTypes (srt)`|Required. Specifies the signed resource types that are accessible with the account SAS.

- Service (`s`): Access to service-level APIs (*e.g.*, Get/Set Service Properties, Get Service Stats, List Containers/Queues/Tables/Shares)
- Container (`c`): Access to container-level APIs (*e.g.*, Create/Delete Container, Create/Delete Queue, Create/Delete Table, Create/Delete Share, List Blobs/Files and Directories)
- Object (`o`): Access to object-level APIs for blobs, queue messages, table entities, and files (*e.g.*, Put Blob, Query Entity, Get Messages, Create File, etc.)


You can combine values to provide access to more than one resource type. For example, `srt=sc` specifies access to service and container resources.| +|`SignedPermission (sp)`|Required. Specifies the signed permissions for the account SAS. Permissions are only valid if they match the specified signed resource type; otherwise they are ignored.

- Read (`r`): Valid for all signed resource types (Service, Container, and Object). Permits read permissions to the specified resource type.

- Write (`w`): Valid for all signed resource types (Service, Container, and Object). Permits write permissions to the specified resource type.

- Delete (`d`): Valid for Container and Object resource types, except for queue messages.
- Permanent Delete (`y`): Valid for Object resource type of Blob only.
- List (`l`): Valid for Service and Container resource types only.
- Add (`a`): Valid for the following Object resource types only: queue messages, table entities, and append blobs.
- Create (`c`): Valid for the following Object resource types only: blobs and files. Users can create new blobs or files, but may not overwrite existing blobs or files.
- Update (`u`): Valid for the following Object resource types only: queue messages and table entities.
- Process (`p`): Valid for the following Object resource type only: queue messages.
- Tag (`t`): Valid for the following Object resource type only: blobs. Permits blob tag operations.
- Filter (`f`): Valid for the following Object resource type only: blob. Permits filtering by blob tag.
- Set Immutability Policy (`i`): Valid for the following Object resource type only: blob. Permits set/delete immutability policy and legal hold on a blob.| +|`SignedProtocol (spr)`|Optional. Specifies the protocol permitted for a request made with the account SAS. Possible values are both HTTPS and HTTP (`https,http`) or HTTPS only (`https`). The default value is `https,http`.

Note that HTTP only is not a permitted value.| -$sasToken = New-AzStorageAccountSasToken -Service blob,file,Table,Queue -ResourceType Service,Container,Object -Permission "racwdlup" -Protocol HttpsOnly -StartTime $start -ExpiryTime $end -Context $storageContext +SAS definition template example: +```azurepowershell-interactive +$sasTemplate="sv=2018-03-28&ss=bfqt&srt=sco&sp=rw&spr=https" ``` -The value of $sasToken will look similar to this. -```console -?sv=2018-11-09&sig=5GWqHFkEOtM7W9alOgoXSCOJO%2B55qJr4J7tHQjCId9S%3D&spr=https&st=2019-09-18T18%3A25%3A00Z&se=2019-10-19T18%3A25%3A00Z&srt=sco&ss=bfqt&sp=racupwdl -``` +For more information about account SAS, see: +[Create an account SAS](https://docs.microsoft.com/rest/api/storageservices/create-account-sas) + +> [!NOTE] +> Key Vault ignores lifetime parameters like 'Signed Expiry', 'Signed Start' and parameters introduced after 2018-03-28 version -### Generate a shared access signature definition +### Set shared access signature definition in Key Vault Use the the Azure PowerShell [Set-AzKeyVaultManagedStorageSasDefinition](/powershell/module/az.keyvault/set-azkeyvaultmanagedstoragesasdefinition) cmdlet to create a shared access signature definition. You can provide the name of your choice to the `-Name` parameter. ```azurepowershell-interactive -Set-AzKeyVaultManagedStorageSasDefinition -AccountName $storageAccountName -VaultName $keyVaultName -Name -TemplateUri $sasToken -SasType 'account' -ValidityPeriod ([System.Timespan]::FromDays(30)) +Set-AzKeyVaultManagedStorageSasDefinition -AccountName $storageAccountName -VaultName $keyVaultName -Name -TemplateUri $sasTemplate -SasType 'account' -ValidityPeriod ([System.Timespan]::FromDays(1)) ``` ### Verify the shared access signature definition diff --git a/articles/key-vault/secrets/overview-storage-keys.md b/articles/key-vault/secrets/overview-storage-keys.md index 41f2b9bc4bcfa..08e8535fd340f 100644 --- a/articles/key-vault/secrets/overview-storage-keys.md +++ b/articles/key-vault/secrets/overview-storage-keys.md @@ -12,7 +12,11 @@ ms.custom: devx-track-azurecli # Customer intent: As a developer, I want to use Azure Key Vault and Azure CLI for secure management of my storage credentials and shared access signature tokens. --- -# Manage storage account keys with Key Vault and the Azure CLI +# Manage storage account keys with Key Vault and the Azure CLI (legacy) + +> [!IMPORTANT] +> Key Vault Managed Storage Account Keys (legacy) is supported as-is with no more updates planned. Only Account SAS are supported with SAS definitions signed storage service version no later than 2018-03-28. + > [!IMPORTANT] > We recommend using Azure Storage integration with Azure Active Directory (Azure AD), Microsoft's cloud-based identity and access management service. Azure AD integration is available for [Azure blobs and queues](../../storage/blobs/authorize-access-azure-active-directory.md), and provides OAuth2 token-based access to Azure Storage (just like Azure Key Vault). > Azure AD allows you to authenticate your client application by using an application or user identity, instead of storage account credentials. You can use an [Azure AD managed identity](../../active-directory/managed-identities-azure-resources/index.yml) when you run on Azure. Managed identities remove the need for client authentication and storing credentials in or with your application. Use below solution only when Azure AD authentication is not possible. 
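For example, once your identity has a suitable data-plane role on the storage account (such as Storage Blob Data Reader), a client can list blobs with Azure AD credentials instead of an account key or SAS token:

```azurecli-interactive
# List blobs by authorizing with your Azure AD sign-in rather than the account key
az storage blob list \
    --account-name <storage-account-name> \
    --container-name <container-name> \
    --auth-mode login
```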
@@ -33,7 +37,7 @@ When you use the managed storage account key feature, consider the following poi ## Service principal application ID -An Azure AD tenant provides each registered application with a [service principal](../../active-directory/develop/developer-glossary.md#service-principal-object). The service principal serves as the Application ID, which is used during authorization setup for access to other Azure resources via Azure RBAC. +An Azure AD tenant provides each registered application with a [service principal](../../active-directory/develop/developer-glossary.md#service-principal-object). The service principal serves as the Application ID, which is used during authorization setup for access to other Azure resources via Azure role-base access control (Azure RBAC). Key Vault is a Microsoft application that's pre-registered in all Azure AD tenants. Key Vault is registered under the same Application ID in each Azure cloud. @@ -45,7 +49,7 @@ Key Vault is a Microsoft application that's pre-registered in all Azure AD tenan ## Prerequisites -To complete this guide, you must first do the following: +To complete this guide, you must first do the following steps: - [Install the Azure CLI](/cli/azure/install-azure-cli). - [Create a key vault](quick-create-cli.md) @@ -66,8 +70,8 @@ az login Use the Azure CLI [az role assignment create](/cli/azure/role/assignment) command to give Key Vault access your storage account. Provide the command the following parameter values: - `--role`: Pass the "Storage Account Key Operator Service Role" Azure role. This role limits the access scope to your storage account. For a classic storage account, pass "Classic Storage Account Key Operator Service Role" instead. -- `--assignee`: Pass the value "https://vault.azure.net", which is the url for Key Vault in the Azure public cloud. (For Azure Goverment cloud use '--assignee-object-id' instead, see [Service principal application ID](#service-principal-application-id).) -- `--scope`: Pass your storage account resource ID, which is in the form `/subscriptions//resourceGroups//providers/Microsoft.Storage/storageAccounts/`. To find your subscription ID, use the Azure CLI [az account list](/cli/azure/account?#az-account-list) command; to find your storage account name and storage account resource group, use the Azure CLI [az storage account list](/cli/azure/storage/account?#az-storage-account-list) command. +- `--assignee`: Pass the value "https://vault.azure.net", which is the url for Key Vault in the Azure public cloud. (For Azure Government cloud use '--assignee-object-id' instead, see [Service principal application ID](#service-principal-application-id).) +- `--scope`: Pass your storage account resource ID, which is in the form `/subscriptions//resourceGroups//providers/Microsoft.Storage/storageAccounts/`. Find your subscription ID, by using the Azure CLI [az account list](/cli/azure/account?#az-account-list) command. Find your storage account name and storage account resource group, by using the Azure CLI [az storage account list](/cli/azure/storage/account?#az-storage-account-list) command. 
```azurecli-interactive az role assignment create --role "Storage Account Key Operator Service Role" --assignee "https://vault.azure.net" --scope "/subscriptions//resourceGroups//providers/Microsoft.Storage/storageAccounts/" @@ -85,14 +89,14 @@ az keyvault set-policy --name --upn user@domain.com --storage Note that permissions for storage accounts aren't available on the storage account "Access policies" page in the Azure portal. ### Create a Key Vault Managed storage account - Create a Key Vault managed storage account using the Azure CLI [az keyvault storage](/cli/azure/keyvault/storage?#az-keyvault-storage-add) command. Set a regeneration period of 90 days. When it is time to rotate, KeyVault regenerates the key that is not active, and then sets the newly created key as active. Only one of the keys are used to issue SAS tokens at any one time, this is the active key. Provide the command the following parameter values: + Create a Key Vault managed storage account using the Azure CLI [az keyvault storage](/cli/azure/keyvault/storage?#az-keyvault-storage-add) command. Set a regeneration period of 30 days. When it is time to rotate, KeyVault regenerates the key that is not active, and then sets the newly created key as active. Only one of the keys are used to issue SAS tokens at any one time, this is the active key. Provide the command the following parameter values: - `--vault-name`: Pass the name of your key vault. To find the name of your key vault, use the Azure CLI [az keyvault list](/cli/azure/keyvault?#az-keyvault-list) command. - `-n`: Pass the name of your storage account. To find the name of your storage account, use the Azure CLI [az storage account list](/cli/azure/storage/account?#az-storage-account-list) command. -- `--resource-id`: Pass your storage account resource ID, which is in the form `/subscriptions//resourceGroups//providers/Microsoft.Storage/storageAccounts/`. To find your subscription ID, use the Azure CLI [az account list](/cli/azure/account?#az-account-list) command; to find your storage account name and storage account resource group, use the Azure CLI [az storage account list](/cli/azure/storage/account?#az-storage-account-list) command. +- `--resource-id`: Pass your storage account resource ID, which is in the form `/subscriptions//resourceGroups//providers/Microsoft.Storage/storageAccounts/`. Find your subscription ID, by using the Azure CLI [az account list](/cli/azure/account?#az-account-list) command. Find your storage account name and storage account resource group, by using the Azure CLI [az storage account list](/cli/azure/storage/account?#az-storage-account-list) command. ```azurecli-interactive -az keyvault storage add --vault-name -n --active-key-name key1 --auto-regenerate-key --regeneration-period P90D --resource-id "/subscriptions//resourceGroups//providers/Microsoft.Storage/storageAccounts/" +az keyvault storage add --vault-name -n --active-key-name key1 --auto-regenerate-key --regeneration-period P30D --resource-id "/subscriptions//resourceGroups//providers/Microsoft.Storage/storageAccounts/" ``` ## Shared access signature tokens @@ -102,29 +106,39 @@ You can also ask Key Vault to generate shared access signature tokens. A shared The commands in this section complete the following actions: - Set an account shared access signature definition ``. The definition is set on a Key Vault managed storage account `` in your key vault ``. -- Create an account shared access signature token for Blob, File, Table, and Queue services. 
The token is created for resource types Service, Container, and Object. The token is created with all permissions, over https, and with the specified start and end dates. - Set a Key Vault managed storage shared access signature definition in the vault. The definition has the template URI of the shared access signature token that was created. The definition has the shared access signature type `account` and is valid for N days. - Verify that the shared access signature was saved in your key vault as a secret. -### Create a shared access signature token +### Define a shared access signature definition template -Create a shared access signature definition using the Azure CLI [az storage account generate-sas](/cli/azure/storage/account?#az-storage-account-generate-sas) command. This operation requires the `storage` and `setsas` permissions. +Key Vault uses SAS definition template to generate tokens for client applications. +#### Account SAS parameters required in SAS definition template for Key Vault -```azurecli-interactive -az storage account generate-sas --expiry 2020-01-01 --permissions rw --resource-types sco --services bfqt --https-only --account-name --account-key 00000000 -``` -After the operation runs successfully, copy the output. +|SAS Query Parameter|Description| +|-------------------------|-----------------| +|`SignedVersion (sv)`|Required. Specifies the signed storage service version to use to authorize requests made with this account SAS. Must be set to version 2015-04-05 or later. **Key Vault supports versions no later than 2018-03-28**| +|`SignedServices (ss)`|Required. Specifies the signed services accessible with the account SAS. Possible values include:

- Blob (`b`)
- Queue (`q`)
- Table (`t`)
- File (`f`)

You can combine values to provide access to more than one service. For example, `ss=bf` specifies access to the Blob and File endpoints.| +|`SignedResourceTypes (srt)`|Required. Specifies the signed resource types that are accessible with the account SAS.

- Service (`s`): Access to service-level APIs (*for example*, Get/Set Service Properties, Get Service Stats, List Containers/Queues/Tables/Shares)
- Container (`c`): Access to container-level APIs (*for example*, Create/Delete Container, Create/Delete Queue, Create/Delete Table, Create/Delete Share, List Blobs/Files and Directories)
- Object (`o`): Access to object-level APIs for blobs, queue messages, table entities, and files (*for example,* Put Blob, Query Entity, Get Messages, Create File, etc.)


You can combine values to provide access to more than one resource type. For example, `srt=sc` specifies access to service and container resources.| +|`SignedPermission (sp)`|Required. Specifies the signed permissions for the account SAS. Permissions are only valid if they match the specified signed resource type; otherwise they're ignored.

- Read (`r`): Valid for all signed resource types (Service, Container, and Object). Permits read permissions to the specified resource type.

- Write (`w`): Valid for all signed resource types (Service, Container, and Object). Permits write permissions to the specified resource type.

- Delete (`d`): Valid for Container and Object resource types, except for queue messages.
- Permanent Delete (`y`): Valid for Object resource type of Blob only.
- List (`l`): Valid for Service and Container resource types only.
- Add (`a`): Valid for the following Object resource types only: queue messages, table entities, and append blobs.
- Create (`c`): Valid for the following Object resource types only: blobs and files. Users can create new blobs or files, but may not overwrite existing blobs or files.
- Update (`u`): Valid for the following Object resource types only: queue messages and table entities.
- Process (`p`): Valid for the following Object resource type only: queue messages.
- Tag (`t`): Valid for the following Object resource type only: blobs. Permits blob tag operations.
- Filter (`f`): Valid for the following Object resource type only: blob. Permits filtering by blob tag.
- Set Immutability Policy (`i`): Valid for the following Object resource type only: blob. Permits set/delete immutability policy and legal hold on a blob.| +|`SignedProtocol (spr)`|Optional. Specifies the protocol permitted for a request made with the account SAS. Possible values are both HTTPS and HTTP (`https,http`) or HTTPS only (`https`). The default value is `https,http`.

Note that HTTP only isn't a permitted value.| +SAS definition template example: ```console -"se=2020-01-01&sp=***" +"sv=2018-03-28&ss=bfqt&srt=sco&sp=rw&spr=https" ``` -This output will be the passed to the `--template-uri` parameter in the next step. +SAS definition template will be the passed to the `--template-uri` parameter in the next step. + +For more information about account SAS, see: +[Create an account SAS](https://docs.microsoft.com/rest/api/storageservices/create-account-sas) + +> [!NOTE] +> Key Vault ignores lifetime parameters like 'Signed Expiry', 'Signed Start' and parameters introduced after 2018-03-28 version -### Generate a shared access signature definition +### Set shared access signature definition in Key Vault -Use the the Azure CLI [az keyvault storage sas-definition create](/cli/azure/keyvault/storage/sas-definition?#az-keyvault-storage-sas-definition-create) command, passing the output from the previous step to the `--template-uri` parameter, to create a shared access signature definition. You can provide the name of your choice to the `-n` parameter. +Use the Azure CLI [az keyvault storage sas-definition create](/cli/azure/keyvault/storage/sas-definition?#az-keyvault-storage-sas-definition-create) command, passing the SAS definition template from the previous step to the `--template-uri` parameter, to create a shared access signature definition. You can provide the name of your choice to the `-n` parameter. ```azurecli-interactive az keyvault storage sas-definition create --vault-name --account-name -n --validity-period P2D --sas-type account --template-uri diff --git a/articles/lab-services/administrator-guide.md b/articles/lab-services/administrator-guide.md index ae6efdc6253d4..32e4284003d7c 100644 --- a/articles/lab-services/administrator-guide.md +++ b/articles/lab-services/administrator-guide.md @@ -163,6 +163,7 @@ For information on VM sizes and their cost, see the [Azure Lab Services Pricing] | Medium (nested virtualization) | 4 | 16 | [Standard_D4s_v4](../virtual-machines/dv4-dsv4-series.md) | Best suited for relational databases, in-memory caching, and analytics. This size also supports nested virtualization. | Large | 8 | 16 | [Standard_F8s_v2](../virtual-machines/fsv2-series.md) | Best suited for applications that need faster CPUs, better local disk performance, large databases, large memory caches. | | Large (nested virtualization) | 8 | 32 | [Standard_D8s_v4](../virtual-machines/dv4-dsv4-series.md) | Best suited for applications that need faster CPUs, better local disk performance, large databases, large memory caches. This size also supports nested virtualization. | +| Small GPU (Compute) | 6 | 112 | [Standard_NC6s_v3](../virtual-machines/ncv3-series.md) | Best suited for computer-intensive applications such as AI and deep learning. | | Small GPU (visualization) | 8 | 28 | [Standard_NVas_v4](../virtual-machines/nvv4-series.md) **Windows only* | Best suited for remote visualization, streaming, gaming, and encoding using frameworks such as OpenGL and DirectX. | | Medium GPU (visualization) | 12 | 112 | [Standard_NV12s_v3](../virtual-machines/nvv3-series.md) | Best suited for remote visualization, streaming, gaming, and encoding using frameworks such as OpenGL and DirectX. 
| diff --git a/articles/machine-learning/how-to-configure-network-isolation-with-v2.md b/articles/machine-learning/how-to-configure-network-isolation-with-v2.md new file mode 100644 index 0000000000000..d3e3778f8d69d --- /dev/null +++ b/articles/machine-learning/how-to-configure-network-isolation-with-v2.md @@ -0,0 +1,102 @@ +--- +title: Network isolation change with our new API platform on Azure Resource Manager +titleSuffix: Azure Machine Learning +description: 'Explain network isolation changes with our new API platform on Azure Resource Manager and how to maintain network isolation' +services: machine-learning +ms.service: machine-learning +ms.subservice: enterprise-readiness +ms.topic: how-to +ms.author: jhirono +author: jhirono +ms.reviewer: larryfr +ms.date: 05/13/2022 +--- + +# Network Isolation Change with Our New API Platform on Azure Resource Manager + +In this article, you'll learn about network isolation changes with our new v2 API platform on Azure Resource Manager (ARM) and its effect on network isolation. + +## What is the new API platform on Azure Resource Manager (ARM) + +There are two types of operations used by the v1 and v2 APIs, __Azure Resource Manager (ARM)__ and __Azure Machine Learning workspace__. + +With the v1 API, most operations used the workspace. For v2, we've moved most operations to use public ARM. + +| API version | Public ARM | Workspace | +| ----- | ----- | ----- | +| v1 | Workspace and compute create, update, and delete (CRUD) operations. | Other operations such as experiments. | +| v2 | Most operations such as workspace, compute, datastore, dataset, job, environment, code, component, endpoints. | Remaining operations. | + + +The v2 API provides a consistent API in one place. You can more easily use Azure role-based access control and Azure Policy for resources with the v2 API because it's based on Azure Resource Manager. + +The Azure Machine Learning CLI v2 uses our new v2 API platform. New features such as [managed online endpoints](concept-endpoints.md) are only available using the v2 API platform. + +## What are the network isolation changes with V2 + +As mentioned in the previous section, there are two types of operations; with ARM and with the workspace. With the __legacy v1 API__, most operations used the workspace. With the v1 API, adding a private endpoint to the workspace provided network isolation for everything except CRUD operations on the workspace or compute resources. + +With the __new v2 API__, most operations use ARM. So enabling a private endpoint on your workspace doesn't provide the same level of network isolation. Operations that use ARM communicate over public networks, and include any metadata (such as your resource IDs) or parameters used by the operation. For example, the [create or update job](/rest/api/azureml/jobs/create-or-update) api sends metadata, and [parameters](/azure/machine-learning/reference-yaml-job-command). + +> [!TIP] +> * Public ARM operations do not surface data in your storage account on public networks. +> * Your communication with public ARM is encrypted using TLS 1.2. + +If you need time to evaluate the new v2 API before adopting it in your enterprise solutions, or have a company policy that prohibits sending communication over public networks, we'll provide a *v1_legacy_mode* parameter. When enabled, this parameter disables the v2 API for your workspace. + +> [!IMPORTANT] +> Enabling v1_legacy_mode may prevent you from using features provided by the v2 API. 
For example, some features of Azure Machine Learning studio may be unavailable. + +## Scenarios and Required Actions + +>[!WARNING] +>The *v1_legacy_mode* parameter is not implemented yet. It will be implemented the week of May 15th, 2022. + +* If you don't plan on using a private endpoint with your workspace, you don't need to enable parameter. + +* If you're OK with operations communicating with public ARM, you don't need to enable the parameter. + +* You only need to enable the parameter if you're using a private endpoint with the workspace _and_ don't want to allow operations with ARM over public networks. + +Once we implement the parameter, it will be retroactively applied to existing workspaces using the following logic: + +* If you have __an existing workspace with a private endpoint__, the flag will be __true__. + +* If you have __an existing workspace without a private endpoint__ (public workspace), the flag will be __false__. + +After the parameter has been implemented, the default value of the flag depends on the underlying REST API version used when you create a workspace (with a private endpoint): + +* If the API version is __older__ than `2022-05-01`, then the flag is __true__ by default. +* If the API version is `2022-05-01` or __newer__, then the flag is __false__ by default. + +> [!IMPORTANT] +> If you want to use the v2 API with your workspace, you must set the v1_legacy_mode parameter to false. + +## How to update v1_legacy_mode parameter + +>[!WARNING] +>This parameter is not implemented yet. It will be implemented the week of May 15th, 2022. + +To update v1_legacy_mode, use the following steps: + +# [Python](#tab/python) + +To disable v1_legacy_mode, use [Workspace.update](/python/api/azureml-core/azureml.core.workspace(class)#update-friendly-name-none--description-none--tags-none--image-build-compute-none--service-managed-resources-settings-none--primary-user-assigned-identity-none--allow-public-access-when-behind-vnet-none-) and set `v1_legacy_mode=false`. + +```python +from azureml.core import Workspace + +ws = Workspace.from_config() +ws.update(v1_legacy_mode=false) +``` + +# [Azure CLI extension v1](#tab/azurecliextensionv1) + +The Azure CLI [extension v1 for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace update](/cli/azure/ml/workspace#az-ml-workspace-update) command. To enable the parameter for a workspace, add the parameter `--set v1-legacy-mode=true`. + +--- + +## Next steps + +* [Use a private endpoint with Azure Machine Learning workspace](how-to-configure-private-link.md). +* [Create private link for managing Azure resources](/azure/azure-resource-manager/management/create-private-link-access-portal). \ No newline at end of file diff --git a/articles/machine-learning/how-to-configure-private-link.md b/articles/machine-learning/how-to-configure-private-link.md index d6b3889b34c70..aea12cbb0afdd 100644 --- a/articles/machine-learning/how-to-configure-private-link.md +++ b/articles/machine-learning/how-to-configure-private-link.md @@ -29,6 +29,7 @@ Azure Private Link enables you to connect to your workspace using a private endp > * [Secure training environments](how-to-secure-training-vnet.md). > * [Secure inference environments](how-to-secure-inferencing-vnet.md). > * [Use Azure Machine Learning studio in a VNet](how-to-enable-studio-virtual-network.md). 
+> * [API platform network isolation](how-to-configure-network-isolation-with-v2.md) ## Prerequisites @@ -77,7 +78,7 @@ ws = Workspace.create(name='myworkspace', # [Azure CLI extension 2.0 preview](#tab/azurecliextensionv2) -When using the Azure CLI [extension 2.0 CLI preview for machine learning](how-to-configure-cli.md), a YAML document is used to configure the workspace. The following is an of creating a new workspace using a YAML configuration: +When using the Azure CLI [extension 2.0 CLI preview for machine learning](how-to-configure-cli.md), a YAML document is used to configure the workspace. The following is an example of creating a new workspace using a YAML configuration: > [!TIP] > When using private link, your workspace cannot use Azure Container Registry tasks compute for image building. The `image_build_compute` property in this configuration specifies a CPU compute cluster name to use for Docker image environment building. You can also specify whether the private link workspace should be accessible over the internet using the `public_network_access` property. @@ -322,7 +323,7 @@ The Azure CLI [extension 1.0 for machine learning](reference-azure-machine-learn In some situations, you may want to allow someone to connect to your secured workspace over a public endpoint, instead of through the VNet. Or you may want to remove the workspace from the VNet and re-enable public access. > [!IMPORTANT] -> Enabling public access doesn't remove any private endpoints that exist. All communications between components behind the VNet that the private endpoint(s) connect to is still secured. It enables public access only to the workspace, in addition to the private access through any private endpoints. +> Enabling public access doesn't remove any private endpoints that exist. All communications between components behind the VNet that the private endpoint(s) connect to are still secured. It enables public access only to the workspace, in addition to the private access through any private endpoints. > [!WARNING] > When connecting over the public endpoint while the workspace uses a private endpoint to communicate with other resources: @@ -439,3 +440,5 @@ If you want to create an isolated Azure Kubernetes Service used by the workspace * For more information on securing your Azure Machine Learning workspace, see the [Virtual network isolation and privacy overview](how-to-network-security-overview.md) article. * If you plan on using a custom DNS solution in your virtual network, see [how to use a workspace with a custom DNS server](how-to-custom-dns.md). 
+ +* [API platform network isolation](how-to-configure-network-isolation-with-v2.md) diff --git a/articles/machine-learning/how-to-network-security-overview.md b/articles/machine-learning/how-to-network-security-overview.md index a9115a219dd2b..095bdb9cdb12f 100644 --- a/articles/machine-learning/how-to-network-security-overview.md +++ b/articles/machine-learning/how-to-network-security-overview.md @@ -27,6 +27,7 @@ Secure Azure Machine Learning workspace resources and compute environments using > * [Enable studio functionality](how-to-enable-studio-virtual-network.md) > * [Use custom DNS](how-to-custom-dns.md) > * [Use a firewall](how-to-access-azureml-behind-firewall.md) +> * [API platform network isolation](how-to-configure-network-isolation-with-v2.md) > > For a tutorial on creating a secure workspace, see [Tutorial: Create a secure workspace](tutorial-create-secure-workspace.md) or [Tutorial: Create a secure workspace using a template](tutorial-create-secure-workspace-template.md). @@ -237,4 +238,5 @@ This article is part of a series on securing an Azure Machine Learning workflow. * [Secure the inference environment](how-to-secure-inferencing-vnet.md) * [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use custom DNS](how-to-custom-dns.md) -* [Use a firewall](how-to-access-azureml-behind-firewall.md) \ No newline at end of file +* [Use a firewall](how-to-access-azureml-behind-firewall.md) +* [API platform network isolation](how-to-configure-network-isolation-with-v2.md) \ No newline at end of file diff --git a/articles/machine-learning/how-to-secure-workspace-vnet.md b/articles/machine-learning/how-to-secure-workspace-vnet.md index 2259832d444de..face26f587cc4 100644 --- a/articles/machine-learning/how-to-secure-workspace-vnet.md +++ b/articles/machine-learning/how-to-secure-workspace-vnet.md @@ -27,6 +27,7 @@ In this article, you learn how to secure an Azure Machine Learning workspace and > * [Enable studio functionality](how-to-enable-studio-virtual-network.md) > * [Use custom DNS](how-to-custom-dns.md) > * [Use a firewall](how-to-access-azureml-behind-firewall.md) +> * [API platform network isolation](how-to-configure-network-isolation-with-v2.md) > > For a tutorial on creating a secure workspace, see [Tutorial: Create a secure workspace](tutorial-create-secure-workspace.md) or [Tutorial: Create a secure workspace using a template](tutorial-create-secure-workspace-template.md). @@ -352,4 +353,5 @@ This article is part of a series on securing an Azure Machine Learning workflow. 
* [Use custom DNS](how-to-custom-dns.md) * [Use a firewall](how-to-access-azureml-behind-firewall.md) * [Tutorial: Create a secure workspace](tutorial-create-secure-workspace.md) -* [Tutorial: Create a secure workspace using a template](tutorial-create-secure-workspace-template.md) \ No newline at end of file +* [Tutorial: Create a secure workspace using a template](tutorial-create-secure-workspace-template.md) +* [API platform network isolation](how-to-configure-network-isolation-with-v2.md) \ No newline at end of file diff --git a/articles/machine-learning/toc.yml b/articles/machine-learning/toc.yml index 1dfee4a31d060..5a6106c1c4efc 100644 --- a/articles/machine-learning/toc.yml +++ b/articles/machine-learning/toc.yml @@ -269,6 +269,8 @@ - name: Configure required network traffic href: how-to-access-azureml-behind-firewall.md displayName: firewall, user-defined route, udr + - name: Configure network isolation with v2 + href: how-to-configure-network-isolation-with-v2.md - name: Data protection items: - name: Failover & disaster recovery diff --git a/articles/mysql/flexible-server/whats-new.md b/articles/mysql/flexible-server/whats-new.md index c9b880835e442..b9ec14f80a487 100644 --- a/articles/mysql/flexible-server/whats-new.md +++ b/articles/mysql/flexible-server/whats-new.md @@ -23,7 +23,7 @@ This article summarizes new releases and features in Azure Database for MySQL - - **Minor version upgrade for Azure Database for MySQL - Flexible server to 8.0.28** Azure Database for MySQL - Flexible Server 8.0 now is running on minor version 8.0.28*, to learn more about changes coming in this minor version [visit Changes in MySQL 8.0.28 (2022-01-18, General Availability)](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-28.html) -- **Minor version upgrade for Azure Database for MySQL - Single server to 5.7.37** +- **Minor version upgrade for Azure Database for MySQL - Flexible server to 5.7.37** Azure Database for MySQL - Flexible Server 5.7 now is running on minor version 5.7.37*, to learn more about changes coming in this minor version [visit Changes in MySQL 5.7.37 (2022-01-18, General Availability](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-37.html) * Please note that some regions are still running older minor versions of the Azure Database for MySQL and will be patched by end of April 2022. 
diff --git a/articles/postgresql/TOC.yml b/articles/postgresql/TOC.yml index d535666c218be..08786eecb5fbb 100644 --- a/articles/postgresql/TOC.yml +++ b/articles/postgresql/TOC.yml @@ -14,6 +14,16 @@ items: - name: Migration items: + - name: Single Server to Flexible Server migration (preview) + items: + - name: Single to Flexible - Concepts + href: concepts-single-to-flexible.md + - name: Single to Flexible - Migrate using portal + href: how-to-migrate-single-to-flex-portal.md + - name: Single to Flexible - Migrate using CLI + href: how-to-migrate-single-to-flex-cli.md + - name: Single to Flexible - Set up Azure AD app using portal + href: how-to-setup-aad-app-portal.md - name: Migrate data with pg_dump and pg_restore href: howto-migrate-using-dump-and-restore.md displayName: pg_dump, pg_restore diff --git a/articles/postgresql/concepts-single-to-flexible.md b/articles/postgresql/concepts-single-to-flexible.md new file mode 100644 index 0000000000000..4df6194ae9fd4 --- /dev/null +++ b/articles/postgresql/concepts-single-to-flexible.md @@ -0,0 +1,184 @@ +--- +title: "Migrate from Azure Database for PostgreSQL Single Server to Flexible Server - Concepts" +titleSuffix: Azure Database for PostgreSQL Flexible Server +description: Concepts about migrating your Single server to Azure database for PostgreSQL Flexible server. +author: shriram-muthukrishnan +ms.author: shriramm +ms.service: postgresql +ms.topic: conceptual +ms.date: 05/11/2022 +ms.custom: "mvc, references_regions" +--- + +# Migrate from Azure Database for PostgreSQL Single Server to Flexible Server (Preview) + +>[!NOTE] +> Single Server to Flexible Server migration feature is in public preview. + +Azure Database for PostgreSQL Flexible Server provides zone redundant high availability, control over price, and control over maintenance window. Single to Flexible Server Migration feature enables customers to migrate their databases from Single server to Flexible. See this [documentation](./flexible-server/concepts-compare-single-server-flexible-server.md) to understand the differences between Single and Flexible servers. Customers can initiate migrations for multiple servers and databases in a repeatable fashion using this migration feature. This feature automates most of the steps needed to do the migration and thus making the migration journey across Azure platforms as seamless as possible. The feature is provided free of cost for customers. + +Single to Flexible server migration is enabled in **Preview** in Australia Southeast, Canada Central, Canada East, East Asia, North Central US, South Central US, Switzerland North, UAE North, UK South, UK West, West US, and Central US. + +## Overview + +Single to Flexible server migration feature provides an inline experience to migrate databases from Single Server (source) to Flexible Server (target). + +You choose the source server and can select up to **8** databases from it. This limitation is per migration task. The migration feature automates the following steps: + +1. Creates the migration infrastructure in the region of the target flexible server +2. Creates public IP address and attaches it to the migration infrastructure +3. Allow-listing of migration infrastructure’s IP address on the firewall rules of both source and target servers +4. Creates a migration project with both source and target types as Azure database for PostgreSQL +5. Creates a migration activity to migrate the databases specified by the user from source to target. +6. Migrates schema from source to target +7. 
Creates databases with the same name on the target Flexible server
+8. Migrates data from source to target
+
+Following is the flow diagram for the Single to Flexible Server migration feature.
+:::image type="content" source="./media/concepts-single-to-flex/concepts-flow-diagram.png" alt-text="Single to Flexible Server migration" lightbox="./media/concepts-single-to-flex/concepts-flow-diagram.png":::
+
+**Steps:**
+1. Create a PostgreSQL Flexible Server (the target)
+2. Invoke migration
+3. Migration infrastructure provisioned (DMS)
+4. Initiates migration – (4a) Initial dump/restore (online and offline) (4b) streaming the changes (online only)
+5. Cutover to the target
+
+The migration feature is exposed through the **Azure portal** and via easy-to-use **Azure CLI** commands. It allows you to create migrations, list migrations, display migration details, modify the state of a migration, and delete migrations.
+
+## Migration modes comparison
+
+Single to Flexible Server migration supports online and offline modes of migration. The online option provides reduced-downtime migration with logical replication restrictions, while the offline option offers a simpler migration but may incur extended downtime depending on the size of the databases.
+
+The following table summarizes the differences between these two modes of migration.
+
+| Capability | Online | Offline |
+|:---------------|:-------------|:-----------------|
+| Database availability for reads during migration | Available | Available |
+| Database availability for writes during migration | Available | Generally not recommended. Any writes initiated after the migration starts are not captured or migrated |
+| Application suitability | Applications that need maximum uptime | Applications that can afford a planned downtime window |
+| Environment suitability | Production environments | Usually development and testing environments, and some production environments that can afford downtime |
+| Suitability for write-heavy workloads | Suitable, but expect to reduce the workload during migration | Not applicable. Writes at the source after the migration begins are not replicated to the target. |
+| Manual cutover | Required | Not required |
+| Downtime required | Less | More |
+| Logical replication limitations | Applicable | Not applicable |
+| Migration time required | Depends on database size and the write activity until cutover | Depends on database size |
+
+**Migration steps involved for Offline mode** = Dump of the source Single Server database, followed by the restore at the target Flexible Server.
+
+The following table shows the approximate time taken to perform offline migrations for databases of various sizes.
+
+>[!NOTE]
+> Add ~15 minutes for the migration infrastructure to get deployed for each migration task, where each task can migrate up to 8 databases.
+
+| Database size | Approximate time taken (HH:MM) |
+|:---------------|:-------------|
+| 1 GB | 00:01 |
+| 5 GB | 00:05 |
+| 10 GB | 00:10 |
+| 50 GB | 00:45 |
+| 100 GB | 06:00 |
+| 500 GB | 08:00 |
+| 1000 GB | 09:30 |
+
+**Migration steps involved for Online mode** = Dump of the source Single Server database(s), restore of that dump in the target Flexible Server, followed by replication of ongoing changes (change data capture using logical decoding).
+
+The time taken for an online migration to complete is dependent on the incoming writes to the source server. The higher the write workload on the source, the more time it takes for the data to be replicated to the target flexible server.
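+
+If you want a rough sense of where your databases fall in the timing table above before picking a mode, you can check their sizes directly on the source Single Server. The following is a minimal sketch using `psql`; the server name and admin user are placeholders, and the `<admin-user>@<source-server-name>` user format and `sslmode=require` are the usual Single Server connection conventions.
+
+```bash
+# Sketch: list user database sizes on the source Single Server to compare against
+# the offline timing table above. Replace the placeholders with your own values.
+psql "host=<source-server-name>.postgres.database.azure.com port=5432 dbname=postgres user=<admin-user>@<source-server-name> sslmode=require" \
+  -c "SELECT datname AS database,
+             pg_size_pretty(pg_database_size(datname)) AS size
+      FROM pg_database
+      WHERE datistemplate = false
+      ORDER BY pg_database_size(datname) DESC;"
+```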
+ +Based on the above differences, pick the mode that best works for your workloads. + + + +## Migration steps + +### Pre-requisites + +Follow the steps provided in this section before you get started with the single to flexible server migration feature. + +- **Target Server Creation** - You need to create the target PostgreSQL flexible server before using the migration feature. Use the creation [QuickStart guide](./flexible-server/quickstart-create-server-portal.md) to create one. + +- **Source Server pre-requisites** - You must [enable logical replication](./concepts-logical.md) on the source server. + + :::image type="content" source="./media/concepts-single-to-flex/logical-replication-support.png" alt-text="Logical replication from Azure portal" lightbox="./media/concepts-single-to-flex/logical-replication-support.png"::: + +>[!NOTE] +> Enabling logical replication will require a server reboot for the change to take effect. + +- **Azure Active Directory App set up** - It is a critical component of the migration feature. Azure AD App helps with role-based access control as the migration feature needs access to both the source and target servers. See [How to setup and configure Azure AD App](./how-to-setup-aad-app-portal.md) for step-by-step process. + +### Data and schema migration + +Once all these pre-requisites are taken care of, you can do the migration. This automated step involves schema and data migration using Azure portal or Azure CLI. + +- [Migrate using Azure portal](./how-to-migrate-single-to-flex-portal.md) +- [Migrate using Azure CLI](./how-to-migrate-single-to-flex-cli.md) + +### Post migration + +- All the resources created by this migration tool will be automatically cleaned up irrespective of whether the migration has **succeeded/failed/cancelled**. There is no action required from you. + +- If your migration has failed and you want to retry the migration, then you need to create a new migration task with a different name and retry the operation. + +- If you have more than eight databases on your single server and if you want to migrate them all, then it is recommended to create multiple migration tasks with each task migrating up to eight databases. + +- The migration does not move the database users and roles of the source server. This has to be manually created and applied to the target server post migration. + +- For security reasons, it is highly recommended to delete the Azure Active Directory app once the migration completes. + +- Post data validations and making your application point to flexible server, you can consider deleting your single server. + +## Limitations + +### Size limitations + +- Databases of sizes up to 1TB can be migrated using this feature. To migrate larger databases or heavy write workloads, reach out to your account team or reach us @ AskAzureDBforPGS2F@microsoft.com. + +- In one migration attempt, you can migrate up to eight user databases from a single server to flexible server. In case you have more databases to migrate, you can create multiple migrations between the same single and flexible servers. + +### Performance limitations + +- The migration infrastructure is deployed on a 4 vCore VM which may limit the migration performance. + +- The deployment of migration infrastructure takes ~10-15 minutes before the actual data migration starts - irrespective of the size of data or the migration mode (online or offline). 
+
+### Replication limitations
+
+- The Single to Flexible Server migration feature uses the logical decoding feature of PostgreSQL to perform the online migration, and it comes with the following limitations. See the PostgreSQL documentation for [logical replication limitations](https://www.postgresql.org/docs/10/logical-replication-restrictions.html).
+    - **DDL commands** are not replicated.
+    - **Sequence** data is not replicated.
+    - **Truncate** commands are not replicated. (**Workaround**: use DELETE instead of TRUNCATE. To avoid accidental TRUNCATE invocations, you can revoke the TRUNCATE privilege from tables.)
+
+    - Views, materialized views, partition root tables, and foreign tables are not migrated.
+
+- Logical decoding uses resources on the source single server. Consider reducing the workload, or plan to scale up CPU/memory resources on the source single server during the migration.
+
+### Other limitations
+
+- The migration feature migrates only the data and schema of the single server databases to the flexible server. It does not migrate other settings such as server parameters, connection security details, firewall rules, users, roles, and permissions. In other words, everything except data and schema must be manually configured on the target flexible server.
+
+- It does not validate the data in the flexible server post migration. You must validate the data manually.
+
+- The migration tool migrates only user databases, including the postgres database, and not system/maintenance databases.
+
+- For failed migrations, there is no option to retry the same migration task. A new migration task with a unique name has to be created.
+
+- The migration feature does not include an assessment of your single server.
+
+## Best practices
+
+- As part of discovery and assessment, capture the server SKU, CPU usage, storage, database sizes, and extension usage as some of the critical data to help with migrations.
+- Plan the mode of migration for each database. For less complex migrations and smaller databases, consider the offline mode of migration.
+- Batch similar-sized databases in a migration task.
+- Perform large database migrations with one or two databases at a time to avoid source-side load and migration failures.
+- Perform test migrations before migrating to production.
+    - **Testing migrations** is a very important aspect of database migration, to ensure that all aspects of the migration are taken care of, including application testing. The best practice is to begin by running a migration entirely for testing purposes. Start a migration, and after it enters the continuous replication (CDC) phase with minimal lag, make your flexible server the primary database server and use it for testing the application to ensure expected performance and results. If you are migrating to a higher Postgres version, test your application for compatibility.
+
+    - **Production migrations** - Once testing is completed, you can migrate the production databases. At this point you need to finalize the day and time of the production migration. Ideally, application use is low at that time. In addition, all stakeholders that need to be involved should be available and ready. The production migration requires close monitoring. For an online migration, it is important that the replication is complete before you perform the cutover, to prevent data loss.
+
+- Cut over all dependent applications to access the new primary database and open the applications for production usage.
+- Once the application starts running on flexible server, monitor the database performance closely to see if performance tuning is required. + +## Next steps + +- [Migrate to Flexible server using Azure portal](./how-to-migrate-single-to-flex-portal.md). +- [Migrate to Flexible server using Azure CLI](./how-to-migrate-single-to-flex-cli.md) \ No newline at end of file diff --git a/articles/postgresql/flexible-server/overview.md b/articles/postgresql/flexible-server/overview.md index 69a4c032626c8..2369888dcc186 100644 --- a/articles/postgresql/flexible-server/overview.md +++ b/articles/postgresql/flexible-server/overview.md @@ -120,6 +120,7 @@ One advantage of running your workload in Azure is global reach. The flexible se | US Gov Virginia | :heavy_check_mark: | :heavy_check_mark: | :x: | | UK South | :heavy_check_mark: | :heavy_check_mark: | :x: | | UK West | :heavy_check_mark: | :x: | :x: | +| West Central US | :heavy_check_mark: | :x: | :x: | | West Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | West US | :heavy_check_mark: | :x: | :x: | | West US 2 | :x: $$ | :x: $ | :x: | @@ -129,7 +130,7 @@ $ New Zone-redundant high availability deployments are temporarily blocked in th $$ New server deployments are temporarily blocked in these regions. Already provisioned servers are fully supported. -** Zone-redundant high availability can now be deployed when you provision new servers in these regions. Pre-existing servers deployed in AZ with *no preference* (which you can check on the Azure Portal), the standby will be provisioned in the same AZ. To configure zone-redundant high availability, perform a point-in-time restore of the server and enable HA on the restored server. +** Zone-redundant high availability can now be deployed when you provision new servers in these regions. Pre-existing servers deployed in AZ with *no preference* (which you can check on the Azure portal), the standby will be provisioned in the same AZ. To configure zone-redundant high availability, perform a point-in-time restore of the server and enable HA on the restored server. > [!NOTE] diff --git a/articles/postgresql/how-to-migrate-single-to-flex-cli.md b/articles/postgresql/how-to-migrate-single-to-flex-cli.md new file mode 100644 index 0000000000000..d1e903bd5a8d7 --- /dev/null +++ b/articles/postgresql/how-to-migrate-single-to-flex-cli.md @@ -0,0 +1,342 @@ +--- +title: "Migrate PostgreSQL Single Server to Flexible Server using the Azure CLI" +titleSuffix: Azure Database for PostgreSQL Flexible Server +description: Learn about migrating your Single server databases to Azure database for PostgreSQL Flexible server using CLI. +author: hariramt +ms.author: hariramt +ms.service: postgresql +ms.topic: conceptual +ms.date: 05/09/2022 +--- + +# Migrate Single Server to Flexible Server PostgreSQL using Azure CLI + +>[!NOTE] +> Single Server to Flexible Server migration feature is in public preview. + +This quick start article shows you how to use Single to Flexible Server migration feature to migrate databases from Azure database for PostgreSQL Single server to Flexible server. + +## Before you begin + +1. If you are new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate our offerings. +2. Register your subscription for Azure Database Migration Service (DMS). If you have already done it, you can skip this step. Go to Azure portal homepage and navigate to your subscription as shown below. 
+ + :::image type="content" source="./media/concepts-single-to-flex/single-to-flex-cli-dms.png" alt-text="Screenshot of C L I DMS" lightbox="./media/concepts-single-to-flex/single-to-flex-cli-dms.png"::: + +3. In your subscription, navigate to **Resource Providers** from the left navigation menu. Search for "**Microsoft.DataMigration**"; as shown below and click on **Register**. + + :::image type="content" source="./media/concepts-single-to-flex/single-to-flex-cli-dms-register.png" alt-text="Screenshot of C L I DMS register" lightbox="./media/concepts-single-to-flex/single-to-flex-cli-dms-register.png"::: + +## Pre-requisites + +### Setup Azure CLI + +1. Install the latest Azure CLI for your corresponding operating system from the [Azure CLI install page](/cli/azure/install-azure-cli) +2. In case Azure CLI is already installed, check the version by issuing **az version** command. The version should be **2.28.0 or above** to use the migration CLI commands. If not, update your Azure CLI using this [link](/cli/azure/update-azure-cli.md). +3. Once you have the right Azure CLI version, run the **az login** command. A browser page is opened with Azure sign-in page to authenticate. Provide your Azure credentials to do a successful authentication. For other ways to sign with Azure CLI, visit this [link](/cli/azure/authenticate-azure-cli.md). + + ```bash + az login + ``` +1. Take care of the pre-requisites listed in this [**document**](./concepts-single-to-flexible.md#pre-requisites) which are necessary to get started with the Single to Flexible migration feature. + +## Migration CLI commands + +Single to Flexible Server migration feature comes with a list of easy-to-use CLI commands to do migration-related tasks. All the CLI commands start with **az postgres flexible-server migration**. You can use the **help** parameter to help you with understanding the various options associated with a command and in framing the right syntax for the same. + +```azurecli-interactive +az postgres flexible-server migration --help +``` + + gives you the following output. + + :::image type="content" source="./media/concepts-single-to-flex/single-to-flex-cli-help.png" alt-text="Screenshot of C L I help" lightbox="./media/concepts-single-to-flex/single-to-flex-cli-help.png"::: + +It lists the set of migration commands that are supported along with their actions. Let us look into these commands in detail. + +### Create migration + +The create migration command helps in creating a migration from a source server to a target server + +```azurecli-interactive +az postgres flexible-server migration create -- help +``` + +gives the following result + +:::image type="content" source="./media/concepts-single-to-flex/single-to-flex-cli-create.png" alt-text="Screenshot of C L I create" lightbox="./media/concepts-single-to-flex/single-to-flex-cli-create.png"::: + +It calls out the expected arguments and has an example syntax that needs to be used to create a successful migration from the source to target server. 
The CLI command to create a migration is given below + +```azurecli +az postgres flexible-server migration create [--subscription] + [--resource-group] + [--name] + [--migration-name] + [--properties] +``` + +| Parameter | Description | +| ---- | ---- | +|**subscription** | Subscription ID of the target flexible server | +| **resource-group** | Resource group of the target flexible server | +| **name** | Name of the target flexible server | +| **migration-name** | Unique identifier to migrations attempted to the flexible server. This field accepts only alphanumeric characters and does not accept any special characters except **-**. The name cannot start with a **-** and no two migrations to a flexible server can have the same name. | +| **properties** | Absolute path to a JSON file, that has the information about the source single server | + +**For example:** + +```azurecli-interactive +az postgres flexible-server migration create --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --properties "C:\Users\Administrator\Documents\migrationBody.JSON" +``` + +The **migration-name** argument used in **create migration** command will be used in other CLI commands such as **update, delete, show** to uniquely identify the migration attempt and to perform the corresponding actions. + +The migration feature offers online and offline mode of migration. To know more about the migration modes and their differences, visit this [link](./concepts-single-to-flexible.md) + +Create a migration between a source and target server with a migration mode of your choice. The **create** command needs a JSON file to be passed as part of its **properties** argument. + +The structure of the JSON is given below. + +```bash +{ +"properties": { + "SourceDBServerResourceId":"subscriptions//resourceGroups//providers/Microsoft.DBforPostgreSQL/servers/", + +"SourceDBServerFullyQualifiedDomainName": "fqdn of the source server as per the custom DNS server", +"TargetDBServerFullyQualifiedDomainName": "fqdn of the target server as per the custom DNS server" + +"SecretParameters": { + "AdminCredentials": + { + "SourceServerPassword": "", + "TargetServerPassword": "" + }, +"AADApp": + { + "ClientId": "", + "TenantId": "", + "AadSecret": "" + } +}, + +"MigrationResourceGroup": + { + "ResourceId":"subscriptions//resourceGroups/", + "SubnetResourceId":"/subscriptions//resourceGroups//providers/Microsoft.Network/virtualNetworks//subnets/" + }, + +"DBsToMigrate": + [ + "","" + ], + +"SetupLogicalReplicationOnSourceDBIfNeeded": "true", + +"OverwriteDBsInTarget": "true" + +} + +} + +``` + +Create migration parameters: + +| Parameter | Type | Description | +| ---- | ---- | ---- | +| **SourceDBServerResourceId** | Required | Resource ID of the single server and is mandatory. | +| **SourceDBServerFullyQualifiedDomainName** | optional | Used when a custom DNS server is used for name resolution for a virtual network. The FQDN of the single server as per the custom DNS server should be provided for this property. | +| **TargetDBServerFullyQualifiedDomainName** | optional | Used when a custom DNS server is used for name resolution inside a virtual network. The FQDN of the flexible server as per the custom DNS server should be provided for this property.
**_SourceDBServerFullyQualifiedDomainName_** and **_TargetDBServerFullyQualifiedDomainName_** should be included in the JSON only in the rare scenario where a custom DNS server is used for name resolution instead of the Azure-provided DNS. Otherwise, these parameters should not be included in the JSON file. |
+| **SecretParameters** | Required | Passwords of the admin users for both the single server and the flexible server, along with the Azure AD app credentials. These credentials are used to authenticate against the source and target servers and to check for proper authorization access to the resources. |
+| **MigrationResourceGroup** | optional | This section consists of two properties.
**ResourceID (optional)** : The migration infrastructure and other network infrastructure components are created to migrate data and schema from the source to the target. By default, all the components created by this feature are provisioned under the resource group of the target server. If you wish to deploy them under a different resource group, you can assign the resource ID of that resource group to this property.
**SubnetResourceID (optional)** : If your source server has public access turned OFF, or if your target server is deployed inside a VNet, specify a subnet under which the migration infrastructure needs to be created so that it can connect to both the source and target servers. |
+| **DBsToMigrate** | Required | Specify the list of databases you want to migrate to the flexible server. You can include a maximum of 8 database names at a time. |
+| **SetupLogicalReplicationOnSourceDBIfNeeded** | Optional | Logical replication can be enabled on the source server automatically by setting this property to **true**. This change in the server settings requires a server restart with a downtime of a few minutes (~2-3 mins). |
+| **OverwriteDBsinTarget** | Optional | If the target server happens to have an existing database with the same name as the one you are trying to migrate, the migration will pause until you acknowledge that overwrites in the target DBs are allowed. This pause can be avoided by giving the migration feature permission to automatically overwrite databases, by setting the value of this property to **true**. |
+
+### Mode of migrations
+
+The default migration mode for migrations created using CLI commands is **online**. With the above properties filled out in your JSON file, an online migration is created from your single server to the flexible server.
+
+If you want to migrate in **offline** mode, you need to add an additional property **"TriggerCutover":"true"** to your properties JSON file before initiating the create command.
+
+### List migrations
+
+The **list** command shows the migration attempts that were made to a flexible server. The CLI command to list migrations is given below:
+
+```azurecli
+az postgres flexible-server migration list [--subscription]
+                                           [--resource-group]
+                                           [--name]
+                                           [--filter]
+```
+
+There is a parameter called **filter** and it can take **Active** and **All** as values.
+
+- **Active** – Lists the currently active migration attempts for the target server. It does not include migrations that have reached a failed, canceled, or succeeded state.
+- **All** – Lists all the migration attempts to the target server. This includes both the active and past migrations, irrespective of state.
+
+Run the following command for any additional information:
+
+```azurecli-interactive
+az postgres flexible-server migration list --help
+```
+
+### Show Details
+
+The **show** command gets the details of a specific migration. This includes information on the current state and substate of the migration. The CLI command to show the details of a migration is given below:
+
+```azurecli
+az postgres flexible-server migration show [--subscription]
+                                           [--resource-group]
+                                           [--name]
+                                           [--migration-name]
+```
+
+The **migration-name** is the name assigned to the migration during the **create migration** command. Here is a snapshot of the sample response from the **show** CLI command.
+
+:::image type="content" source="./media/concepts-single-to-flex/single-to-flex-cli-migration-name.png" alt-text="Screenshot of C L I migration name" lightbox="./media/concepts-single-to-flex/single-to-flex-cli-migration-name.png":::
+
+Some important points to note on the command response:
+
+- As soon as the **create** migration command is triggered, the migration moves to the **InProgress** state and **PerformingPreRequisiteSteps** substate.
It takes up to 15 minutes for the migration workflow to deploy the migration infrastructure, configure firewall rules with source and target servers, and to perform a few maintenance tasks. +- After the **PerformingPreRequisiteSteps** substate is completed, the migration moves to the substate of **Migrating Data** where the dump and restore of the databases take place. +- Each DB being migrated has its own section with all migration details such as table count, incremental inserts, deletes, pending bytes, etc. +- The time taken for **Migrating Data** substate to complete is dependent on the size of databases that are being migrated. +- For **Offline** mode, the migration moves to **Succeeded** state as soon as the **Migrating Data** sub state completes successfully. If there is an issue at the **Migrating Data** substate, the migration moves into a **Failed** state. +- For **Online** mode, the migration moves to the state of **WaitingForUserAction** and a substate of **WaitingForCutoverTrigger** after the **Migrating Data** state completes successfully. The details of **WaitingForUserAction** state are covered in detail in the next section. + +```azurecli-interactive + az postgres flexible-server migration show -- help + ``` + +for any additional information. + +### Update migration + +As soon as the infrastructure setup is complete, the migration activity will pause with appropriate messages seen in the **show details** CLI command response if some pre-requisites are missing or if the migration is at a state to perform a cutover. At this point, the migration goes into a state called **WaitingForUserAction**. The **update migration** command is used to set values for parameters, which helps the migration to move to the next stage in the process. Let us look at each of the sub-states. + +- **WaitingForLogicalReplicationSetupRequestOnSourceDB** - If the logical replication is not set at the source server or if it was not included as a part of the JSON file, the migration will wait for logical replication to be enabled at the source. A user can enable the logical replication setting manually by changing the replication flag to **Logical** on the portal. This would require a server restart. This can also be enabled by the following CLI command + +```azurecli +az postgres flexible-server migration update [--subscription] + [--resource-group] + [--name] + [--migration-name] + [--initiate-data-migration] +``` + +You need to pass the value **true** to the **initiate-data-migration** property to set logical replication on your source server. + +**For example:** + +```azurecli-interactive +az postgres flexible-server migration update --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --initiate-data-migration true" +``` + +In case you have enabled it manually, **you would still need to issue the above update command** for the migration to move out of the **WaitingForUserAction** state. The server does not need a reboot again since it was already done via the portal action. + +- **WaitingForTargetDBOverwriteConfirmation** - This is the state where migration is waiting for confirmation on target overwrite as data is already present in the target server for the database that is being migrated. This can be enabled by the following CLI command. 
+ +```azurecli +az postgres flexible-server migration update [--subscription] + [--resource-group] + [--name] + [--migration-name] + [--overwrite-dbs] +``` + +You need to pass the value **true** to the **overwrite-dbs** property to give the permissions to the migration to overwrite any existing data in the target server. + +**For example:** + +```azurecli-interactive +az postgres flexible-server migration update --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --overwrite-dbs true" +``` + +- **WaitingForCutoverTrigger** - Migration gets to this state when the dump and restore of the databases have been completed and the ongoing writes at your source single server is being replicated to the target flexible server.You should wait for the replication to complete so that the target is in sync with the source. You can monitor the replication lag by using the response from the **show migration** command. There is a metric called **Pending Bytes** associated with each database that is being migrated and this gives you indication of the difference between the source and target database in bytes. This should be nearing zero over time. Once it reaches zero for all the databases, stop any further writes to your single server. This should be followed by the validation of data and schema on your flexible server to make sure it matches exactly with the source server. After completing the above steps, you can trigger **cutover** by using the following CLI command. + +```azurecli +az postgres flexible-server migration update [--subscription] + [--resource-group] + [--name] + [--migration-name] + [--cutover] +``` + +**For example:** + +```azurecli-interactive +az postgres flexible-server migration update --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --cutover" +``` + +After issuing the above command, use the **show details** command to monitor if the cutover has completed successfully. Upon successful cutover, migration will move to **Succeeded** state. Update your application to point to the new target flexible server. + +```azurecli-interactive + az postgres flexible-server migration update -- help + ``` + +for any additional information. + +### Delete/Cancel Migration + +Any ongoing migration attempts can be deleted or cancelled using the **delete migration** command. This command stops all migration activities in that task, but does not drop or rollback any changes on your target server. Below is the CLI command to delete a migration + +```azurecli +az postgres flexible-server migration delete [--subscription] + [--resource-group] + [--name] + [--migration-name] +``` + +**For example:** + +```azurecli-interactive +az postgres flexible-server migration delete --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1" +``` + +```azurecli-interactive + az postgres flexible-server migration delete -- help + ``` + +for any additional information. + +## Monitoring Migration + +The **create migration** command starts a migration between the source and target servers. The migration goes through a set of states and substates before eventually moving into the **completed** state. The **show command** helps to monitor ongoing migrations since it gives the current state and substate of the migration. 
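+
+For example, a simple way to keep an eye on a long-running migration is to poll the **show** command until the migration leaves the **InProgress** state. The loop below is a minimal sketch; the subscription, resource group, server, and migration names are the same sample values used earlier and should be replaced with your own.
+
+```bash
+# Sketch: poll the migration status every five minutes and inspect the returned
+# JSON for the current state and substate.
+while true; do
+  az postgres flexible-server migration show \
+    --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 \
+    --resource-group my-learning-rg \
+    --name myflexibleserver \
+    --migration-name migration1
+  sleep 300
+done
+```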
+
+Migration **states**:
+
+| Migration state | Description |
+| ---- | ---- |
+| **InProgress** | The migration infrastructure is being set up, or the actual data migration is in progress. |
+| **Canceled** | The migration has been canceled or deleted. |
+| **Failed** | The migration has failed. |
+| **Succeeded** | The migration has succeeded and is complete. |
+| **WaitingForUserAction** | The migration is waiting on a user action. This state has a list of substates that were discussed in detail in the previous section. |
+
+Migration **substates**:
+
+| Migration substate | Description |
+| ---- | ---- |
+| **PerformingPreRequisiteSteps** | Infrastructure is being set up and prepped for the data migration. |
+| **MigratingData** | Data is being migrated. |
+| **CompletingMigration** | Migration cutover is in progress. |
+| **WaitingForLogicalReplicationSetupRequestOnSourceDB** | Waiting for logical replication enablement. You can enable it manually or via the **update migration** CLI command covered in the Update migration section above. |
+| **WaitingForCutoverTrigger** | Migration is ready for cutover. You can start the cutover when ready. |
+| **WaitingForTargetDBOverwriteConfirmation** | Waiting for confirmation on target overwrite, as data is already present in the target database being migrated into.
You can enable this via the **update migration** CLI command. | +| **Completed** | Cutover was successful, and migration is complete. | + + +## How to find if custom DNS is used for name resolution? +Navigate to your Virtual network where you deployed your source or the target server and click on **DNS server**. It should indicate if it is using a custom DNS server or default Azure provided DNS server. + +:::image type="content" source="./media/concepts-single-to-flex/single-to-flex-cli-dns-server.png" alt-text="CLI dns server" lightbox="./media/concepts-single-to-flex/single-to-flex-cli-dns-server.png"::: + +## Post Migration Steps + +Make sure the post migration steps listed [here](./concepts-single-to-flexible.md) are followed for a successful end to end migration. + +## Next steps + +- [Single Server to Flexible migration concepts](./concepts-single-to-flexible.md) diff --git a/articles/postgresql/how-to-migrate-single-to-flex-portal.md b/articles/postgresql/how-to-migrate-single-to-flex-portal.md new file mode 100644 index 0000000000000..bf681c144b3a9 --- /dev/null +++ b/articles/postgresql/how-to-migrate-single-to-flex-portal.md @@ -0,0 +1,211 @@ +--- +title: "Migrate PostgreSQL Single Server to Flexible Server using the Azure portal" +titleSuffix: Azure Database for PostgreSQL Flexible Server +description: Learn about migrating your Single server databases to Azure database for PostgreSQL Flexible server using Portal. +author: hariramt +ms.author: hariramt +ms.service: postgresql +ms.topic: conceptual +ms.date: 05/09/2022 +--- + +# Migrate Single Server to Flexible Server PostgreSQL using the Azure portal + +This guide shows you how to use Single to Flexible server migration feature to migrate databases from Azure database for PostgreSQL Single server to Flexible server. + +## Before you begin + +1. If you are new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate our offerings. +2. Register your subscription for the Azure Database Migration Service + +Go to Azure portal homepage and navigate to your subscription as shown below. + +:::image type="content" source="./media/concepts-single-to-flex/single-to-flex-azure-portal.png" alt-text="Azure Portal Subscription details" lightbox="./media/concepts-single-to-flex/single-to-flex-azure-portal.png"::: + +In your subscription, navigate to **Resource Providers** from the left navigation menu. Search for **Microsoft.DataMigration**; as shown below and click on **Register**. + +:::image type="content" source="./media/concepts-single-to-flex/single-to-flex-register-data-migration.png" alt-text="Register Data Migration Service." lightbox="./media/concepts-single-to-flex/single-to-flex-register-data-migration.png"::: + +## Pre-requisites + +Take care of the pre-requisites listed [here](./concepts-single-to-flexible.md#pre-requisites) to get started with the migration feature. + +## Configure migration task + +Single to Flexible server migration feature comes with a simple, wizard-based portal experience. Let us get started to know the steps needed to consume the tool from portal. + +- **Sign into the Azure portal -** Open your web browser and go to the [portal](https://portal.azure.com/). Enter your credentials to sign in. The default view is your service dashboard. +- Navigate to your Azure database for PostgreSQL flexible server.If you have not created an Azure database for PostgreSQL flexible server, create one using this [link](./flexible-server/quickstart-create-server-portal.md). 
+ +- In the **Overview** tab of your flexible server, use the left navigation menu and scroll down to the option of **Migration (preview)** and click on it. + +:::image type="content" source="./media/concepts-single-to-flex/single-to-flex-migration-preview.png" alt-text="Migration Preview Tab details are shown." lightbox="./media/concepts-single-to-flex/single-to-flex-migration-preview.png"::: + +Click the **Migrate from Single Server** button to start a migration from Single Server to Flexible Server. If this is the first time you are using the migration feature, you will see an empty grid with a prompt to begin your first migration. + +:::image type="content" source="./media/concepts-single-to-flex/single-to-flex-migrate-single-server.png" alt-text="Migrate from Single Server screenshot." lightbox="./media/concepts-single-to-flex/single-to-flex-migrate-single-server.png"::: + +If you have already created migrations to your flexible server, you should see the grid populated with information of the list of migrations that were attempted to this flexible server from single servers. + +Click on the **Migrate from Single Server** button. You will be taken through a wizard-based user interface to create a migration to this flexible server from any single server. + +### Setup tab + +The first is the setup tab which has basic information about the migration and the list of pre-requisites that need to be taken care of to get started with migrations. The list of pre-requisites is the same as the ones listed in the pre-requisites section [here](./concepts-single-to-flexible.md). Click on the provided link to know more about the same. + +:::image type="content" source="./media/concepts-single-to-flex/single-to-flex-setup.png" alt-text="Setup Tab detais are shown." lightbox="./media/concepts-single-to-flex/single-to-flex-setup.png"::: + +- The **Migration name** is the unique identifier for each migration to this flexible server. This field accepts only alphanumeric characters and does not accept any special characters except **'-'**. The name cannot start with a **'-'** and should be unique for a target server. No two migrations to the same flexible server can have the same name. +- The **Migration resource group** is where all the migration-related components will be created by this migration feature. + +By default, it is resource group of the target flexible server and all the components will be cleaned up automatically once the migration completes. If you want to create a temporary resource group for migration-related purposes, create a resource group and select the same from the dropdown. + +- For the **Azure Active Directory App**, click the **select** option and pick the app that was created as a part of the pre-requisite step. Once the Azure AD App is chosen, paste the client secret that was generated for the Azure AD app to the **Azure Active Directory Client Secret** field. + +:::image type="content" source="./media/concepts-single-to-flex/single-to-flex-client-secret.png" alt-text="Client Secret is entered." lightbox="./media/concepts-single-to-flex/single-to-flex-client-secret.png"::: + +Click on the **Next** button. + +### Source tab + +:::image type="content" source="./media/concepts-single-to-flex/single-to-flex-migration-source.png" alt-text="Select source for the migration." lightbox="./media/concepts-single-to-flex/single-to-flex-migration-source.png"::: + +The source tab prompts you to give details related to the source single server from which databases needs to be migrated. 
As soon as you pick the **Subscription** and **Resource Group**, the dropdown for server names will have the list of single servers under that resource group across regions. It is recommended to migrate databases from a single server to flexible server in the same region. + +Choose the single server from which you want to migrate databases from, in the drop down. + +Once the single server is chosen, the fields such as **Location, PostgreSQL version, Server admin login name** are automatically pre-populated. The server admin login name is the admin username that was used to create the single server. Enter the password for the **server admin login name**. This is required for the migration feature to login into the single server to initiate the dump and migration. + +You should also see the list of user databases inside the single server that you can pick for migration. You can select up to eight databases that can be migrated in a single migration attempt. If there are more than eight user databases, create multiple migrations using the same experience between the source and target servers. + +The final property in the source tab is migration mode. The migration feature offers online and offline mode of migration. To know more about the migration modes and their differences, please visit this [link](./concepts-single-to-flexible.md). + +Once you pick the migration mode, the restrictions associated with the mode are displayed. + +After filling out all the fields, please click the **Next** button. + +### Target tab + +:::image type="content" source="./media/concepts-single-to-flex/single-to-flex-migration-target.png" alt-text="Select target for the migration." lightbox="./media/concepts-single-to-flex/single-to-flex-migration-target.png"::: + +This tab displays metadata of the flexible server like the **Subscription**, **Resource Group**, **Server name**, **Location**, and **PostgreSQL version**. It displays **server admin login name** which is the admin username that was used during the creation of the flexible server.Enter the corresponding password for the admin user. This is required for the migration feature to login into the flexible server to perform restore operations. + +Choose an option **yes/no** for **Authorize DB overwrite**. + +- If you set the option to **Yes**, you give this migration service permission to overwrite existing data in case when a database that is being migrated to flexible server is already present. +- If set to **No**, it goes into a waiting state and asks you for permission either to overwrite the data or to cancel the migration. + +Click on the **Next** button + +### Networking tab + +The content on the Networking tab depends on the networking topology of your source and target servers. + +- If both source and target servers are in public access, then you are going to see the message below. + +:::image type="content" source="./media/concepts-single-to-flex/single-to-flex-migration-networking.png" alt-text="Migration Networking configuration." lightbox="./media/concepts-single-to-flex/single-to-flex-migration-networking.png"::: + +In this case, you need not do anything and can just click on the **Next** button. + +- If either the source or target server is configured in private access, then the content of the networking tab is going to be different. 
Let us try to understand what does private access mean for single server and flexible server: + +- **Single Server Private Access** – **Deny public network access** set to **Yes** and a private end point configured +- **Flexible Server Private Access** – When flexible server is deployed inside a Vnet. + +If either source or target is configured in private access, then the networking tab looks like the following + +:::image type="content" source="./media/concepts-single-to-flex/single-to-flex-migration-private.png" alt-text="Networking Private Access configuration." lightbox="./media/concepts-single-to-flex/single-to-flex-migration-private.png"::: + +All the fields will be automatically populated with subnet details. This is the subnet in which the migration feature will deploy Azure DMS to move data between the source and target. + +You can go ahead with the suggested subnet or choose a different subnet. But make sure that the selected subnet can connect to both the source and target servers. + +After picking a subnet, click on **Next** button + +### Review + create tab + +This tab gives a summary of all the details for creating the migration. Review the details and click on the **Create** button to start the migration. + +:::image type="content" source="./media/concepts-single-to-flex/single-to-flex-migration-review.png" alt-text="Migration review screenshot." lightbox="./media/concepts-single-to-flex/single-to-flex-migration-review.png"::: + +## Monitoring migrations + +After clicking on the **Create** button, you should see a notification in a few seconds saying the migration was successfully created. + +:::image type="content" source="./media/concepts-single-to-flex/single-to-flex-migration-monitoring.png" alt-text="Migration monitoring screenshot." lightbox="./media/concepts-single-to-flex/single-to-flex-migration-monitoring.png"::: + +You should automatically be redirected to **Migrations (Preview)** page of flexible server that will have a new entry of the recently created migration + +:::image type="content" source="./media/concepts-single-to-flex/single-to-flex-migration-review-tab.png" alt-text="Migration review tab is shown." lightbox="./media/concepts-single-to-flex/single-to-flex-migration-review-tab.png"::: + +The grid displaying the migrations has various columns including **Name**, **Status**, **Source server name**, **Region**, **Version**, **Database names**, and the **Migration start time**. By default, the grid shows the list of migrations in the decreasing order of migration start time. In other words, recent migrations appear on top of the grid. + +You can use the refresh button to refresh the status of the migrations. + +You can click on the migration name in the grid to see the details of that migration. + +:::image type="content" source="./media/concepts-single-to-flex/single-to-flex-migration-grid.png" alt-text="Migration grid is shown." lightbox="./media/concepts-single-to-flex/single-to-flex-migration-grid.png"::: + +- As soon as the migration is created, the migration moves to the **InProgress** state and **PerformingPreRequisiteSteps** substate. It takes up to 10 minutes for the migration workflow to move out of this substate since it takes time to create and deploy DMS, add its IP on firewall list of source and target servers and to perform a few maintenance tasks. +- After the **PerformingPreRequisiteSteps** substate is completed, the migration moves to the substate of **Migrating Data** where the dump and restore of the databases take place. 
+- The time taken for **Migrating Data** substate to complete is dependent on the size of databases that are being migrated. +- You can click on each of the DBs that are being migrated and a fan-out blade appears that has all migration details such as table count, incremental inserts, deletes, pending bytes, etc. +- For **Offline** mode, the migration moves to **Succeeded** state as soon as the **Migrating Data** state completes successfully. If there is an issue at the **Migrating Data** state, the migration moves into a **Failed** state. +- For **Online** mode, the migration moves to the state of **WaitingForUserAction** and **WaitingForCutOver** substate after the **Migrating Data** substate completes successfully. + +:::image type="content" source="./media/concepts-single-to-flex/single-to-flex-migration-status-wait.png" alt-text="Migration status showing wait." lightbox="./media/concepts-single-to-flex/single-to-flex-migration-status-wait.png"::: + +You can click on the migration name to go into the migration details page and should see the substate of **WaitingForCutover**. + +:::image type="content" source="./media/concepts-single-to-flex/single-to-flex-migration-cutover.png" alt-text="Migration ready for cutover." lightbox="./media/concepts-single-to-flex/single-to-flex-migration-cutover.png"::: + +At this stage, the ongoing writes at your source single server will be replicated to the target flexible server using the logical decoding feature of PostgreSQL. You should wait until the replication reaches a state where the target is almost in sync with the source. You can monitor the replication lag by clicking on each of the databases that are being migrated. It opens a fan-out blade with a bunch of metrics. Look for the value of **Pending Bytes** metric and it should be nearing zero over time. Once it reaches to a few MB for all the databases, stop any further writes to your single server and wait until the metric reaches 0. This should be followed by the validation of data and schema on your flexible server to make sure it matches exactly with the source server. + +After completing the above steps, click on the **Cutover** button. You should see the following message + +:::image type="content" source="./media/concepts-single-to-flex/single-to-flex-migration-click-cutover.png" alt-text="Click cutover after migration." lightbox="./media/concepts-single-to-flex/single-to-flex-migration-click-cutover.png"::: + +Click on the **Yes** button to start cutover. + +In a few seconds after starting cutover, you should see the following notification + +:::image type="content" source="./media/concepts-single-to-flex/single-to-flex-migration-successful-cutover.png" alt-text="Successful cutover after migration." lightbox="./media/concepts-single-to-flex/single-to-flex-migration-successful-cutover.png"::: + +Once the cutover is complete, the migration moves to **Succeeded** state and migration of schema data from your single server to flexible server is now complete. You can use the refresh button in the page to check if the cutover was successful. + +After completing the above steps, you can make changes to your application code to point database connection strings to the flexible server and start using it as the primary database server. + +Possible migration states include + +- **InProgress**: The migration infrastructure is being setup, or the actual data migration is in progress. +- **Canceled**: The migration has been cancelled or deleted. +- **Failed**: The migration has failed. 
+- **Succeeded**: The migration has succeeded and is complete. +- **WaitingForUserAction**: Migration is waiting on a user action.. + +Possible migration substates include + +- **PerformingPreRequisiteSteps**: Infrastructure is being set up and is being prepped for data migration +- **MigratingData**: Data is being migrated +- **CompletingMigration**: Migration cutover in progress +- **WaitingForLogicalReplicationSetupRequestOnSourceDB**: Waiting for logical replication enablement. +- **WaitingForCutoverTrigger**: Migration is ready for cutover. +- **WaitingForTargetDBOverwriteConfirmation**: Waiting for confirmation on target overwrite as data is present in the target server being migrated into. +- **Completed**: Cutover was successful, and migration is complete. + +## Cancel migrations + +You also have the option to cancel any ongoing migrations. For a migration to be canceled, it must be in **InProgress** or **WaitingForUserAction** state. You cannot cancel a migration that has either already **Succeeded** or **Failed**. + +You can choose multiple ongoing migrations at once and can cancel them. + +:::image type="content" source="./media/concepts-single-to-flex/single-to-flex-migration-multiple.png" alt-text="Migration multiple" lightbox="./media/concepts-single-to-flex/single-to-flex-migration-multiple.png"::: + +Note that **cancel migration** just stops any more further migration activity on your target server. It will not drop or roll back any changes on your target server that were done by the migration attempts. Make sure to drop the databases involved in a cancelled migration on your target server. + +## Post migration steps + +Make sure the post migration steps listed [here](./concepts-single-to-flexible.md) are followed for a successful end to end migration. + +## Next steps +- [Single Server to Flexible migration concepts](./concepts-single-to-flexible.md) \ No newline at end of file diff --git a/articles/postgresql/how-to-setup-aad-app-portal.md b/articles/postgresql/how-to-setup-aad-app-portal.md new file mode 100644 index 0000000000000..c08e29d89f14b --- /dev/null +++ b/articles/postgresql/how-to-setup-aad-app-portal.md @@ -0,0 +1,73 @@ +--- +title: "Setup Azure AD app to use with Single to Flexible migration" +titleSuffix: Azure Database for PostgreSQL Flexible Server +description: Learn about setting up Azure AD App to be used with Single to Flexible Server migration feature. +author: hariramt +ms.author: hariramt +ms.service: postgresql +ms.topic: conceptual +ms.date: 05/09/2022 +--- + +# Setup Azure AD app to use with Single to Flexible server Migration + +This quick start article shows you how to setup Azure Active Directory (Azure AD) app to use with Single to Flexible server migration. It's an important component of the Single to Flexible migration feature. See [Azure Active Directory app](../active-directory/develop/howto-create-service-principal-portal.md) for details. Azure AD App helps with role-based access control (RBAC) as the migration infrastructure requires access to both the source and target servers, and is restricted by the roles assigned to the Azure Active Directory App. The Azure AD app instance once created, can be used to manage multiple migrations. To get started, create a new Azure Active Directory Enterprise App by doing the following steps: + +## Create Azure AD App + +1. If you're new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate our offerings. +2. 
Search for Azure Active Directory in the search bar on the top in the portal. +3. Within the Azure Active Directory portal, under **Manage** on the left, choose **App Registrations**. +4. Click on **New Registration** + :::image type="content" source="./media/concepts-single-to-flex/azure-ad-new-registration.png" alt-text="New Registration for Azure Active Directory App." lightbox="./media/concepts-single-to-flex/azure-ad-new-registration.png"::: + +5. Give the app registration a name, choose an option that suits your needs for account types and click register + :::image type="content" source="./media/concepts-single-to-flex/azure-ad-application-registration.png" alt-text="Azure AD App Name screen." lightbox="./media/concepts-single-to-flex/azure-ad-application-registration.png"::: + +6. Once the app is created, you can copy the client ID and tenant ID required for later steps in the migration. Next, click on **Add a certificate or secret**. + :::image type="content" source="./media/concepts-single-to-flex/azure-ad-add-secret-screen.png" alt-text="Add a certificate screen." lightbox="./media/concepts-single-to-flex/azure-ad-add-secret-screen.png"::: + +7. In the next screen, click on **New client secret**. + :::image type="content" source="./media/concepts-single-to-flex/azure-ad-add-new-client-secret.png" alt-text="New Client Secret screen." lightbox="./media/concepts-single-to-flex/azure-ad-add-new-client-secret.png"::: + +8. In the fan-out blade that opens, add a description, and select the drop-down to pick the life span of your Azure Active Directory App. Once all the migrations are complete, the Azure Active Directory App that was created for Role Based Access Control can be deleted. The default option is six months. If you don't need Azure Active Directory App for six months, choose three months and click **Add**. + :::image type="content" source="./media/concepts-single-to-flex/azure-ad-add-client-secret-description.png" alt-text="Client Secret Description." lightbox="./media/concepts-single-to-flex/azure-ad-add-client-secret-description.png"::: + +9. In the next screen, copy the **Value** column that has the details of the Azure Active Directory App secret. This can be copied only while creation. If you miss copying the secret, you will need to delete the secret and create another one for future tries. + :::image type="content" source="./media/concepts-single-to-flex/azure-ad-client-secret-value.png" alt-text="Copying client secret." lightbox="./media/concepts-single-to-flex/azure-ad-client-secret-value.png"::: + +10. Once Azure Active Directory App is created, you will need to add contributor privileges for this Azure Active Directory app to the following resources: + + | Resource | Type | Description | + | ---- | ---- | ---- | + | Single Server | Required | Source single server you're migrating from. | + | Flexible Server | Required | Target flexible server you're migrating into. | + | Azure Resource Group | Required | Resource group for the migration. By default, this is the target flexible server resource group. If you're using a temporary resource group to create the migration infrastructure, the Azure Active Directory App will require contributor privileges to this resource group. | + | VNET | Required (if used) | If the source or the target happens to have private access, then the Azure Active Directory App will require contributor privileges to corresponding VNet. If you're using public access, you can skip this step. 
| + + +## Add contributor privileges to an Azure resource + +Repeat the steps listed below for source single server, target flexible server, resource group and Vnet (if used). + +1. For the target flexible server, select the target flexible server in the Azure portal. Click on Access Control (IAM) on the top left. + :::image type="content" source="./media/concepts-single-to-flex/azure-ad-iam-screen.png" alt-text="Access Control I A M screen." lightbox="./media/concepts-single-to-flex/azure-ad-iam-screen.png"::: + +2. Click **Add** and choose **Add role assignment**. + :::image type="content" source="./media/concepts-single-to-flex/azure-ad-add-role-assignment.png" alt-text="Add role assignment here." lightbox="./media/concepts-single-to-flex/azure-ad-add-role-assignment.png"::: + +> [!NOTE] +> The Add role assignment capability is only enabled for users in the subscription with role type as **Owners**. Users with other roles do not have permission to add role assignments. + +3. Under the **Role** tab, click on **Contributor** and click Next button + :::image type="content" source="./media/concepts-single-to-flex/azure-ad-contributor-privileges.png" alt-text="Choosing Contributor Screen." lightbox="./media/concepts-single-to-flex/azure-ad-contributor-privileges.png"::: + +4. Under the Members tab, keep the default option of **Assign access to** User, group or service principal and click **Select Members**. Search for your Azure Active Directory App and click on **Select**. + :::image type="content" source="./media/concepts-single-to-flex/azure-ad-review-and-assign.png" alt-text="Review and Assign Screen." lightbox="./media/concepts-single-to-flex/azure-ad-review-and-assign.png"::: + + +## Next steps + +- [Single Server to Flexible migration concepts](./concepts-single-to-flexible.md) +- [Migrate to Flexible server using Azure portal](./how-to-migrate-single-to-flex-portal.md) +- [Migrate to Flexible server using Azure CLI](./how-to-migrate-single-to-flex-cli.md) \ No newline at end of file diff --git a/articles/postgresql/media/concepts-single-to-flex/azure-ad-add-client-secret-description.png b/articles/postgresql/media/concepts-single-to-flex/azure-ad-add-client-secret-description.png new file mode 100644 index 0000000000000..06937f45d1f2c Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/azure-ad-add-client-secret-description.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/azure-ad-add-new-client-secret.png b/articles/postgresql/media/concepts-single-to-flex/azure-ad-add-new-client-secret.png new file mode 100644 index 0000000000000..fdb23cd7ec712 Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/azure-ad-add-new-client-secret.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/azure-ad-add-role-assignment.png b/articles/postgresql/media/concepts-single-to-flex/azure-ad-add-role-assignment.png new file mode 100644 index 0000000000000..0de75eb0993ca Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/azure-ad-add-role-assignment.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/azure-ad-add-secret-screen.png b/articles/postgresql/media/concepts-single-to-flex/azure-ad-add-secret-screen.png new file mode 100644 index 0000000000000..cddd5d3678dda Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/azure-ad-add-secret-screen.png differ diff --git 
a/articles/postgresql/media/concepts-single-to-flex/azure-ad-application-registration.png b/articles/postgresql/media/concepts-single-to-flex/azure-ad-application-registration.png new file mode 100644 index 0000000000000..af2ed6ac23a6e Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/azure-ad-application-registration.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/azure-ad-client-secret-value.png b/articles/postgresql/media/concepts-single-to-flex/azure-ad-client-secret-value.png new file mode 100644 index 0000000000000..239dd3e0ad9ee Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/azure-ad-client-secret-value.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/azure-ad-contributor-privileges.png b/articles/postgresql/media/concepts-single-to-flex/azure-ad-contributor-privileges.png new file mode 100644 index 0000000000000..b3e397371c8e4 Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/azure-ad-contributor-privileges.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/azure-ad-iam-screen.png b/articles/postgresql/media/concepts-single-to-flex/azure-ad-iam-screen.png new file mode 100644 index 0000000000000..4acdbcc5205c5 Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/azure-ad-iam-screen.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/azure-ad-new-registration.png b/articles/postgresql/media/concepts-single-to-flex/azure-ad-new-registration.png new file mode 100644 index 0000000000000..1fe597b1a5e23 Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/azure-ad-new-registration.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/azure-ad-review-and-assign.png b/articles/postgresql/media/concepts-single-to-flex/azure-ad-review-and-assign.png new file mode 100644 index 0000000000000..a88f85e791842 Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/azure-ad-review-and-assign.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/concepts-flow-diagram.png b/articles/postgresql/media/concepts-single-to-flex/concepts-flow-diagram.png new file mode 100644 index 0000000000000..0be52b1cc540f Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/concepts-flow-diagram.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/logical-replication-support.png b/articles/postgresql/media/concepts-single-to-flex/logical-replication-support.png new file mode 100644 index 0000000000000..9ee0a2c91a5ea Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/logical-replication-support.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-azure-portal.png b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-azure-portal.png new file mode 100644 index 0000000000000..91e09bf841165 Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-azure-portal.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-cli-create.png b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-cli-create.png new file mode 100644 index 0000000000000..6f6a2c4eb3660 Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-cli-create.png differ diff --git 
a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-cli-dms-register.png b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-cli-dms-register.png new file mode 100644 index 0000000000000..9ac3b1bc135c5 Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-cli-dms-register.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-cli-dms.png b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-cli-dms.png new file mode 100644 index 0000000000000..91e09bf841165 Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-cli-dms.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-cli-dns-server.png b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-cli-dns-server.png new file mode 100644 index 0000000000000..62091ac4a0fa6 Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-cli-dns-server.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-cli-help.png b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-cli-help.png new file mode 100644 index 0000000000000..d3c032dd09c89 Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-cli-help.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-cli-migration-name.png b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-cli-migration-name.png new file mode 100644 index 0000000000000..3700da0e57373 Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-cli-migration-name.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-client-secret.png b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-client-secret.png new file mode 100644 index 0000000000000..331cc4da5a37f Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-client-secret.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migrate-single-server.png b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migrate-single-server.png new file mode 100644 index 0000000000000..370b3b146be5d Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migrate-single-server.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-click-cutover.png b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-click-cutover.png new file mode 100644 index 0000000000000..fe3463b1da848 Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-click-cutover.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-cutover.png b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-cutover.png new file mode 100644 index 0000000000000..7a18d77b634c1 Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-cutover.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-grid.png b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-grid.png new file mode 100644 index 0000000000000..e58dd0664712b Binary files /dev/null and 
b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-grid.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-monitoring.png b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-monitoring.png new file mode 100644 index 0000000000000..819432896e06f Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-monitoring.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-multiple.png b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-multiple.png new file mode 100644 index 0000000000000..7a5b057e70abe Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-multiple.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-networking.png b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-networking.png new file mode 100644 index 0000000000000..4d5a9fc6f390b Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-networking.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-preview.png b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-preview.png new file mode 100644 index 0000000000000..c7261f372df3d Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-preview.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-private.png b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-private.png new file mode 100644 index 0000000000000..4a47b735add1f Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-private.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-review-tab.png b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-review-tab.png new file mode 100644 index 0000000000000..3b1e6fc05ae24 Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-review-tab.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-review.png b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-review.png new file mode 100644 index 0000000000000..a51c5e2925965 Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-review.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-source.png b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-source.png new file mode 100644 index 0000000000000..a3f483ad32ffe Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-source.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-status-wait.png b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-status-wait.png new file mode 100644 index 0000000000000..e5e8bd3bd9210 Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-status-wait.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-successful-cutover.png 
b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-successful-cutover.png new file mode 100644 index 0000000000000..afb9f750a4a44 Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-successful-cutover.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-target.png b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-target.png new file mode 100644 index 0000000000000..3466633982461 Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-migration-target.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-register-data-migration.png b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-register-data-migration.png new file mode 100644 index 0000000000000..9ac3b1bc135c5 Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-register-data-migration.png differ diff --git a/articles/postgresql/media/concepts-single-to-flex/single-to-flex-setup.png b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-setup.png new file mode 100644 index 0000000000000..c190ee56ed3ee Binary files /dev/null and b/articles/postgresql/media/concepts-single-to-flex/single-to-flex-setup.png differ diff --git a/articles/private-link/create-private-endpoint-bicep.md b/articles/private-link/create-private-endpoint-bicep.md index 77b452431a538..0105bcc1aa3cd 100644 --- a/articles/private-link/create-private-endpoint-bicep.md +++ b/articles/private-link/create-private-endpoint-bicep.md @@ -29,7 +29,7 @@ This Bicep file creates a private endpoint for an instance of Azure SQL Database The Bicep file that this quickstart uses is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/private-endpoint-sql/). -:::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.sql/private-endpoint-sql/azuredeploy.json"::: +:::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.sql/private-endpoint-sql/main.bicep"::: The Bicep file defines multiple Azure resources: diff --git a/articles/private-link/media/private-endpoint-static-ip-powershell/web-app-default-page.png b/articles/private-link/media/private-endpoint-static-ip-powershell/web-app-default-page.png new file mode 100644 index 0000000000000..7f54289e3e590 Binary files /dev/null and b/articles/private-link/media/private-endpoint-static-ip-powershell/web-app-default-page.png differ diff --git a/articles/private-link/private-endpoint-static-ip-powershell.md b/articles/private-link/private-endpoint-static-ip-powershell.md new file mode 100644 index 0000000000000..6239c9428a68d --- /dev/null +++ b/articles/private-link/private-endpoint-static-ip-powershell.md @@ -0,0 +1,303 @@ +--- +title: Create a private endpoint with a static IP address - PowerShell +titleSuffix: Azure Private Link +description: Learn how to create a private endpoint for an Azure service with a static private IP address. +author: asudbring +ms.author: allensu +ms.service: private-link +ms.topic: how-to +ms.date: 05/13/2022 +ms.custom: +--- + +# Create a private endpoint with a static IP address using PowerShell + + A private endpoint IP address is allocated by DHCP in your virtual network by default. In this article, you'll create a private endpoint with a static IP address. + +## Prerequisites + +- An Azure account with an active subscription. 
If you don't already have an Azure account, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An Azure web app with a **PremiumV2-tier** or higher app service plan, deployed in your Azure subscription.
+
+    - For more information and an example, see [Quickstart: Create an ASP.NET Core web app in Azure](../app-service/quickstart-dotnetcore.md).
+
+    - The example webapp in this article is named **myWebApp1979**. Replace the example with your webapp name.
+
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install the Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+
+## Create a resource group
+
+An Azure resource group is a logical container where Azure resources are deployed and managed.
+
+Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup):
+
+```azurepowershell-interactive
+New-AzResourceGroup -Name 'myResourceGroup' -Location 'eastus'
+```
+
+## Create a virtual network and bastion host
+
+A virtual network and subnet are required to host the private IP address for the private endpoint. You'll create a bastion host to connect securely to the virtual machine to test the private endpoint. You'll create the virtual machine in a later section.
+
+In this section, you'll:
+
+- Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork)
+
+- Create subnet configurations for the backend subnet and the bastion subnet with [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig)
+
+- Create a public IP address for the bastion host with [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress)
+
+- Create the bastion host with [New-AzBastion](/powershell/module/az.network/new-azbastion)
+
+```azurepowershell-interactive
+## Configure the back-end subnet. ##
+$subnetConfig = New-AzVirtualNetworkSubnetConfig -Name myBackendSubnet -AddressPrefix 10.0.0.0/24
+
+## Create the Azure Bastion subnet. ##
+$bastsubnetConfig = New-AzVirtualNetworkSubnetConfig -Name AzureBastionSubnet -AddressPrefix 10.0.1.0/24
+
+## Create the virtual network. ##
+$net = @{
+    Name = 'MyVNet'
+    ResourceGroupName = 'myResourceGroup'
+    Location = 'eastus'
+    AddressPrefix = '10.0.0.0/16'
+    Subnet = $subnetConfig, $bastsubnetConfig
+}
+$vnet = New-AzVirtualNetwork @net
+
+## Create the public IP address for the bastion host. ##
+$ip = @{
+    Name = 'myBastionIP'
+    ResourceGroupName = 'myResourceGroup'
+    Location = 'eastus'
+    Sku = 'Standard'
+    AllocationMethod = 'Static'
+    Zone = 1,2,3
+}
+$publicip = New-AzPublicIpAddress @ip
+
+## Create the bastion host. ##
+$bastion = @{
+    ResourceGroupName = 'myResourceGroup'
+    Name = 'myBastion'
+    PublicIpAddress = $publicip
+    VirtualNetwork = $vnet
+}
+New-AzBastion @bastion -AsJob
+```
+
+## Create a private endpoint
+
+An Azure service that supports private endpoints is required to set up the private endpoint and connection to the virtual network. For the examples in this article, we are using an Azure WebApp from the prerequisites. For more information on the Azure services that support a private endpoint, see [Azure Private Link availability](availability.md). 
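+
+Before you continue, you can optionally confirm that the prerequisite web app exists and that its App Service plan tier supports private endpoints. The following sketch is illustrative only and assumes the example names **myWebApp1979** and **myResourceGroup** used elsewhere in this article; substitute your own web app name and resource group.
+
+```azurepowershell-interactive
+## Look up the prerequisite web app (illustrative example names). ##
+$webapp = Get-AzWebApp -ResourceGroupName 'myResourceGroup' -Name 'myWebApp1979'
+
+## ServerFarmId holds the resource ID of the app's App Service plan. ##
+$planSegments = $webapp.ServerFarmId -split '/'
+
+## Display the plan tier. A PremiumV2-tier or higher plan is expected. ##
+(Get-AzAppServicePlan -ResourceGroupName $planSegments[4] -Name $planSegments[-1]).Sku.Tier
+```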
+ +> [!IMPORTANT] +> You must have a previously deployed Azure WebApp to proceed with the steps in this article. See [Prerequisites](#prerequisites) for more information. + +In this section, you'll: + +- Create a private link service connection with [New-AzPrivateLinkServiceConnection](/powershell/module/az.network/new-azprivatelinkserviceconnection). + +- Create the private endpoint static IP configuration with [New-AzPrivateEndpointIpConfiguration](/powershell/module/az.network/new-azprivateendpointipconfiguration). + +- Create the private endpoint with [New-AzPrivateEndpoint](/powershell/module/az.network/new-azprivateendpoint). + +```azurepowershell-interactive +## Place the previously created webapp into a variable. ## +$webapp = Get-AzWebApp -ResourceGroupName myResourceGroup -Name myWebApp1979 + +## Create the private endpoint connection. ## +$pec = @{ + Name = 'myConnection' + PrivateLinkServiceId = $webapp.ID + GroupID = 'sites' +} +$privateEndpointConnection = New-AzPrivateLinkServiceConnection @pec + +## Place the virtual network you created previously into a variable. ## +$vnet = Get-AzVirtualNetwork -ResourceGroupName 'myResourceGroup' -Name 'myVNet' + +## Disable the private endpoint network policy. ## +$vnet.Subnets[0].PrivateEndpointNetworkPolicies = "Disabled" +$vnet | Set-AzVirtualNetwork + +## Create the static IP configuration. ## +$ip = @{ + Name = 'myIPconfig' + GroupId = 'sites' + MemberName = 'sites' + PrivateIPAddress = '10.0.0.10' +} +$ipconfig = New-AzPrivateEndpointIpConfiguration @ip + +## Create the private endpoint. ## +$pe = @{ + ResourceGroupName = 'myResourceGroup' + Name = 'myPrivateEndpoint' + Location = 'eastus' + Subnet = $vnet.Subnets[0] + PrivateLinkServiceConnection = $privateEndpointConnection + IpConfiguration = $ipconfig +} +New-AzPrivateEndpoint @pe + +``` + +## Configure the private DNS zone + +A private DNS zone is used to resolve the DNS name of the private endpoint in the virtual network. For this example, we are using the DNS information for an Azure WebApp, for more information on the DNS configuration of private endpoints, see [Azure Private Endpoint DNS configuration](private-endpoint-dns.md)]. + +In this section, you'll: + +- Create a new private Azure DNS zone with [New-AzPrivateDnsZone](/powershell/module/az.privatedns/new-azprivatednszone) + +- Link the DNS zone to the virtual network you created previously with [New-AzPrivateDnsVirtualNetworkLink](/powershell/module/az.privatedns/new-azprivatednsvirtualnetworklink) + +- Create a DNS zone configuration with [New-AzPrivateDnsZoneConfig](/powershell/module/az.network/new-azprivatednszoneconfig) + +- Create a DNS zone group with [New-AzPrivateDnsZoneGroup](/powershell/module/az.network/new-azprivatednszonegroup) + +```azurepowershell-interactive +## Place the virtual network into a variable. ## +$vnet = Get-AzVirtualNetwork -ResourceGroupName 'myResourceGroup' -Name 'myVNet' + +## Create the private DNS zone. ## +$zn = @{ + ResourceGroupName = 'myResourceGroup' + Name = 'privatelink.azurewebsites.net' +} +$zone = New-AzPrivateDnsZone @zn + +## Create a DNS network link. ## +$lk = @{ + ResourceGroupName = 'myResourceGroup' + ZoneName = 'privatelink.azurewebsites.net' + Name = 'myLink' + VirtualNetworkId = $vnet.Id +} +$link = New-AzPrivateDnsVirtualNetworkLink @lk + +## Configure the DNS zone. ## +$cg = @{ + Name = 'privatelink.azurewebsites.net' + PrivateDnsZoneId = $zone.ResourceId +} +$config = New-AzPrivateDnsZoneConfig @cg + +## Create the DNS zone group. 
## +$zg = @{ + ResourceGroupName = 'myResourceGroup' + PrivateEndpointName = 'myPrivateEndpoint' + Name = 'myZoneGroup' + PrivateDnsZoneConfig = $config +} +New-AzPrivateDnsZoneGroup @zg + +``` + +## Create a test virtual machine + +To verify the static IP address and the functionality of the private endpoint, a test virtual machine connected to your virtual network is required. + +In this section, you'll: + +- Create a login credential for the virtual machine with [Get-Credential](/powershell/module/microsoft.powershell.security/get-credential) + +- Create a network interface for the virtual machine with [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface) + +- Create a virtual machine configuration with [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig), [Set-AzVMOperatingSystem](/powershell/module/az.compute/set-azvmoperatingsystem), [Set-AzVMSourceImage](/powershell/module/az.compute/set-azvmsourceimage), and [Add-AzVMNetworkInterface](/powershell/module/az.compute/add-azvmnetworkinterface) + +- Create the virtual machine with [New-AzVM](/powershell/module/az.compute/new-azvm) + +```azurepowershell-interactive +## Create the credential for the virtual machine. Enter a username and password at the prompt. ## +$cred = Get-Credential + +## Place the virtual network into a variable. ## +$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup + +## Create a network interface for the virtual machine. ## +$nic = @{ + Name = 'myNicVM' + ResourceGroupName = 'myResourceGroup' + Location = 'eastus' + Subnet = $vnet.Subnets[0] +} +$nicVM = New-AzNetworkInterface @nic + +## Create the configuration for the virtual machine. ## +$vm1 = @{ + VMName = 'myVM' + VMSize = 'Standard_DS1_v2' +} +$vm2 = @{ + ComputerName = 'myVM' + Credential = $cred +} +$vm3 = @{ + PublisherName = 'MicrosoftWindowsServer' + Offer = 'WindowsServer' + Skus = '2019-Datacenter' + Version = 'latest' +} +$vmConfig = +New-AzVMConfig @vm1 | Set-AzVMOperatingSystem -Windows @vm2 | Set-AzVMSourceImage @vm3 | Add-AzVMNetworkInterface -Id $nicVM.Id + +## Create the virtual machine. ## +New-AzVM -ResourceGroupName 'myResourceGroup' -Location 'eastus' -VM $vmConfig + +``` + +[!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)] + +## Test connectivity with the private endpoint + +Use the VM you created in the previous step to connect to the webapp across the private endpoint. + +1. Sign in to the [Azure portal](https://portal.azure.com). + +2. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines**. + +3. Select **myVM**. + +4. On the overview page for **myVM**, select **Connect**, and then select **Bastion**. + +5. Enter the username and password that you used when you created the VM. Select **Connect**. + +6. After you've connected, open PowerShell on the server. + +7. Enter `nslookup mywebapp1979.azurewebsites.net`. Replace **mywebapp1979** with the name of the web app that you created earlier. You'll receive a message that's similar to the following: + + ```powershell + Server: UnKnown + Address: 168.63.129.16 + + Non-authoritative answer: + Name: mywebapp1979.privatelink.azurewebsites.net + Address: 10.0.0.10 + Aliases: mywebapp1979.azurewebsites.net + ``` + + A static private IP address of *10.0.0.10* is returned for the web app name. + +8. In the bastion connection to **myVM**, open the web browser. + +9. Enter the URL of your web app, **https://mywebapp1979.azurewebsites.net**. 
+ + If your web app hasn't been deployed, you'll get the following default web app page: + + :::image type="content" source="./media/private-endpoint-static-ip-powershell/web-app-default-page.png" alt-text="Screenshot of the default web app page on a browser." border="true"::: + +10. Close the connection to **myVM**. + +## Next steps + +To learn more about Private Link and Private endpoints, see + +- [What is Azure Private Link](private-link-overview.md) + +- [Private endpoint overview](private-endpoint-overview.md) + + + diff --git a/articles/private-link/toc.yml b/articles/private-link/toc.yml index 89969e99b2d1f..826cff3b477cc 100644 --- a/articles/private-link/toc.yml +++ b/articles/private-link/toc.yml @@ -65,6 +65,10 @@ href: /security/benchmark/azure/baselines/private-link-security-baseline?toc=/azure/private-link/toc.json - name: How-to items: + - name: Private endpoint with static IP address + items: + - name: PowerShell + href: private-endpoint-static-ip-powershell.md - name: Export private endpoint DNS records href: private-endpoint-export-dns.md - name: Manage network policies for private endpoints diff --git a/articles/purview/overview.md b/articles/purview/overview.md index bfbc57f281436..c1577316ef72c 100644 --- a/articles/purview/overview.md +++ b/articles/purview/overview.md @@ -10,7 +10,10 @@ ms.date: 12/06/2021 # What is Microsoft Purview? -Microsoft Purview is a unified data governance service that helps you manage and govern your on-premises, multi-cloud, and software-as-a-service (SaaS) data. Create a holistic, up-to-date map of your data landscape with automated data discovery, sensitive data classification, and end-to-end data lineage. Enable data curators to manage and secure your data estate. Empower data consumers to find valuable, trustworthy data. +Microsoft Purview is a unified data governance service that helps you manage and govern your on-premises, multi-cloud, and software-as-a-service (SaaS) data. Microsoft Purview allows you to: +- Create a holistic, up-to-date map of your data landscape with automated data discovery, sensitive data classification, and end-to-end data lineage. +- Enable data curators to manage and secure your data estate. +- Empower data consumers to find valuable, trustworthy data. :::image type="content" source="./media/overview/high-level-overview.png" alt-text="High-level architecture of Microsoft Purview, showing multi-cloud and on premises sources flowing into Microsoft Purview, and Microsoft Purview's apps (Data Catalog, Map, and Data Estate Insights) allowing data consumers and data curators to view and manage metadata. This metadata is also being ported to external analytics services from Microsoft Purview for more processing." lightbox="./media/overview/high-level-overview-large.png"::: @@ -28,14 +31,14 @@ Microsoft Purview automates data discovery by providing data scanning and classi ## Data Map -Microsoft Purview Data Map provides the foundation for data discovery and effective data governance. Microsoft Purview Data Map is a cloud native PaaS service that captures metadata about enterprise data present in analytics and operation systems on-premises and cloud. Microsoft Purview Data Map is automatically kept up to date with built-in automated scanning and classification system. Business users can configure and use the Microsoft Purview Data Map through an intuitive UI and developers can programmatically interact with the Data Map using open-source Apache Atlas 2.0 APIs. 
+Microsoft Purview Data Map provides the foundation for data discovery and effective data governance. Microsoft Purview Data Map is a cloud native PaaS service that captures metadata about enterprise data present in analytics and operation systems on-premises and cloud. Microsoft Purview Data Map is automatically kept up to date with built-in automated scanning and classification system. Business users can configure and use the Microsoft Purview Data Map through an intuitive UI and developers can programmatically interact with the Data Map using open-source Apache Atlas 2.2 APIs. Microsoft Purview Data Map powers the Microsoft Purview Data Catalog and Microsoft Purview Data Estate Insights as unified experiences within the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/). For more information, see our [introduction to Data Map](concept-elastic-data-map.md). ## Data Catalog -With the Microsoft Purview Data Catalog, business and technical users alike can quickly & easily find relevant data using a search experience with filters based on various lenses like glossary terms, classifications, sensitivity labels and more. For subject matter experts, data stewards and officers, the Microsoft Purview Data Catalog provides data curation features like business glossary management and ability to automate tagging of data assets with glossary terms. Data consumers and producers can also visually trace the lineage of data assets starting from the operational systems on-premises, through movement, transformation & enrichment with various data storage & processing systems in the cloud to consumption in an analytics system like Power BI. +With the Microsoft Purview Data Catalog, business and technical users can quickly and easily find relevant data using a search experience with filters based on lenses such as glossary terms, classifications, sensitivity labels and more. For subject matter experts, data stewards and officers, the Microsoft Purview Data Catalog provides data curation features such as business glossary management and the ability to automate tagging of data assets with glossary terms. Data consumers and producers can also visually trace the lineage of data assets: for example, starting from operational systems on-premises, through movement, transformation & enrichment with various data storage and processing systems in the cloud, to consumption in an analytics system like Power BI. For more information, see our [introduction to search using Data Catalog](how-to-search-catalog.md). ## Data Estate Insights @@ -50,7 +53,7 @@ Traditionally, discovering enterprise data sources has been an organic process b * Because there's no central location to register data sources, users might be unaware of a data source unless they come into contact with it as part of another process. * Unless users know the location of a data source, they can't connect to the data by using a client application. Data-consumption experiences require users to know the connection string or path. * The intended use of the data is hidden to users unless they know the location of a data source's documentation. Data sources and documentation might live in several places and be consumed through different kinds of experiences. -* If users have questions about an information asset, they must locate the expert or team that's responsible for the data and engage them offline. There's no explicit connection between data and the experts that have perspectives on its use. 
+* If users have questions about an information asset, they must locate the expert or team responsible for that data and engage them offline. There's no explicit connection between the data and the experts that understand the data's context. * Unless users understand the process for requesting access to the data source, discovering the data source and its documentation won't help them access the data. ## Discovery challenges for data producers @@ -68,9 +71,9 @@ When such challenges are combined, they present a significant barrier for compan Users who are responsible for ensuring the security of their organization's data may have any of the challenges listed above as data consumers and producers, and the following extra challenges: -* An organization's data is constantly growing, stored, and shared in new directions. The task of discovering, protecting, and governing your sensitive data is one that never ends. You want to make sure that your organization's content is being shared with the correct people, applications, and with the correct permissions. -* Understanding the risk levels in your organization's data requires diving deep into your content, looking for keywords, RegEx patterns, and sensitive data types. Sensitive data types can include Credit Card numbers, Social Security numbers, or Bank Account numbers, to name a few. You constantly monitor all data sources for sensitive content, as even the smallest amount of data loss can be critical to your organization. -* Ensuring that your organization continues to comply with corporate security policies is a challenging task as your content grows and changes, and as those requirements and policies are updated for changing digital realities. Security administrators are often tasked with ensuring data security in the quickest time possible. +* An organization's data is constantly growing and being stored and shared in new directions. The task of discovering, protecting, and governing your sensitive data is one that never ends. You need to ensure that your organization's content is being shared with the correct people, applications, and with the correct permissions. +* Understanding the risk levels in your organization's data requires diving deep into your content, looking for keywords, RegEx patterns, and sensitive data types. For example, sensitive data types might include Credit Card numbers, Social Security numbers or Bank Account numbers. You must constantly monitor all data sources for sensitive content, as even the smallest amount of data loss can be critical to your organization. +* Ensuring that your organization continues to comply with corporate security policies is a challenging task as your content grows and changes, and as those requirements and policies are updated for changing digital realities. Security administrators need to ensure data security in the quickest time possible. ## Microsoft Purview advantages @@ -78,9 +81,9 @@ Microsoft Purview is designed to address the issues mentioned in the previous se Microsoft Purview provides a cloud-based service into which you can register data sources. During registration, the data remains in its existing location, but a copy of its metadata is added to Microsoft Purview, along with a reference to the data source location. The metadata is also indexed to make each data source easily discoverable via search and understandable to the users who discover it. -After you register a data source, you can then enrich its metadata. 
Either the user who registered the data source or another user in the enterprise adds the metadata. Any user can annotate a data source by providing descriptions, tags, or other metadata for requesting data source access. This descriptive metadata supplements the structural metadata, such as column names and data types, that's registered from the data source. +After you register a data source, you can then enrich its metadata. Either the user who registered the data source or another user in the enterprise can add additional metadata. Any user can annotate a data source by providing descriptions, tags, or other metadata for requesting data source access. This descriptive metadata supplements the structural metadata, such as column names and data types that are registered from the data source. -Discovering and understanding data sources and their use is the primary purpose of registering the sources. Enterprise users might need data for business intelligence, application development, data science, or any other task where the right data is required. They use the data catalog discovery experience to quickly find data that matches their needs, understand the data to evaluate its fitness for the purpose, and consume the data by opening the data source in their tool of choice. +Discovering and understanding data sources and their use is the primary purpose of registering the sources. Enterprise users might need data for business intelligence, application development, data science, or any other task where the correct data is required. They can use the data catalog discovery experience to quickly find data that matches their needs, understand the data to evaluate its fitness for purpose, and consume the data by opening the data source in their tool of choice. At the same time, users can contribute to the catalog by tagging, documenting, and annotating data sources that have already been registered. They can also register new data sources, which are then discovered, understood, and consumed by the community of catalog users. diff --git a/articles/purview/register-scan-azure-sql-database.md b/articles/purview/register-scan-azure-sql-database.md index 7b766183092bb..afd7f7cba5119 100644 --- a/articles/purview/register-scan-azure-sql-database.md +++ b/articles/purview/register-scan-azure-sql-database.md @@ -367,13 +367,14 @@ Scans can be managed or run again on completion :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-full-inc.png" alt-text="full or incremental scan."::: -## Lineage(Preview) +## Lineage (Preview) + Microsoft Purview supports lineage from Azure SQL Database. At the time of setting up a scan, enable lineage extraction toggle button to extract lineage. ### Prerequisites for setting up scan with Lineage extraction -1. Follow steps under [authentication for a scan using Managed Identity](#authentication-for-a-scan) section to authorize Microsoft Purview scan your Azure SQL DataBase +1. Follow steps under [authentication for a scan using Managed Identity](#authentication-for-a-scan) section to authorize Microsoft Purview scan your Azure SQL Database 2. Sign in to Azure SQL Database with Azure AD account and assign proper permission (for example: db_owner) to Purview Managed identity. Use below example SQL syntax to create user and grant permission by replacing 'purview-account' with your Account name: @@ -403,18 +404,18 @@ Microsoft Purview supports lineage from Azure SQL Database. 
At the time of setting up a scan, enable lineage extraction toggle button to extract lineage.
 
 ### Prerequisites for setting up scan with Lineage extraction
 
-1. Follow steps under [authentication for a scan using Managed Identity](#authentication-for-a-scan) section to authorize Microsoft Purview scan your Azure SQL DataBase
+1. Follow the steps under the [authentication for a scan using Managed Identity](#authentication-for-a-scan) section to authorize Microsoft Purview to scan your Azure SQL Database.
 
 2. Sign in to Azure SQL Database with Azure AD account and assign proper permission (for example: db_owner) to Purview Managed identity. Use below example SQL syntax to create user and grant permission by replacing 'purview-account' with your Account name:
 
@@ -403,18 +404,18 @@ Microsoft Purview supports lineage from Azure SQL Database.
 
 ### Search Azure SQL Database assets and view runtime lineage
 
-You can [browse data catalog](how-to-browse-catalog.md) or [search data catalog](how-to-search-catalog.md) to view asset details for Azure SQL Database. Below steps describe how-to view runtime lineage details
+You can [browse data catalog](how-to-browse-catalog.md) or [search data catalog](how-to-search-catalog.md) to view asset details for Azure SQL Database. The following steps describe how to view runtime lineage details.
 
-1. Go to asset -> lineage tab, you can see the asset lineage when applicable. Refer to the [supported capabilities](#supported-capabilities) section on the supported Azure SQL Database lineage scenarios. For more information about lineage in general, see [data lineage](concept-data-lineage.md) and [lineage user guide](catalog-lineage-user-guide.md)
+1. Go to the asset -> lineage tab to see the asset lineage when applicable. Refer to the [supported capabilities](#supported-capabilities) section for the supported Azure SQL Database lineage scenarios. For more information about lineage in general, see [data lineage](concept-data-lineage.md) and [lineage user guide](catalog-lineage-user-guide.md).
 
 :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-lineage.png" alt-text="Screenshot that shows the screen with lineage from stored procedures.":::
 
-2. Go to stored procedure asset -> Properties -> Related assets to see the latest run details of stored procedures
+2. Go to stored procedure asset -> Properties -> Related assets to see the latest run details of stored procedures.
 
 :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-stored-procedure-properties.png" alt-text="Screenshot that shows the screen with stored procedure properties containing runs.":::
 
-3. Select the stored procedure hyperlink next to Runs to see Azure SQL Stored Procedure Run Overview. Go to properties tab to see enhanced run time information from stored procedure. For example: executedTime, rowcount, Client Connection, and so on
+3. Select the stored procedure hyperlink next to Runs to see Azure SQL Stored Procedure Run Overview. Go to the properties tab to see enhanced runtime information from the stored procedure. For example: executedTime, rowcount, Client Connection, and so on.
 
 :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-stored-procedure-run-properties.png" alt-text="Screenshot that shows the screen with stored procedure run properties."lightbox="media/register-scan-azure-sql-database/register-scan-azure-sql-db-stored-procedure-run-properties-expanded.png"::: diff --git a/articles/service-fabric/service-fabric-quickstart-containers-linux.md b/articles/service-fabric/service-fabric-quickstart-containers-linux.md index 0907112eac5fb..3a3d76d112aac 100644 --- a/articles/service-fabric/service-fabric-quickstart-containers-linux.md +++ b/articles/service-fabric/service-fabric-quickstart-containers-linux.md @@ -2,7 +2,7 @@ title: Create a Linux container app on Service Fabric in Azure description: In this quickstart, you will build a Docker image with your application, push the image to a container registry, and then deploy your container to a Service Fabric cluster. 
ms.topic: quickstart -ms.date: 07/22/2019 +ms.date: 05/12/2022 ms.custom: mvc, devx-track-azurecli, mode-other --- # Quickstart: Deploy Linux containers to Service Fabric @@ -42,6 +42,11 @@ cd service-fabric-containers/Linux/container-tutorial/Voting To deploy the application to Azure, you need a Service Fabric cluster to run the application. The following commands create a five-node cluster in Azure. The commands also create a self-signed certificate, adds it to a key vault and downloads the certificate locally. The new certificate is used to secure the cluster when it deploys and is used to authenticate clients. +If you wish, you can modify the variable values to your preference. For example, westus instead of eastus for the location. + +> [!NOTE] +> Key vault names should be universally unique, as they are accessed as https://{vault-name}.vault.azure.net. +> ```azurecli #!/bin/bash diff --git a/articles/site-recovery/azure-to-azure-common-questions.md b/articles/site-recovery/azure-to-azure-common-questions.md index 8f1d004d285a6..35832df7f0272 100644 --- a/articles/site-recovery/azure-to-azure-common-questions.md +++ b/articles/site-recovery/azure-to-azure-common-questions.md @@ -331,7 +331,7 @@ Yes, you can create a Capacity Reservation for your VM SKU in the disaster recov ### Why should I reserve capacity using Capacity Reservation at the destination location? -While Site Recovery makes a best effort to ensure that capacity is available in the recovery region, it does not guarantee the same. Site Recovery's best effort is backed by a 2-hour RTO SLA. But if you require further assurance and _guaranteed compute capacity,_ then we recommend you to purchase [Capacity Reservations](https://aka.ms/on-demand-ca.pacity-reservations-docs) +While Site Recovery makes a best effort to ensure that capacity is available in the recovery region, it does not guarantee the same. Site Recovery's best effort is backed by a 2-hour RTO SLA. But if you require further assurance and _guaranteed compute capacity,_ then we recommend you to purchase [Capacity Reservations](https://aka.ms/on-demand-capacity-reservations-docs) ### Does Site Recovery work with reserved instances? @@ -353,4 +353,4 @@ Yes, both encryption in transit and [encryption at rest in Azure](../storage/com - [Review Azure-to-Azure support requirements](azure-to-azure-support-matrix.md). - [Set up Azure-to-Azure replication](azure-to-azure-tutorial-enable-replication.md). -- If you have questions after reading this article, post them on the [Microsoft Q&A question page for Azure Recovery Services](/answers/topics/azure-site-recovery.html). \ No newline at end of file +- If you have questions after reading this article, post them on the [Microsoft Q&A question page for Azure Recovery Services](/answers/topics/azure-site-recovery.html). 
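+
+If you decide to reserve capacity in the recovery region as recommended above, the following Azure PowerShell sketch shows the general shape of creating a capacity reservation. It assumes the `New-AzCapacityReservationGroup` and `New-AzCapacityReservation` cmdlets from a recent Az.Compute module; the resource group, region, VM size, and quantity shown are illustrative only.
+
+```azurepowershell-interactive
+## Create a capacity reservation group in the recovery region (illustrative values). ##
+New-AzCapacityReservationGroup -ResourceGroupName 'myRecoveryRG' -Name 'myReservationGroup' -Location 'westus2'
+
+## Reserve capacity for the VM SKU used by the protected VMs. ##
+New-AzCapacityReservation -ResourceGroupName 'myRecoveryRG' -ReservationGroupName 'myReservationGroup' `
+    -Name 'myReservation' -Location 'westus2' -Sku 'Standard_D2s_v3' -CapacityToReserve 2
+```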
diff --git a/articles/site-recovery/index.yml b/articles/site-recovery/index.yml index eb25ce0b5f0c2..a68c3bd992e4f 100644 --- a/articles/site-recovery/index.yml +++ b/articles/site-recovery/index.yml @@ -10,8 +10,8 @@ metadata: ms.service: site-recovery ms.topic: landing-page ms.collection: collection - author: JYOTHIRMAISURI - ms.author: v-jysur + author: rishjai-msft + ms.author: rishjai ms.date: 11/19/2019 landingContent: @@ -35,8 +35,8 @@ landingContent: links: - text: Disaster recovery for Azure VMs url: ./azure-to-azure-tutorial-enable-replication.md - - text: Move Azure VMs to another region - url: ./azure-to-azure-tutorial-migrate.md + - text: Move Azure VMs to another region (Resource Mover) + url: ../resource-mover/tutorial-move-region-virtual-machines.md - linkListType: learn links: - text: Azure VM disaster recovery architecture diff --git a/articles/spring-cloud/how-to-prepare-app-deployment.md b/articles/spring-cloud/how-to-prepare-app-deployment.md index abd6f19ad4627..9c9ea198c4d3f 100644 --- a/articles/spring-cloud/how-to-prepare-app-deployment.md +++ b/articles/spring-cloud/how-to-prepare-app-deployment.md @@ -142,7 +142,7 @@ For details, see the [Java runtime and OS versions](./faq.md?pivots=programming- To prepare an existing Spring Boot application for deployment to Azure Spring Cloud, include the Spring Boot and Spring Cloud dependencies in the application POM file as shown in the following sections. -Azure Spring Cloud will support the latest Spring Boot or Spring Cloud release within one month after it’s been released. You can get supported Spring Boot versions from [Spring Boot Releases](https://github.com/spring-projects/spring-boot/wiki/Supported-Versions#releases) and Spring Cloud versions from [Spring Cloud Releases](https://github.com/spring-cloud/spring-cloud-release/wiki). +Azure Spring Cloud will support the latest Spring Boot or Spring Cloud major version starting from 30 days after its release. The latest minor version will be supported as soon as it is released. You can get supported Spring Boot versions from [Spring Boot Releases](https://github.com/spring-projects/spring-boot/wiki/Supported-Versions#releases) and Spring Cloud versions from [Spring Cloud Releases](https://github.com/spring-cloud/spring-cloud-release/wiki). The following table lists the supported Spring Boot and Spring Cloud combinations: diff --git a/articles/storage/common/storage-use-azcopy-v10.md b/articles/storage/common/storage-use-azcopy-v10.md index 36b0e5e8822ba..3583a64be0016 100644 --- a/articles/storage/common/storage-use-azcopy-v10.md +++ b/articles/storage/common/storage-use-azcopy-v10.md @@ -4,7 +4,7 @@ description: AzCopy is a command-line utility that you can use to copy data to, author: normesta ms.service: storage ms.topic: how-to -ms.date: 11/15/2021 +ms.date: 05/11/2022 ms.author: normesta ms.subservice: common ms.custom: contperf-fy21q2 @@ -33,6 +33,8 @@ First, download the AzCopy V10 executable file to any directory on your computer These files are compressed as a zip file (Windows and Mac) or a tar file (Linux). To download and decompress the tar file on Linux, see the documentation for your Linux distribution. +For detailed information on AzCopy releases see the [AzCopy release page](https://github.com/Azure/azure-storage-azcopy/releases). + > [!NOTE] > If you want to copy data to and from your [Azure Table storage](../tables/table-storage-overview.md) service, then install [AzCopy version 7.3](https://aka.ms/downloadazcopynet). 
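+
+If you prefer to script the download on Windows, a minimal PowerShell sketch follows. The `aka.ms/downloadazcopy-v10-windows` short link and the destination folder are assumptions; adjust them for your environment.
+
+```powershell
+# Download the AzCopy v10 zip file for Windows (assumed short link).
+Invoke-WebRequest -Uri 'https://aka.ms/downloadazcopy-v10-windows' -OutFile "$env:TEMP\azcopy.zip"
+
+# Extract the archive and copy azcopy.exe to a folder of your choice.
+Expand-Archive -Path "$env:TEMP\azcopy.zip" -DestinationPath "$env:TEMP\azcopy" -Force
+New-Item -ItemType Directory -Path 'C:\Tools' -Force | Out-Null
+Get-ChildItem "$env:TEMP\azcopy" -Recurse -Filter 'azcopy.exe' | Copy-Item -Destination 'C:\Tools\azcopy.exe'
+```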
diff --git a/articles/synapse-analytics/backuprestore/restore-sql-pool.md b/articles/synapse-analytics/backuprestore/restore-sql-pool.md index 398b34c525481..6cda69bbe24a4 100644 --- a/articles/synapse-analytics/backuprestore/restore-sql-pool.md +++ b/articles/synapse-analytics/backuprestore/restore-sql-pool.md @@ -140,7 +140,7 @@ Steps: 9. Verify that the restored dedicated SQL pool (formerly SQL DW) is online. -10. If the desired destination is a Synapse Workspace, uncomment the code to perform the additional restore step. +10. **If the desired destination is a Synapse Workspace, uncomment the code to perform the additional restore step.** 1. Create a restore point for the newly created data warehouse. 2. Retrieve the last restore point created by using the "Select -Last 1" syntax. 3. Perform the restore to the desired Synapse workspace. @@ -164,12 +164,12 @@ Get-AzSubscription Select-AzSubscription -SubscriptionName $SourceSubscriptionName # list all restore points -Get-AzSynapseSqlPoolRestorePoint -ResourceGroupName $ResourceGroupName -WorkspaceName $WorkspaceName -Name $SQLPoolName +Get-AzSynapseSqlPoolRestorePoint -ResourceGroupName $SourceResourceGroupName -WorkspaceName $SourceWorkspaceName -Name $SourceSQLPoolName # Pick desired restore point using RestorePointCreationDate "xx/xx/xxxx xx:xx:xx xx" $PointInTime="" # Get the specific SQL pool to restore -$SQLPool = Get-AzSynapseSqlPool -ResourceGroupName $ResourceGroupName -WorkspaceName $WorkspaceName -Name $SQLPoolName +$SQLPool = Get-AzSynapseSqlPool -ResourceGroupName $SourceResourceGroupName -WorkspaceName $SourceWorkspaceName -Name $SourceSQLPoolName # Transform Synapse SQL pool resource ID to SQL database ID because currently the restore command only accepts the SQL database ID format. $DatabaseID = $SQLPool.Id -replace "Microsoft.Synapse", "Microsoft.Sql" ` -replace "workspaces", "servers" ` @@ -180,7 +180,7 @@ Select-AzSubscription -SubscriptionName $TargetSubscriptionName # Restore database from a desired restore point of the source database to the target server in the desired subscription $RestoredDatabase = Restore-AzSqlDatabase –FromPointInTimeBackup –PointInTime $PointInTime -ResourceGroupName $TargetResourceGroupName ` - -ServerName $TargetServerName -TargetDatabaseName $TargetDatabaseName –ResourceId $Database.ID + -ServerName $TargetServerName -TargetDatabaseName $TargetDatabaseName –ResourceId $DatabaseID # Verify the status of restored database $RestoredDatabase.status @@ -198,7 +198,11 @@ $RestoredDatabase.status ``` - +## Troubleshooting +A restore operation can result in a deployment failure based on a "RequestTimeout" exception. +![Screenshot from resource group deployments dialog of a timeout exception.](../media/sql-pools/restore-sql-pool-troubleshooting-01.png) +This timeout can be ignored. Review the dedicated SQL pool blade in the portal and it may still have status of "Restoring" and eventually will transition to "Online". 
+![Screenshot of SQL pool dialog with the status that shows restoring.](../media/sql-pools/restore-sql-pool-troubleshooting-02.png) ## Next Steps diff --git a/articles/synapse-analytics/media/sql-pools/restore-sql-pool-troubleshooting-01.png b/articles/synapse-analytics/media/sql-pools/restore-sql-pool-troubleshooting-01.png new file mode 100644 index 0000000000000..a907c0135acf8 Binary files /dev/null and b/articles/synapse-analytics/media/sql-pools/restore-sql-pool-troubleshooting-01.png differ diff --git a/articles/synapse-analytics/media/sql-pools/restore-sql-pool-troubleshooting-02.png b/articles/synapse-analytics/media/sql-pools/restore-sql-pool-troubleshooting-02.png new file mode 100644 index 0000000000000..4e0e51f372c88 Binary files /dev/null and b/articles/synapse-analytics/media/sql-pools/restore-sql-pool-troubleshooting-02.png differ diff --git a/articles/synapse-analytics/sql-data-warehouse/backup-and-restore.md b/articles/synapse-analytics/sql-data-warehouse/backup-and-restore.md index 2a724a7095325..37927b0831b7d 100644 --- a/articles/synapse-analytics/sql-data-warehouse/backup-and-restore.md +++ b/articles/synapse-analytics/sql-data-warehouse/backup-and-restore.md @@ -1,14 +1,14 @@ --- title: Backup and restore - snapshots, geo-redundant description: Learn how backup and restore works in Azure Synapse Analytics dedicated SQL pool. Use backups to restore your data warehouse to a restore point in the primary region. Use geo-redundant backups to restore to a different geographical region. -author: joannapea -manager: craigg +author: realAngryAnalytics +manager: joannapea ms.service: synapse-analytics ms.topic: conceptual ms.subservice: sql-dw -ms.date: 11/13/2020 -ms.author: joanpo -ms.reviewer: igorstan +ms.date: 05/04/2022 +ms.author: stevehow +ms.reviewer: joanpo ms.custom: seo-lt-2019" --- @@ -58,9 +58,6 @@ The following lists details for restore point retention periods: When you drop a dedicated SQL pool, a final snapshot is created and saved for seven days. You can restore the dedicated SQL pool to the final restore point created at deletion. If the dedicated SQL pool is dropped in a paused state, no snapshot is taken. In that scenario, make sure to create a user-defined restore point before dropping the dedicated SQL pool. -> [!IMPORTANT] -> If you delete the server/workspace hosting a dedicated SQL pool, all databases that belong to the server/workspace are also deleted and cannot be recovered. You cannot restore a deleted server. - ## Geo-backups and disaster recovery A geo-backup is created once per day to a [paired data center](../../availability-zones/cross-region-replication-azure.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json). The RPO for a geo-restore is 24 hours. You can restore the geo-backup to a server in any other region where dedicated SQL pool is supported. A geo-backup ensures you can restore data warehouse in case you cannot access the restore points in your primary region. @@ -94,11 +91,11 @@ You can either keep the restored data warehouse and the current one, or delete o To restore a data warehouse, see [Restore a dedicated SQL pool](sql-data-warehouse-restore-points.md#create-user-defined-restore-points-through-the-azure-portal). -To restore a deleted or paused data warehouse, you can [create a support ticket](sql-data-warehouse-get-started-create-support-ticket.md). 
+To restore a deleted data warehouse, see [Restore a deleted database](sql-data-warehouse-restore-deleted-dw.md), or if the entire server was deleted, see [Restore a data warehouse from a deleted server](sql-data-warehouse-restore-from-deleted-server.md). ## Cross subscription restore -If you need to directly restore across subscription, vote for this capability [here](https://feedback.azure.com/d365community/idea/dea9ea22-0a25-ec11-b6e6-000d3a4f07b8). Restore to a different server and ['Move'](../../azure-resource-manager/management/move-resource-group-and-subscription.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) the server across subscriptions to perform a cross subscription restore. +You can perform a cross-subscription restore by following the guidance [here](sql-data-warehouse-restore-active-paused-dw.md#restore-an-existing-dedicated-sql-pool-formerly-sql-dw-to-a-different-subscription-through-powershell). ## Geo-redundant restore @@ -107,6 +104,10 @@ You can [restore your dedicated SQL pool](sql-data-warehouse-restore-from-geo-ba > [!NOTE] > To perform a geo-redundant restore you must not have opted out of this feature. +## Support process + +You can [submit a support ticket](sql-data-warehouse-get-started-create-support-ticket.md) through the Azure portal for Azure Synapse Analytics. + ## Next steps For more information about restore points, see [User-defined restore points](sql-data-warehouse-restore-points.md) diff --git a/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-restore-from-deleted-server.md b/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-restore-from-deleted-server.md index 3808ec837a7f8..a23cc5fd62ae7 100644 --- a/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-restore-from-deleted-server.md +++ b/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-restore-from-deleted-server.md @@ -41,7 +41,7 @@ In this article, you learn how to restore a dedicated SQL pool (formerly SQL DW) ```powershell $SubscriptionID="" $ResourceGroupName="" -$ServereName="" # Without database.windows.net +$ServerName="" # Without database.windows.net $DatabaseName="" $TargetServerName="" $TargetDatabaseName="" diff --git a/articles/virtual-desktop/TOC.yml b/articles/virtual-desktop/TOC.yml index 9435fb48b5be2..42ee0199137d8 100644 --- a/articles/virtual-desktop/TOC.yml +++ b/articles/virtual-desktop/TOC.yml @@ -227,6 +227,8 @@ href: configure-host-pool-load-balancing.md - name: Personal desktop assignment type href: configure-host-pool-personal-desktop-assignment-type.md + - name: Move resources between regions + href: move-resources.md - name: Use Azure Virtual Desktop license href: apply-windows-license.md - name: Customize session host image @@ -348,7 +350,7 @@ - name: Connection quality issues href: troubleshoot-connection-quality.md - name: Sign-in screen is blank - href: https://docs.microsoft.com/troubleshoot/azure/virtual-desktop/windows-virtual-desktop-blank-screen + href: /troubleshoot/azure/virtual-desktop/windows-virtual-desktop-blank-screen - name: Reference items: - name: Desktop virtualization - Microsoft Cloud Adoption Framework for Azure diff --git a/articles/virtual-desktop/move-resources.md b/articles/virtual-desktop/move-resources.md new file mode 100644 index 0000000000000..1fc72a433cfcd --- /dev/null +++ b/articles/virtual-desktop/move-resources.md @@ -0,0 +1,94 @@ +--- +title: Move Azure Virtual Desktop resources between regions - 
Azure +description: How to move Azure Virtual Desktop resources between regions. +author: Heidilohr +ms.topic: how-to +ms.date: 05/13/2022 +ms.author: helohr +manager: femila +--- +# Move Azure Virtual Desktop resources between regions + +In this article, we'll tell you how to move Azure Virtual Desktop resources between Azure regions. + +## Important information + +When you move Azure Virtual Desktop resources between regions, these are some things you should keep in mind: + +- When exporting resources, you must move them as a set. All resources associated with a specific host pool have to stay together. A host pool and its associated app groups need to be in the same region. + +- Workspaces and their associated app groups also need to be in the same region. + +- All resources to be moved have to be in the same resource group. Template exports require having resources in the same group, so if you want them to be in a different location, you'll need to modify the exported template to change the location of its resources. + +- Once you're done moving your resources to a new region, you must delete the original resources. The resource IDs of your resources won't change during the move process, so there will be a name conflict with your old resources if you don't delete them. + +- Existing session hosts attached to a host pool that you move will stop working. You'll need to recreate the session hosts in the new region. + +## Export a template + +The first step to move your resources is to create a template that contains everything you want to move to the new region. + +To export a template: + +1. In the Azure portal, go to **Resource Groups**, then select the resource group that contains the resources you want to move. + +2. Once you've selected the resource group, go to **Overview** > **Resources** and select all the resources you want to move. + +3. Select the **...** button in the upper right-hand corner of the **Resources** tab. Once the drop-down menu opens, select **Export template**. + +4. Select **Download** to download a local copy of the generated template. + +5. Right-click the zip file and select **Extract All**. + +## Modify the exported template + +Next, you'll need to modify the template to include the region you're moving your resources to. + +To modify the template you exported: + +1. Open the template.json file you extracted from the zip folder in a text editor of your choice, such as Notepad. + +2. In each resource inside the template file, find the "location" property and change it to the location you want to move your resources to. For example, if your deployment's currently in the East US region but you want to move it to the West US region, you'd change the "eastus" location to "westus." Learn more about which Azure regions you can use at [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/#geographies). + +3. For each host pool, remove the "publicNetworkAccess" parameter, if present. + +## Delete original resources + +Once you have the template ready, you'll need to delete the original resources to prevent name conflicts. + +To delete the original resources: + +1. Go back to the **Resources** tab mentioned in [Export a template](#export-a-template) and select all the resources you exported to the template. + +2. Next, select the **...** button again, then select **Delete** from the drop-down menu. + +3. If you see a message asking you to confirm the deletion, select **Confirm**. + +4. Wait a few minutes for the resources to finish deleting. 
Once you're done, they should disappear from the resource list. + +## Deploy the modified template + +Finally, you'll need to deploy your modified template in the new region. + +To deploy the template: + +1. In the Azure portal, search for and select **Deploy a custom template**. +2. In the custom deployment menu, select **Build your own template in the editor**. +3. Next, select **Load file** and upload your modified template file. + + >[!NOTE] + > Make sure to upload the template.json file, not the parameters.json file. + +4. When you're done uploading the template, select **Save**. +5. In the next menu, select **Review + create**. +6. Under **Instance details**, make sure the **Region** shows the region you changed the location to in [Modify the exported template](#modify-the-exported-template). If not, select the correct region from the drop-down menu. +7. If everything looks correct, select **Create**. +8. Wait a few minutes for the template to deploy. Once it's finished, the resources should appear in your resource list. + +## Next steps + +- Find out which Azure regions are currently available at [Azure Geographies](https://azure.microsoft.com/global-infrastructure/geographies/#overview). + +- See [our Azure Resource Manager templates for Azure Virtual Desktop](https://github.com/Azure/RDS-Templates/tree/master/wvd-templates) for more templates you can use in your deployments after you move your resources. + diff --git a/articles/virtual-network/nat-gateway/faq.yml b/articles/virtual-network/nat-gateway/faq.yml index ca26061566a06..82185d1070845 100644 --- a/articles/virtual-network/nat-gateway/faq.yml +++ b/articles/virtual-network/nat-gateway/faq.yml @@ -22,6 +22,10 @@ sections: /30 (4 addresses), /31 (2 addresses). + - question: Can I use a custom IP prefix (BYOIP) with Virtual Network NAT gateway? + answer: | + Yes, you can use a custom IP prefix (BYOIP) with your NAT gateway resource. See [Custom IP address prefix (BYOIP)](/azure/virtual-network/ip-services/custom-ip-address-prefix) to learn more. + - question: Can a zone-redundant public IP address be attached to a NAT gateway? answer: | A zone-redundant public IP address can be attached to a non-zonal NAT gateway only. A NAT gateway designated to a specific zone must be attached to a public IP address from the same zone. diff --git a/articles/virtual-network/nat-gateway/nat-overview.md b/articles/virtual-network/nat-gateway/nat-overview.md index 9f6070cb7cedc..25fa5770216ae 100644 --- a/articles/virtual-network/nat-gateway/nat-overview.md +++ b/articles/virtual-network/nat-gateway/nat-overview.md @@ -32,6 +32,8 @@ Virtual Network NAT is a fully managed and distributed service. It doesn't depen ### Scalability +Virtual Network NAT is scaled out from creation. There isn't a ramp up or scale-out operation required. Azure manages the operation of Virtual Network NAT for you. A NAT gateway always has multiple fault domains and can sustain multiple failures without service outage. + A NAT gateway resource can be associated to a subnet and can be used by all compute resources in that subnet. All subnets in a virtual network can use the same resource. When a NAT gateway is associated to a public IP prefix, it automatically scales to the number of IP addresses needed for outbound. ### Performance @@ -40,35 +42,37 @@ Virtual Network NAT is a software defined networking service. A NAT gateway won' ## Virtual Network NAT basics -A NAT gateway can be created in a specific availability zone. 
Redundancy is built in within the specified zone. Virtual Network NAT is non-zonal by default. A non-zonal Virtual Network NAT isn't associated to a specific zone and is assigned to a specific zone by Azure. A NAT gateway can be isolated in a specific zone when you create [availability zones](../../availability-zones/az-overview.md) scenarios. This deployment is called a zonal deployment. +* A NAT gateway can be created in a specific availability zone. Redundancy is built in within the specified zone. Virtual Network NAT is non-zonal by default. A non-zonal Virtual Network NAT isn't associated to a specific zone and is assigned to a specific zone by Azure. A NAT gateway can be isolated in a specific zone when you create [availability zones](../../availability-zones/az-overview.md) scenarios. This deployment is called a zonal deployment. -Virtual Network NAT is scaled out from creation. There isn't a ramp up or scale-out operation required. Azure manages the operation of Virtual Network NAT for you. A NAT gateway always has multiple fault domains and can sustain multiple failures without service outage. +* Outbound connectivity can be defined for each subnet with a NAT gateway. Multiple subnets within the same virtual network can have different NAT gateways associated. Multiple subnets within the same virtual network can use the same NAT gateway. A subnet is configured by specifying which NAT gateway resource to use. All outbound traffic for the subnet is processed by the NAT gateway without any customer configuration. A NAT gateway takes precedence over other outbound scenarios and replaces the default Internet destination of a subnet. + +* Presence of custom UDRs for virtual appliances and ExpressRoute override NAT gateway for directing internet bound traffic (route to the 0.0.0.0/0 address prefix). See [Troubleshooting NAT gateway](./troubleshoot-nat.md#virtual-appliance-udrs-and-vpn-expressroute-override-nat-gateway-for-routing-outbound-traffic) to learn more. -* Outbound connectivity can be defined for each subnet with a NAT gateway. Multiple subnets within the same virtual network can have different NAT gateways associated. Multiple subnets within the same virtual network can use the same NAT gateway. A subnet is configured by specifying which NAT gateway resource to use. All outbound traffic for the subnet is processed by the NAT gateway without any customer configuration. A NAT gateway takes precedence over other outbound scenarios and replaces the default Internet destination of a subnet +* Virtual Network NAT supports TCP and UDP protocols only. ICMP isn't supported. -* Presence of custom UDRs for virtual appliances and ExpressRoute override NAT gateway for directing internet bound traffic (route to the 0.0.0.0/0 address prefix). See [Troubleshooting NAT gateway](./troubleshoot-nat.md#virtual-appliance-udrs-and-vpn-expressroute-override-nat-gateway-for-routing-outbound-traffic) to learn more +* A NAT gateway resource can use up to 16 IP addresses in any combination of: -* Virtual Network NAT supports TCP and UDP protocols only. ICMP isn't supported + * Public IP addresses -* A NAT gateway resource can use a: + * Public IP prefixes - * Public IP + * Custom IP prefixes (BYOIP), to learn more, see [Custom IP address prefix (BYOIP)](/azure/virtual-network/ip-services/custom-ip-address-prefix) - * Public IP prefix +* Virtual Network NAT is compatible with standard SKU public IP addresses or public IP prefix resources or a combination of both. 
You can use a public IP prefix directly or distribute the public IP addresses of the prefix across multiple NAT gateway resources. The NAT gateway will groom all traffic to the range of IP addresses of the prefix. -* Virtual Network NAT is compatible with standard SKU public IP addresses or public IP prefix resources or a combination of both. You can use a public IP prefix directly or distribute the public IP addresses of the prefix across multiple NAT gateway resources. The NAT gateway will groom all traffic to the range of IP addresses of the prefix. Basic resources, such as basic load balancer or basic public IPs aren't compatible with Virtual Network NAT. Basic resources must be placed on a subnet not associated to a NAT gateway. Basic load balancer and basic public IP can be upgraded to standard to work with a NAT gateway +* Basic resources, such as basic load balancer or basic public IPs aren't compatible with Virtual Network NAT. Basic resources must be placed on a subnet not associated to a NAT gateway. Basic load balancer and basic public IP can be upgraded to standard to work with a NAT gateway -* To upgrade a basic load balancer to standard, see [Upgrade a public basic Azure Load Balancer](../../load-balancer/upgrade-basic-standard.md) + * To upgrade a basic load balancer to standard, see [Upgrade a public basic Azure Load Balancer](../../load-balancer/upgrade-basic-standard.md). -* To upgrade a basic public IP to standard, see [Upgrade a public IP address](../ip-services/public-ip-upgrade-portal.md) + * To upgrade a basic public IP to standard, see [Upgrade a public IP address](../ip-services/public-ip-upgrade-portal.md). -* Virtual Network NAT is the recommended method for outbound connectivity. A NAT gateway doesn't have the same limitations of SNAT port exhaustion as does [default outbound access](../ip-services/default-outbound-access.md) and [outbound rules of a load balancer](../../load-balancer/outbound-rules.md) +* A NAT gateway can’t be associated to an IPv6 public IP address or IPv6 public IP prefix. It can be associated to a dual stack subnet, but will only be able to direct outbound traffic with an IPv4 address. - * To migrate outbound access to a NAT gateway from default outbound access or load balancer outbound rules, see [Migrate outbound access to Azure Virtual Network NAT](./tutorial-migrate-outbound-nat.md) +* Virtual Network NAT is the recommended method for outbound connectivity. A NAT gateway doesn't have the same limitations of SNAT port exhaustion as does [default outbound access](../ip-services/default-outbound-access.md) and [outbound rules of a load balancer](../../load-balancer/outbound-rules.md). -* A NAT gateway can’t be associated to an IPv6 public IP address or IPv6 public IP prefix. It can be associated to a dual stack subnet + * To migrate outbound access to a NAT gateway from default outbound access or load balancer outbound rules, see [Migrate outbound access to Azure Virtual Network NAT](./tutorial-migrate-outbound-nat.md). -* A NAT gateway allows flows to be created from the virtual network to the services outside your virtual network. Return traffic from the Internet is only allowed in response to an active flow. Services outside your virtual network can’t initiate an inbound connection through NAT gateway +* A NAT gateway allows flows to be created from the virtual network to the services outside your virtual network. Return traffic from the internet is only allowed in response to an active flow. 
Services outside your virtual network can’t initiate an inbound connection through NAT gateway. * A NAT gateway can’t span multiple virtual networks @@ -76,9 +80,9 @@ Virtual Network NAT is scaled out from creation. There isn't a ramp up or scale- * A NAT gateway can’t be deployed in a [gateway subnet](../../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsub) -* Virtual machine instances or other compute resources, send TCP reset packets or attempt to communicate on a TCP connection that doesn't exist. An example is connections that have reached idle timeout. The next packet received will return a TCP reset to the private IP address to signal and force connection closure. The public side of a NAT gateway doesn't generate TCP reset packets or any other traffic. Only traffic produced by the customer's virtual network is emitted +* Virtual machine instances or other compute resources, send TCP reset packets or attempt to communicate on a TCP connection that doesn't exist. An example is connections that have reached idle timeout. The next packet received will return a TCP reset to the private IP address to signal and force connection closure. The public side of a NAT gateway doesn't generate TCP reset packets or any other traffic. Only traffic produced by the customer's virtual network is emitted. -* A default TCP idle timeout of 4 minutes is used and can be increased to up to 120 minutes. Any activity on a flow can also reset the idle timer, including TCP keepalives +* A default TCP idle timeout of 4 minutes is used and can be increased to up to 120 minutes. Any activity on a flow can also reset the idle timer, including TCP keepalives. ## Pricing and SLA @@ -88,10 +92,10 @@ For information on the SLA, see [SLA for Virtual Network NAT](https://azure.micr ## Next steps -* To create and validate a NAT gateway, see [Quickstart: Create a NAT gateway using the Azure portal](quickstart-create-nat-gateway-portal.md) +* To create and validate a NAT gateway, see [Quickstart: Create a NAT gateway using the Azure portal](quickstart-create-nat-gateway-portal.md). -* To view a video on more information about Azure Virtual Network NAT, see [How to get better outbound connectivity using an Azure NAT gateway](https://www.youtube.com/watch?v=2Ng_uM0ZaB4) +* To view a video on more information about Azure Virtual Network NAT, see [How to get better outbound connectivity using an Azure NAT gateway](https://www.youtube.com/watch?v=2Ng_uM0ZaB4). -* Learn about the [NAT gateway resource](./nat-gateway-resource.md) +* Learn about the [NAT gateway resource](./nat-gateway-resource.md). * [Learn module: Introduction to Azure Virtual Network NAT](/learn/modules/intro-to-azure-virtual-network-nat). 
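The bullets above cover the moving parts of a NAT gateway deployment. As a rough sketch only (resource names, region, and address prefix are placeholders, not values from this article), associating a NAT gateway and a standard-SKU public IP with a subnet might look like this in Azure PowerShell:

```powershell
# Sketch: create a NAT gateway with a standard-SKU public IP and associate it
# with an existing subnet. All names, the region, and the prefix are placeholders.
$publicIp = New-AzPublicIpAddress -Name "myNatPublicIP" -ResourceGroupName "myResourceGroup" `
    -Location "eastus" -Sku Standard -AllocationMethod Static

$natGateway = New-AzNatGateway -Name "myNatGateway" -ResourceGroupName "myResourceGroup" `
    -Location "eastus" -Sku Standard -PublicIpAddress $publicIp -IdleTimeoutInMinutes 10

# Point the subnet's outbound traffic at the NAT gateway.
$vnet = Get-AzVirtualNetwork -Name "myVnet" -ResourceGroupName "myResourceGroup"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "mySubnet" `
    -AddressPrefix "10.0.0.0/24" -NatGateway $natGateway | Set-AzVirtualNetwork
```

Once the subnet is updated, outbound flows from that subnet use the NAT gateway's public IP, consistent with the precedence and idle-timeout behavior described above; see the linked quickstart for the supported end-to-end steps.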
diff --git a/docfx.json b/docfx.json index b89ac25805fc3..8878eb1e88216 100644 --- a/docfx.json +++ b/docfx.json @@ -225,7 +225,7 @@ "articles/automation/**/*.md": "sgsneha", "articles/update-center/**/*.md": "sgsneha", "articles/azure-arc/**/*.md": "JnHs", - "articles/azure-arc/servers/**/*.md": "JnHs", + "articles/azure-arc/servers/**/*.md": "johnmarco", "articles/azure-functions/**/*.md": "ggailey777", "articles/azure-fluid-relay/**/*.md": "hickeys", "articles/azure-government/**/*.md": "stevevi", @@ -589,7 +589,7 @@ "articles/automanage/*md": "daberry", "articles/automation/**/*.md": "sudhirsneha", "articles/azure-arc/**/*.md": "jenhayes", - "articles/azure-arc/servers/**/*.md": "jenhayes", + "articles/azure-arc/servers/**/*.md": "johnmarc", "articles/azure-functions/**/*.md": "glenga", "articles/azure-fluid-relay/**/*.md": "hickeys", "articles/azure-government/**/*.md": "stevevi", diff --git a/includes/container-apps-create-cli-steps.md b/includes/container-apps-create-cli-steps.md index a628cbf17b995..72191d9e43f58 100644 --- a/includes/container-apps-create-cli-steps.md +++ b/includes/container-apps-create-cli-steps.md @@ -59,6 +59,22 @@ az provider register --namespace Microsoft.App --- +Register the `Microsoft.OperationalInsights` provider for the [Azure Monitor Log Analytics Workspace](../articles/container-apps/observability.md?tabs=bash#azure-monitor-log-analytics) if you have not used it before. + +# [Bash](#tab/bash) + +```azurecli +az provider register --namespace Microsoft.OperationalInsights +``` + +# [PowerShell](#tab/powershell) + +```azurecli +az provider register --namespace Microsoft.OperationalInsights +``` + +--- + Next, set the following environment variables: # [Bash](#tab/bash) diff --git a/includes/storage-files-aad-auth-include.md b/includes/storage-files-aad-auth-include.md index f013b6df98d26..73788c6a058ff 100644 --- a/includes/storage-files-aad-auth-include.md +++ b/includes/storage-files-aad-auth-include.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage - author: tamram + author: khdownie ms.service: storage ms.topic: include ms.date: 07/30/2019 - ms.author: tamram + ms.author: kendownie ms.custom: include file --- diff --git a/includes/storage-files-aad-permissions-and-mounting.md b/includes/storage-files-aad-permissions-and-mounting.md index e16ac38ab7bf3..cadb8a17158ca 100644 --- a/includes/storage-files-aad-permissions-and-mounting.md +++ b/includes/storage-files-aad-permissions-and-mounting.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage - author: roygara + author: khdownie ms.service: storage ms.topic: include ms.date: 05/06/2022 - ms.author: rogara + ms.author: kendownie ms.custom: include file, devx-track-azurecli, devx-track-azurepowershell --- diff --git a/includes/storage-files-condition-headers.md b/includes/storage-files-condition-headers.md index cd2ede0bdb4bb..e3a0559299acc 100644 --- a/includes/storage-files-condition-headers.md +++ b/includes/storage-files-condition-headers.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage - author: roygara + author: khdownie ms.service: storage ms.topic: include ms.date: 09/04/2019 - ms.author: rogarana + ms.author: kendownie ms.custom: include file --- diff --git a/includes/storage-files-encryption-at-rest.md b/includes/storage-files-encryption-at-rest.md index 6b9c1845eb22d..4ea4c73b69899 100644 --- a/includes/storage-files-encryption-at-rest.md +++ b/includes/storage-files-encryption-at-rest.md @@ 
-2,11 +2,11 @@ title: include file description: include file services: storage - author: roygara + author: khdownie ms.service: storage ms.topic: include ms.date: 12/27/2019 - ms.author: rogarana + ms.author: kendownie ms.custom: include file --- All data stored in Azure Files is encrypted at rest using Azure storage service encryption (SSE). Storage service encryption works similarly to BitLocker on Windows: data is encrypted beneath the file system level. Because data is encrypted beneath the Azure file share's file system, as it's encoded to disk, you don't have to have access to the underlying key on the client to read or write to the Azure file share. Encryption at rest applies to both the SMB and NFS protocols. diff --git a/includes/storage-files-file-share-management-concepts.md b/includes/storage-files-file-share-management-concepts.md index 6b1a57d5b3630..4bb6ddc17ac6c 100644 --- a/includes/storage-files-file-share-management-concepts.md +++ b/includes/storage-files-file-share-management-concepts.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage - author: roygara + author: khdownie ms.service: storage ms.topic: include ms.date: 12/26/2019 - ms.author: rogarana + ms.author: kendownie ms.custom: include file --- Azure file shares are deployed into *storage accounts*, which are top-level objects that represent a shared pool of storage. This pool of storage can be used to deploy multiple file shares, as well as other storage resources such as blob containers, queues, or tables. All storage resources that are deployed into a storage account share the limits that apply to that storage account. For current storage account limits, see [Azure Files scalability and performance targets](../articles/storage/files/storage-files-scale-targets.md). 
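As a small illustration of the storage account and file share relationship described in the include above, the following Azure PowerShell sketch creates an account and a share inside it. The account name, share name, region, and SKU are placeholders rather than values from these includes.

```powershell
# Sketch: create a storage account and an Azure file share inside it.
# Names, region, SKU, and quota are placeholders.
New-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageacct001" `
    -Location "eastus" -SkuName "Standard_LRS" -Kind StorageV2

# Shares created in the account draw from that account's shared limits.
New-AzRmStorageShare -ResourceGroupName "myResourceGroup" -StorageAccountName "mystorageacct001" `
    -Name "myshare" -QuotaGiB 1024
```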
diff --git a/includes/storage-files-migration-configure-sync.md b/includes/storage-files-migration-configure-sync.md index ff27b1e77f4df..5e6ffef7d2092 100644 --- a/includes/storage-files-migration-configure-sync.md +++ b/includes/storage-files-migration-configure-sync.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage -author: fauhse +author: khdownie ms.service: storage ms.topic: include ms.date: 2/20/2020 -ms.author: fauhse +ms.author: kendownie ms.custom: include file --- diff --git a/includes/storage-files-migration-deploy-azure-file-sync-agent.md b/includes/storage-files-migration-deploy-azure-file-sync-agent.md index 0ed7d6bbfa335..f735b08ab243e 100644 --- a/includes/storage-files-migration-deploy-azure-file-sync-agent.md +++ b/includes/storage-files-migration-deploy-azure-file-sync-agent.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage -author: fauhse +author: khdownie ms.service: storage ms.topic: include ms.date: 2/20/2020 -ms.author: fauhse +ms.author: kendownie ms.custom: include file --- diff --git a/includes/storage-files-migration-deploy-azure-file-sync-storage-sync-service.md b/includes/storage-files-migration-deploy-azure-file-sync-storage-sync-service.md index 9758efb9fb7b5..095eb776b45c9 100644 --- a/includes/storage-files-migration-deploy-azure-file-sync-storage-sync-service.md +++ b/includes/storage-files-migration-deploy-azure-file-sync-storage-sync-service.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage -author: fauhse +author: khdownie ms.service: storage ms.topic: include ms.date: 2/20/2020 -ms.author: fauhse +ms.author: kendownie ms.custom: include file --- diff --git a/includes/storage-files-migration-generate-key.md b/includes/storage-files-migration-generate-key.md index 8bcede1dfff47..eb80caf4c8f09 100644 --- a/includes/storage-files-migration-generate-key.md +++ b/includes/storage-files-migration-generate-key.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage -author: fauhse +author: khdownie ms.service: storage ms.topic: include ms.date: 2/20/2020 -ms.author: fauhse +ms.author: kendownie ms.custom: include file --- Service data encryption keys are used to encrypt confidential customer data, such as storage account credentials, that are sent from your StorSimple Manager service to the StorSimple device. You will need to change these keys periodically if your IT organization has a key rotation policy on the storage devices. The key change process can be slightly different depending on whether there is a single device or multiple devices managed by the StorSimple Manager service. For more information, go to [StorSimple security and data protection](../articles/storsimple/storsimple-8000-security.md). 
diff --git a/includes/storage-files-migration-namespace-mapping.md b/includes/storage-files-migration-namespace-mapping.md index 792b23a0d4898..fdccde98e9e6e 100644 --- a/includes/storage-files-migration-namespace-mapping.md +++ b/includes/storage-files-migration-namespace-mapping.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage -author: fauhse +author: khdownie ms.service: storage ms.topic: include ms.date: 2/20/2020 -ms.author: fauhse +ms.author: kendownie ms.custom: include file --- diff --git a/includes/storage-files-migration-provision-azure-file-share.md b/includes/storage-files-migration-provision-azure-file-share.md index 4480cbc4798e2..35280f5b794ff 100644 --- a/includes/storage-files-migration-provision-azure-file-share.md +++ b/includes/storage-files-migration-provision-azure-file-share.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage -author: fauhse +author: khdownie ms.service: storage ms.topic: include ms.date: 2/20/2020 -ms.author: fauhse +ms.author: kendownie ms.custom: include file --- diff --git a/includes/storage-files-migration-robocopy-optimize.md b/includes/storage-files-migration-robocopy-optimize.md index 1b2ddd1219866..d205bfc91834f 100644 --- a/includes/storage-files-migration-robocopy-optimize.md +++ b/includes/storage-files-migration-robocopy-optimize.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage -author: fauhse +author: khdownie ms.service: storage ms.topic: include ms.date: 4/05/2021 -ms.author: fauhse +ms.author: kendownie ms.custom: include file --- diff --git a/includes/storage-files-migration-robocopy.md b/includes/storage-files-migration-robocopy.md index 74a36bc68f2d7..2919277268e59 100644 --- a/includes/storage-files-migration-robocopy.md +++ b/includes/storage-files-migration-robocopy.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage -author: fauhse +author: khdownie ms.service: storage ms.topic: include ms.date: 4/05/2021 -ms.author: fauhse +ms.author: kendownie ms.custom: include file --- diff --git a/includes/storage-files-networking-endpoints-private-cli.md b/includes/storage-files-networking-endpoints-private-cli.md index 1523591855c55..af6941ec2ad34 100644 --- a/includes/storage-files-networking-endpoints-private-cli.md +++ b/includes/storage-files-networking-endpoints-private-cli.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage - author: roygara + author: khdownie ms.service: storage ms.topic: include ms.date: 5/11/2020 - ms.author: rogarana + ms.author: kendownie ms.custom: include file, devx-track-azurecli --- To create a private endpoint for your storage account, you first need to get a reference to your storage account and the virtual network subnet to which you want to add the private endpoint. 
Replace ``, ``, ``, ``, and `` below: diff --git a/includes/storage-files-networking-endpoints-private-portal.md b/includes/storage-files-networking-endpoints-private-portal.md index 8fc636ab5dba7..810842a375a22 100644 --- a/includes/storage-files-networking-endpoints-private-portal.md +++ b/includes/storage-files-networking-endpoints-private-portal.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage - author: roygara + author: khdownie ms.service: storage ms.topic: include ms.date: 04/15/2021 - ms.author: rogarana + ms.author: kendownie ms.custom: include file --- Navigate to the storage account for which you would like to create a private endpoint. In the table of contents for the storage account, select **Networking**, **Private endpoint connections**, and then **+ Private endpoint** to create a new private endpoint. diff --git a/includes/storage-files-networking-endpoints-private-powershell.md b/includes/storage-files-networking-endpoints-private-powershell.md index cd49e4cb09228..027a5acae5997 100644 --- a/includes/storage-files-networking-endpoints-private-powershell.md +++ b/includes/storage-files-networking-endpoints-private-powershell.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage - author: roygara + author: khdownie ms.service: storage ms.topic: include ms.date: 5/11/2020 - ms.author: rogarana + ms.author: kendownie ms.custom: include file, devx-track-azurepowershell --- To create a private endpoint for your storage account, you first need to get a reference to your storage account and the virtual network subnet to which you want to add the private endpoint. Replace ``, ``, ``, ``, and `` below: diff --git a/includes/storage-files-networking-endpoints-public-disable-cli.md b/includes/storage-files-networking-endpoints-public-disable-cli.md index 5c9f0df9ecc41..be540b3d172c6 100644 --- a/includes/storage-files-networking-endpoints-public-disable-cli.md +++ b/includes/storage-files-networking-endpoints-public-disable-cli.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage - author: roygara + author: khdownie ms.service: storage ms.topic: include ms.date: 6/2/2020 - ms.author: rogarana + ms.author: kendownie ms.custom: include file --- The following CLI command will deny all traffic to the storage account's public endpoint. Note that this command has the `-bypass` parameter set to `AzureServices`. This will allow trusted first party services such as Azure File Sync to access the storage account via the public endpoint. diff --git a/includes/storage-files-networking-endpoints-public-disable-portal.md b/includes/storage-files-networking-endpoints-public-disable-portal.md index 4d6f196a3d5fe..a2884a75be499 100644 --- a/includes/storage-files-networking-endpoints-public-disable-portal.md +++ b/includes/storage-files-networking-endpoints-public-disable-portal.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage - author: roygara + author: khdownie ms.service: storage ms.topic: include ms.date: 01/25/2021 - ms.author: rogarana + ms.author: kendownie ms.custom: include file --- Navigate to the storage account for which you would like to restrict all access to the public endpoint. In the table of contents for the storage account, select **Networking**. 
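These includes describe the private endpoint setup and public endpoint restrictions in prose, with the authoritative commands living in the include files themselves. As an illustrative sketch only (all resource names are placeholders), creating a private endpoint for the Azure Files sub-resource of a storage account with Azure PowerShell might look like this:

```powershell
# Sketch: create a private endpoint for the "file" sub-resource of a storage account.
# Resource names, region, and subnet are placeholders.
$storageAccount = Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageacct001"
$vnet = Get-AzVirtualNetwork -ResourceGroupName "myResourceGroup" -Name "myVnet"
$subnet = $vnet.Subnets | Where-Object { $_.Name -eq "mySubnet" }

$connection = New-AzPrivateLinkServiceConnection -Name "myStorageConnection" `
    -PrivateLinkServiceId $storageAccount.Id -GroupId "file"

New-AzPrivateEndpoint -ResourceGroupName "myResourceGroup" -Name "myPrivateEndpoint" `
    -Location "eastus" -Subnet $subnet -PrivateLinkServiceConnection $connection
```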
diff --git a/includes/storage-files-networking-endpoints-public-disable-powershell.md b/includes/storage-files-networking-endpoints-public-disable-powershell.md index 69d50774659ad..b48e4e6369c7e 100644 --- a/includes/storage-files-networking-endpoints-public-disable-powershell.md +++ b/includes/storage-files-networking-endpoints-public-disable-powershell.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage - author: roygara + author: khdownie ms.service: storage ms.topic: include ms.date: 6/2/2020 - ms.author: rogarana + ms.author: kendownie ms.custom: include file, devx-track-azurepowershell --- The following PowerShell command will deny all traffic to the storage account's public endpoint. Note that this command has the `-Bypass` parameter set to `AzureServices`. This will allow trusted first party services such as Azure File Sync to access the storage account via the public endpoint. diff --git a/includes/storage-files-networking-endpoints-public-restrict-cli.md b/includes/storage-files-networking-endpoints-public-restrict-cli.md index 9899676471018..7bcdc7cc10c1f 100644 --- a/includes/storage-files-networking-endpoints-public-restrict-cli.md +++ b/includes/storage-files-networking-endpoints-public-restrict-cli.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage - author: roygara + author: khdownie ms.service: storage ms.topic: include ms.date: 6/2/2020 - ms.author: rogarana + ms.author: kendownie ms.custom: include file, devx-track-azurecli --- diff --git a/includes/storage-files-networking-endpoints-public-restrict-portal.md b/includes/storage-files-networking-endpoints-public-restrict-portal.md index 73af67c8c8816..8ed9573a8cea3 100644 --- a/includes/storage-files-networking-endpoints-public-restrict-portal.md +++ b/includes/storage-files-networking-endpoints-public-restrict-portal.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage - author: roygara + author: khdownie ms.service: storage ms.topic: include ms.date: 01/25/2021 - ms.author: rogarana + ms.author: kendownie ms.custom: include file --- diff --git a/includes/storage-files-networking-endpoints-public-restrict-powershell.md b/includes/storage-files-networking-endpoints-public-restrict-powershell.md index 988db1ffb2596..83626e855dcec 100644 --- a/includes/storage-files-networking-endpoints-public-restrict-powershell.md +++ b/includes/storage-files-networking-endpoints-public-restrict-powershell.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage - author: roygara + author: khdownie ms.service: storage ms.topic: include ms.date: 6/2/2020 - ms.author: rogarana + ms.author: kendownie ms.custom: include file, devx-track-azurepowershell --- diff --git a/includes/storage-files-redundancy-overview.md b/includes/storage-files-redundancy-overview.md index 08ee72f5980f9..535eecf2a166a 100644 --- a/includes/storage-files-redundancy-overview.md +++ b/includes/storage-files-redundancy-overview.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage - author: roygara + author: khdownie ms.service: storage ms.topic: include ms.date: 08/18/2021 - ms.author: rogarana + ms.author: kendownie ms.custom: include file --- To protect the data in your Azure file shares against data loss or corruption, all Azure file shares store multiple copies of each file as they are written. Depending on the requirements of your workload, you can select additional degrees of redundancy. 
Azure Files currently supports the following data redundancy options: diff --git a/includes/storage-files-sync-create-server-endpoint.md b/includes/storage-files-sync-create-server-endpoint.md index 33d0d5eb20af6..289efcbb8dd15 100644 --- a/includes/storage-files-sync-create-server-endpoint.md +++ b/includes/storage-files-sync-create-server-endpoint.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage -author: fauhse +author: khdownie ms.service: storage ms.topic: include ms.date: 6/01/2021 -ms.author: fauhse +ms.author: kendownie ms.custom: include file, devx-track-azurecli ms.devlang: azurecli --- diff --git a/includes/storage-files-tiers-enable-large-shares.md b/includes/storage-files-tiers-enable-large-shares.md index 8f2f0b4d76577..ec0dd0ae3e650 100644 --- a/includes/storage-files-tiers-enable-large-shares.md +++ b/includes/storage-files-tiers-enable-large-shares.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage - author: roygara + author: khdownie ms.service: storage ms.topic: include ms.date: 02/03/2021 - ms.author: fauhse + ms.author: kendownie ms.custom: include file --- By default, standard file shares can span only up to 5 TiB, but you can increase the share limit to 100 TiB. To increase your share limit, enable **Large file share** on your storage account. Premium storage accounts (*FileStorage* storage accounts) don't have the large file share feature flag as all premium file shares are already enabled for provisioning up to the full 100-TiB capacity. diff --git a/includes/storage-files-tiers-large-file-share-availability.md b/includes/storage-files-tiers-large-file-share-availability.md index eaa69c1df06cf..44e451fe7c6c7 100644 --- a/includes/storage-files-tiers-large-file-share-availability.md +++ b/includes/storage-files-tiers-large-file-share-availability.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage - author: roygara + author: khdownie ms.service: storage ms.topic: include ms.date: 12/27/2019 - ms.author: rogarana + ms.author: kendownie ms.custom: include file --- Standard file shares with 100 TiB capacity have certain limitations. diff --git a/includes/storage-files-tiers-overview.md b/includes/storage-files-tiers-overview.md index b8cc666b5e893..8ca4e7413e376 100644 --- a/includes/storage-files-tiers-overview.md +++ b/includes/storage-files-tiers-overview.md @@ -2,11 +2,11 @@ title: include file description: include file services: storage - author: roygara + author: khdownie ms.service: storage ms.topic: include ms.date: 08/28/2020 - ms.author: rogarana + ms.author: kendownie ms.custom: include file --- Azure Files offers four different tiers of storage, premium, transaction optimized, hot, and cool to allow you to tailor your shares to the performance and price requirements of your scenario: