diff --git a/.openpublishing.redirection.json b/.openpublishing.redirection.json
index 0248001462b10..bb7940634149b 100644
--- a/.openpublishing.redirection.json
+++ b/.openpublishing.redirection.json
@@ -22019,6 +22019,11 @@
"redirect_url": "/azure/scheduler/migrate-from-scheduler-to-logic-apps",
"redirect_document_id": ""
},
+ {
+ "source_path_from_root": "/articles/search/knowledge-store-view-storage-explorer.md",
+ "redirect_url": "/azure/search/knowledge-store-create-portal#view-kstore",
+ "redirect_document_id": false
+ },
{
"source_path_from_root": "/articles/search/cognitive-search-resources-documentation.md",
"redirect_url": "/azure/search/cognitive-search-concept-intro",
diff --git a/articles/active-directory-b2c/media/partner-gallery/haventec-logo.png b/articles/active-directory-b2c/media/partner-gallery/haventec-logo.png
index 898fc82c2cee2..6440a8000287a 100644
Binary files a/articles/active-directory-b2c/media/partner-gallery/haventec-logo.png and b/articles/active-directory-b2c/media/partner-gallery/haventec-logo.png differ
diff --git a/articles/active-directory-b2c/partner-asignio.md b/articles/active-directory-b2c/partner-asignio.md
index fc58fe226996a..31f0beaf64231 100644
--- a/articles/active-directory-b2c/partner-asignio.md
+++ b/articles/active-directory-b2c/partner-asignio.md
@@ -114,7 +114,7 @@ Follow the steps mentioned in [this tutorial](tutorial-register-applications.md?
| Property | Value |
|:--------|:-------------|
|Name | Login with Asignio *(or a name of your choice)*
- |Metadata URL | https://authorization.asignio.com/.well-known/openid-configuration|
+ |Metadata URL | `https://authorization.asignio.com/.well-known/openid-configuration`|
| Client ID | enter the client ID that you previously generated in [step 1](#step-1-configure-an-application-with-asignio)|
|Client Secret | enter the Client secret that you previously generated in [step 1](#step-1-configure-an-application-with-asignio)|
| Scope | openid email profile |
diff --git a/articles/active-directory/authentication/tutorial-enable-cloud-sync-sspr-writeback.md b/articles/active-directory/authentication/tutorial-enable-cloud-sync-sspr-writeback.md
index 589246e7cabd0..e6ee0f1b24256 100644
--- a/articles/active-directory/authentication/tutorial-enable-cloud-sync-sspr-writeback.md
+++ b/articles/active-directory/authentication/tutorial-enable-cloud-sync-sspr-writeback.md
@@ -5,7 +5,7 @@ services: active-directory
ms.service: active-directory
ms.subservice: authentication
ms.topic: tutorial
-ms.date: 10/25/2021
+ms.date: 05/31/2022
ms.author: justinha
author: justinha
ms.reviewer: tilarso
@@ -58,7 +58,7 @@ With password writeback enabled in Azure AD Connect cloud sync, now verify, and
To verify and enable password writeback in SSPR, complete the following steps:
-1. Sign into the Azure portal using a global administrator account.
+1. Sign in to the Azure portal using a [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) account.
1. Navigate to Azure Active Directory, select **Password reset**, then choose **On-premises integration**.
1. Verify the Azure AD Connect cloud sync agent set up is complete.
1. Set **Write back passwords to your on-premises directory?** to **Yes**.
@@ -72,12 +72,12 @@ To verify and enable password writeback in SSPR, complete the following steps:
If you no longer want to use the SSPR password writeback functionality you have configured as part of this document, complete the following steps:
-1. Sign into the Azure portal using a global administrator account.
+1. Sign in to the Azure portal using a [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) account.
1. Search for and select Azure Active Directory, select **Password reset**, then choose **On-premises integration**.
1. Set **Write back passwords to your on-premises directory?** to **No**.
1. Set **Allow users to unlock accounts without resetting their password?** to **No**.
-From your Azure AD Connect cloud sync server, run `Set-AADCloudSyncPasswordWritebackConfiguration` using global administrator credentials to disable password writeback with Azure AD Connect cloud sync.
+From your Azure AD Connect cloud sync server, run `Set-AADCloudSyncPasswordWritebackConfiguration` using Hybrid Identity Administrator credentials to disable password writeback with Azure AD Connect cloud sync.
```powershell
Import-Module ‘C:\\Program Files\\Microsoft Azure AD Connect Provisioning Agent\\Microsoft.CloudSync.Powershell.dll’
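+# Disable password writeback. A minimal sketch: the -Enable and -Credential
+# parameters below are assumed from the cmdlet's typical usage; verify them
+# in the cmdlet's documentation before running.
+Set-AADCloudSyncPasswordWritebackConfiguration -Enable $false -Credential $(Get-Credential)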
diff --git a/articles/active-directory/authentication/tutorial-enable-sspr-writeback.md b/articles/active-directory/authentication/tutorial-enable-sspr-writeback.md
index fe68a51decb43..3df7fb2db0527 100644
--- a/articles/active-directory/authentication/tutorial-enable-sspr-writeback.md
+++ b/articles/active-directory/authentication/tutorial-enable-sspr-writeback.md
@@ -6,7 +6,7 @@ services: active-directory
ms.service: active-directory
ms.subservice: authentication
ms.topic: tutorial
-ms.date: 11/11/2021
+ms.date: 05/31/2022
ms.author: justinha
author: justinha
@@ -42,7 +42,7 @@ To complete this tutorial, you need the following resources and privileges:
* A working Azure AD tenant with at least an Azure AD Premium P1 or trial license enabled.
* If needed, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* For more information, see [Licensing requirements for Azure AD SSPR](concept-sspr-licensing.md).
-* An account with *global administrator* privileges.
+* An account with the [Hybrid Identity Administrator](../roles/permissions-reference.md#hybrid-identity-administrator) role.
* Azure AD configured for self-service password reset.
* If needed, [complete the previous tutorial to enable Azure AD SSPR](tutorial-enable-sspr.md).
* An existing on-premises AD DS environment configured with a current version of Azure AD Connect.
@@ -118,7 +118,7 @@ With password writeback enabled in Azure AD Connect, now configure Azure AD SSPR
To enable password writeback in SSPR, complete the following steps:
-1. Sign in to the [Azure portal](https://portal.azure.com) using a global administrator account.
+1. Sign in to the [Azure portal](https://portal.azure.com) using a Hybrid Identity Administrator account.
1. Search for and select **Azure Active Directory**, select **Password reset**, then choose **On-premises integration**.
1. Set the option for **Write back passwords to your on-premises directory?** to *Yes*.
1. Set the option for **Allow users to unlock accounts without resetting their password?** to *Yes*.
diff --git a/articles/active-directory/external-identities/whats-new-docs.md b/articles/active-directory/external-identities/whats-new-docs.md
index f4732de075cbd..4dd155126e238 100644
--- a/articles/active-directory/external-identities/whats-new-docs.md
+++ b/articles/active-directory/external-identities/whats-new-docs.md
@@ -1,7 +1,7 @@
---
title: "What's new in Azure Active Directory External Identities"
description: "New and updated documentation for the Azure Active Directory External Identities."
-ms.date: 05/02/2022
+ms.date: 06/01/2022
ms.service: active-directory
ms.subservice: B2B
ms.topic: reference
@@ -15,6 +15,32 @@ manager: CelesteDG
Welcome to what's new in Azure Active Directory External Identities documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the External Identities service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+
+## May 2022
+
+### New articles
+
+- [Configure Microsoft cloud settings for B2B collaboration (Preview)](cross-cloud-settings.md)
+
+### Updated articles
+
+- [Configure Microsoft cloud settings for B2B collaboration (Preview)](cross-cloud-settings.md)
+- [Overview: Cross-tenant access with Azure AD External Identities (Preview)](cross-tenant-access-overview.md)
+- [Example: Configure SAML/WS-Fed based identity provider federation with AD FS](direct-federation-adfs.md)
+- [Federation with SAML/WS-Fed identity providers for guest users](direct-federation.md)
+- [External Identities documentation](index.yml)
+- [Quickstart: Add a guest user and send an invitation](b2b-quickstart-add-guest-users-portal.md)
+- [B2B collaboration overview](what-is-b2b.md)
+- [Leave an organization as a B2B collaboration user](leave-the-organization.md)
+- [Configure external collaboration settings](external-collaboration-settings-configure.md)
+- [B2B direct connect overview (Preview)](b2b-direct-connect-overview.md)
+- [Azure Active Directory External Identities: What's new](whats-new-docs.md)
+- [Configure cross-tenant access settings for B2B collaboration (Preview)](cross-tenant-access-settings-b2b-collaboration.md)
+- [Configure cross-tenant access settings for B2B direct connect (Preview)](cross-tenant-access-settings-b2b-direct-connect.md)
+- [Azure AD B2B in government and national clouds](b2b-government-national-clouds.md)
+- [External Identities in Azure Active Directory](external-identities-overview.md)
+- [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md)
+
## April 2022
### Updated articles
@@ -58,22 +84,3 @@ Welcome to what's new in Azure Active Directory External Identities documentatio
- [Leave an organization as a B2B collaboration user](leave-the-organization.md)
- [Configure external collaboration settings](external-collaboration-settings-configure.md)
- [Reset redemption status for a guest user (Preview)](reset-redemption-status.md)
-
-## February 2022
-
-### Updated articles
-
-- [Add Google as an identity provider for B2B guest users](google-federation.md)
-- [External Identities in Azure Active Directory](external-identities-overview.md)
-- [Overview: Cross-tenant access with Azure AD External Identities (Preview)](cross-tenant-access-overview.md)
-- [B2B collaboration overview](what-is-b2b.md)
-- [Federation with SAML/WS-Fed identity providers for guest users (preview)](direct-federation.md)
-- [Quickstart: Add a guest user with PowerShell](b2b-quickstart-invite-powershell.md)
-- [Tutorial: Bulk invite Azure AD B2B collaboration users](tutorial-bulk-invite.md)
-- [Azure Active Directory B2B best practices](b2b-fundamentals.md)
-- [Azure Active Directory B2B collaboration FAQs](faq.yml)
-- [Email one-time passcode authentication](one-time-passcode.md)
-- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)
-- [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md)
-- [Properties of an Azure Active Directory B2B collaboration user](user-properties.md)
-- [Authentication and Conditional Access for External Identities](authentication-conditional-access.md)
diff --git a/articles/active-directory/fundamentals/whats-new.md b/articles/active-directory/fundamentals/whats-new.md
index 1db1d9edcea87..3c2e87fa9ea0f 100644
--- a/articles/active-directory/fundamentals/whats-new.md
+++ b/articles/active-directory/fundamentals/whats-new.md
@@ -782,201 +782,6 @@ We’re no longer publishing sign-in logs with the following error codes because
---
-## November 2021
-### Tenant enablement of combined security information registration for Azure Active Directory
-
-**Type:** Plan for change
-**Service category:** MFA
-**Product capability:** Identity Security & Protection
-
-We previously announced in April 2020, a new combined registration experience enabling users to register authentication methods for SSPR and multi-factor authentication at the same time was generally available for existing customer to opt in. Any Azure AD tenants created after August 2020 automatically have the default experience set to combined registration. Starting 2022, Microsoft will be enabling the MFA/SSPR combined registration experience for existing customers. [Learn more](../authentication/concept-registration-mfa-sspr-combined.md).
-
----
-
-### Windows users will see prompts more often when switching user accounts
-
-**Type:** Fixed
-**Service category:** Authentications (Logins)
-**Product capability:** User Authentication
-
-A problematic interaction between Windows and a local Active Directory Federation Services (ADFS) instance can result in users attempting to sign into another account, but be silently signed into their existing account instead, with no warning. For federated IdPs such as ADFS, that support the [prompt=login](/windows-server/identity/ad-fs/operations/ad-fs-prompt-login) pattern, Azure AD will now trigger a fresh login at ADFS when a user is directed to ADFS with a login hint. This ensures that the user is signed into the account they requested, rather than being silently signed into the account they're already signed in with.
-
-For more information, see the [change notice](../develop/reference-breaking-changes.md).
-
----
-
-### Public preview - Conditional Access Overview Dashboard
-
-**Type:** New feature
-**Service category:** Conditional Access
-**Product capability:** Monitoring & Reporting
-
-The new Conditional Access overview dashboard enables all tenants to see insights about the impact of their Conditional Access policies without requiring an Azure Monitor subscription. This built-in dashboard provides tutorials to deploy policies, a summary of the policies in your tenant, a snapshot of your policy coverage, and security recommendations. [Learn more](../conditional-access/overview.md).
-
----
-
-### Public preview - SSPR writeback is now available for disconnected forests using Azure AD Connect cloud sync
-
-**Type:** New feature
-**Service category:** Azure AD Connect Cloud Sync
-**Product capability:** Identity Lifecycle Management
-
-The Public Preview feature for Azure AD Connect Cloud Sync Password writeback provides customers the capability to writeback a user’s password changes in the cloud to the on-premises directory in real time using the lightweight Azure AD cloud provisioning agent.[Learn more](../authentication/tutorial-enable-cloud-sync-sspr-writeback.md).
-
----
-
-### Public preview - Conditional Access for workload identities
-
-**Type:** New feature
-**Service category:** Conditional Access for workload identities
-**Product capability:** Identity Security & Protection
-
-Previously, Conditional Access policies applied only to users when they access apps and services like SharePoint online or the Azure portal. This preview adds support for Conditional Access policies applied to service principals owned by the organization. You can block service principals from accessing resources from outside trusted-named locations or Azure Virtual Networks. [Learn more](../conditional-access/workload-identity.md).
-
----
-
-### Public preview - Extra attributes available as claims
-
-**Type:** Changed feature
-**Service category:** Enterprise Apps
-**Product capability:** SSO
-
-Several user attributes have been added to the list of attributes available to map to claims to bring attributes available in claims more in line with what is available on the user object in Microsoft Graph. New attributes include mobilePhone and ProxyAddresses. [Learn more](../develop/reference-claims-mapping-policy-type.md#table-3-valid-id-values-per-source).
-
----
-
-### Public preview - "Session Lifetime Policies Applied" property in the sign-in logs
-
-**Type:** New feature
-**Service category:** Authentications (Logins)
-**Product capability:** Identity Security & Protection
-
-We have recently added other property to the sign-in logs called "Session Lifetime Policies Applied". This property will list all the session lifetime policies that applied to the sign-in for example, Sign-in frequency, Remember multi-factor authentication and Configurable token lifetime. [Learn more](../reports-monitoring/concept-sign-ins.md#authentication-details).
-
----
-
-### Public preview - Enriched reviews on access packages in entitlement management
-
-**Type:** New feature
-**Service category:** User Access Management
-**Product capability:** Entitlement Management
-
-Entitlement Management’s enriched review experience allows even more flexibility on access packages reviews. Admins can now choose what happens to access if the reviewers don't respond, provide helper information to reviewers, or decide whether a justification is necessary. [Learn more](../governance/entitlement-management-access-reviews-create.md).
-
----
-
-### General availability - randomString and redact provisioning functions
-
-**Type:** New feature
-**Service category:** Provisioning
-**Product capability:** Outbound to SaaS Applications
-
-
-The Azure AD Provisioning service now supports two new functions, randomString() and Redact():
-- randomString - generate a string based on the length and characters you would like to include or exclude in your string.
-- redact - remove the value of the attribute from the audit and provisioning logs. [Learn more](../app-provisioning/functions-for-customizing-application-data.md#randomstring).
-
----
-
-### General availability - Now access review creators can select users and groups to receive notification on completion of reviews
-
-**Type:** New feature
-**Service category:** Access Reviews
-**Product capability:** Identity Governance
-
-Now access review creators can select users and groups to receive notification on completion of reviews. [Learn more](../governance/create-access-review.md).
-
----
-
-### General availability - Azure AD users can now view and report suspicious sign-ins and manage their accounts within Microsoft Authenticator
-
-**Type:** New feature
-**Service category:** Microsoft Authenticator App
-**Product capability:** Identity Security & Protection
-
-This feature allows Azure AD users to manage their work or school accounts within the Microsoft Authenticator app. The management features will allow users to view sign-in history and sign-in activity. Users can also report any suspicious or unfamiliar activity, change their Azure AD account passwords, and update the account's security information.
-
-For more information on how to use this feature visit [View and search your recent sign-in activity from the My Sign-ins page](../user-help/my-account-portal-sign-ins-page.md).
-
----
-
-### General availability - New Microsoft Authenticator app icon
-
-**Type:** New feature
-**Service category:** Microsoft Authenticator App
-**Product capability:** Identity Security & Protection
-
-New updates have been made to the Microsoft Authenticator app icon. To learn more about these updates, see the [Microsoft Authenticator app](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/microsoft-authenticator-app-easier-ways-to-add-or-manage/ba-p/2464408) blog post.
-
----
-
-### General availability - Azure AD single Sign-on and device-based Conditional Access support in Firefox on Windows 10/11
-
-**Type:** New feature
-**Service category:** Authentications (Logins)
-**Product capability:** SSO
-
-We now support native single sign-on (SSO) support and device-based Conditional Access to Firefox browser on Windows 10 and Windows Server 2019 starting in Firefox version 91. [Learn more](../conditional-access/require-managed-devices.md#prerequisites).
-
----
-
-### New provisioning connectors in the Azure AD Application Gallery - November 2021
-
-**Type:** New feature
-**Service category:** App Provisioning
-**Product capability:** 3rd Party Integration
-
-You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
-
-- [Appaegis Isolation Access Cloud](../saas-apps/appaegis-isolation-access-cloud-provisioning-tutorial.md)
-- [BenQ IAM](../saas-apps/benq-iam-provisioning-tutorial.md)
-- [BIC Cloud Design](../saas-apps/bic-cloud-design-provisioning-tutorial.md)
-- [Chaos](../saas-apps/chaos-provisioning-tutorial.md)
-- [directprint.io](../saas-apps/directprint-io-provisioning-tutorial.md)
-- [Documo](../saas-apps/documo-provisioning-tutorial.md)
-- [Facebook Work Accounts](../saas-apps/facebook-work-accounts-provisioning-tutorial.md)
-- [introDus Pre and Onboarding Platform](../saas-apps/introdus-pre-and-onboarding-platform-provisioning-tutorial.md)
-- [Kisi Physical Security](../saas-apps/kisi-physical-security-provisioning-tutorial.md)
-- [Klaxoon](../saas-apps/klaxoon-provisioning-tutorial.md)
-- [Klaxoon SAML](../saas-apps/klaxoon-saml-provisioning-tutorial.md)
-- [MX3 Diagnostics](../saas-apps/mx3-diagnostics-connector-provisioning-tutorial.md)
-- [Netpresenter](../saas-apps/netpresenter-provisioning-tutorial.md)
-- [Peripass](../saas-apps/peripass-provisioning-tutorial.md)
-- [Real Links](../saas-apps/real-links-provisioning-tutorial.md)
-- [Sentry](../saas-apps/sentry-provisioning-tutorial.md)
-- [Teamgo](../saas-apps/teamgo-provisioning-tutorial.md)
-- [Zero](../saas-apps/zero-provisioning-tutorial.md)
-
-For more information about how to better secure your organization by using automated user account provisioning, see [Automate user provisioning to SaaS applications with Azure AD](../manage-apps/user-provisioning.md).
-
----
-
-### New Federated Apps available in Azure AD Application gallery - November 2021
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
-
-In November 2021, we have added following 32 new applications in our App gallery with Federation support:
-
-[Tide - Connector](https://gallery.ctinsuretech-tide.com/), [Virtual Risk Manager - USA](../saas-apps/virtual-risk-manager-usa-tutorial.md), [Xorlia Policy Management](https://app.xoralia.com/), [WorkPatterns](https://app.workpatterns.com/oauth2/login?data_source_type=office_365_account_calendar_workspace_sync&utm_source=azure_sso), [GHAE](../saas-apps/ghae-tutorial.md), [Nodetrax Project](../saas-apps/nodetrax-project-tutorial.md), [Touchstone Benchmarking](https://app.touchstonebenchmarking.com/), [SURFsecureID - Azure MFA](../saas-apps/surfsecureid-azure-mfa-tutorial.md), [AiDEA](https://truebluecorp.com/en/prodotti/aidea-en/),[R and D Tax Credit Services: 10-wk Implementation](../saas-apps/r-and-d-tax-credit-services-tutorial.md), [Mapiq Essentials](../saas-apps/mapiq-essentials-tutorial.md), [Celtra Authentication Service](https://auth.celtra.com/login), [Compete HR](https://app.competewith.com/auth/login), [Snackmagic](../saas-apps/snackmagic-tutorial.md), [FileOrbis](../saas-apps/fileorbis-tutorial.md), [ClarivateWOS](../saas-apps/clarivatewos-tutorial.md), [RewardCo Engagement Cloud](https://cloud.live.rewardco.com/oauth/login), [ZoneVu](https://zonevu.ubiterra.com/onboarding/index), [V-Client](../saas-apps/v-client-tutorial.md), [Netpresenter Next](https://www.netpresenter.com/), [UserTesting](../saas-apps/usertesting-tutorial.md), [InfinityQS ProFicient on Demand](../saas-apps/infinityqs-proficient-on-demand-tutorial.md), [Feedonomics](https://auth.feedonomics.com/), [Customer Voice](https://cx.pobuca.com/), [Zanders Inside](https://home.zandersinside.com/), [Connecter](https://teamwork.connecterapp.com/azure_login), [Paychex Flex](https://login.flex.paychex.com/azfed-app/v1/azure/federation/admin), [InsightSquared](https://us2.insightsquared.com/#/boards/office365.com/settings/userconnection), [Kiteline Health](https://my.kitelinehealth.com/), [Fabrikam Enterprise Managed User (OIDC)](https://github.com/login), [PROXESS for Office365](https://www.proxess.de/office365), [Coverity Static Application Security Testing](../saas-apps/coverity-static-application-security-testing-tutorial.md)
-
-You can also find the documentation of all the applications [here](../saas-apps/tutorial-list.md).
-
-For listing your application in the Azure AD app gallery, read the details [here](../manage-apps/v2-howto-app-gallery-listing.md).
-
----
-
-### Updated "switch organizations" user experience in My Account.
-
-**Type:** Changed feature
-**Service category:** My Profile/Account
-**Product capability:** End User Experiences
-
-Updated "switch organizations" user interface in My Account. This visually improves the UI and provides the end-user with clear instructions. Added a manage organizations link to blade per customer feedback. [Learn more](https://support.microsoft.com/account-billing/switch-organizations-in-your-work-or-school-account-portals-c54c32c9-2f62-4fad-8c23-2825ed49d146).
-
----
-
diff --git a/articles/active-directory/identity-protection/overview-identity-protection.md b/articles/active-directory/identity-protection/overview-identity-protection.md
index bbfa4ae699945..fdee8be8e14d0 100644
--- a/articles/active-directory/identity-protection/overview-identity-protection.md
+++ b/articles/active-directory/identity-protection/overview-identity-protection.md
@@ -6,7 +6,7 @@ services: active-directory
ms.service: active-directory
ms.subservice: identity-protection
ms.topic: overview
-ms.date: 06/15/2021
+ms.date: 05/31/2022
ms.author: joflore
author: MicrosoftGuyJFlo
@@ -31,16 +31,12 @@ The signals generated by and fed to Identity Protection, can be further fed into
## Why is automation important?
-In his [blog post in October of 2018](https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/Eight-essentials-for-hybrid-identity-3-Securing-your-identity/ba-p/275843) Alex Weinert, who leads Microsoft's Identity Security and Protection team, explains why automation is so important when dealing with the volume of events:
+In the blog post *[Cyber Signals: Defending against cyber threats with the latest research, insights, and trends](https://www.microsoft.com/security/blog/2022/02/03/cyber-signals-defending-against-cyber-threats-with-the-latest-research-insights-and-trends/)* dated February 3, 2022, we shared a threat intelligence brief that included the following statistics:
-> Each day, our machine learning and heuristic systems provide risk scores for 18 billion login attempts for over 800 million distinct accounts, 300 million of which are discernibly done by adversaries (entities like: criminal actors, hackers).
->
-> At Ignite last year, I spoke about the top 3 attacks on our identity systems. Here is the recent volume of these attacks
->
-> - **Breach replay**: 4.6BN attacks detected in May 2018
-> - **Password spray**: 350k in April 2018
-> - **Phishing**: This is hard to quantify exactly, but we saw 23M risk events in March 2018, many of which are phish related
+> * Analyzed ...24 trillion security signals combined with intelligence we track by monitoring more than 40 nation-state groups and over 140 threat groups...
+> * ...From January 2021 through December 2021, we’ve blocked more than 25.6 billion Azure AD brute force authentication attacks...
+
+This scale of signals and attacks requires some level of automation to keep up.
## Risk detection and remediation
Identity Protection identifies risks of many types, including:
@@ -53,7 +49,7 @@ Identity Protection identifies risks of many types, including:
- Password spray
- and more...
-More detail on these and other risks including how or when they are calculated can be found in the article, [What is risk](concept-identity-protection-risks.md).
+More detail on these and other risks, including how or when they're calculated, can be found in the article, [What is risk](concept-identity-protection-risks.md).
The risk signals can trigger remediation efforts such as requiring users to: perform Azure AD Multi-Factor Authentication, reset their password using self-service password reset, or blocking until an administrator takes action.
@@ -69,9 +65,9 @@ More information can be found in the article, [How To: Investigate risk](howto-i
### Risk levels
-Identity Protection categorizes risk into three tiers: low, medium, and high.
+Identity Protection categorizes risk into tiers: low, medium, and high.
-While Microsoft does not provide specific details about how risk is calculated, we will say that each level brings higher confidence that the user or sign-in is compromised. For example, something like one instance of unfamiliar sign-in properties for a user might not be as threatening as leaked credentials for another user.
+While Microsoft doesn't provide specific details about how risk is calculated, we'll say that each level brings higher confidence that the user or sign-in is compromised. For example, something like one instance of unfamiliar sign-in properties for a user might not be as threatening as leaked credentials for another user.
## Exporting risk data
@@ -79,7 +75,7 @@ Data from Identity Protection can be exported to other tools for archive and fur
Information about integrating Identity Protection information with Microsoft Sentinel can be found in the article, [Connect data from Azure AD Identity Protection](../../sentinel/data-connectors-reference.md#azure-active-directory-identity-protection).
-Additionally, organizations can choose to store data for longer periods by changing diagnostic settings in Azure AD to send RiskyUsers and UserRiskEvents data to a Log Analytics workspace, archive data to a storage account, stream data to an Event Hub, or send data to a partner solution. Detailed information about how to do so can be found in the article, [How To: Export risk data](howto-export-risk-data.md).
+Additionally, organizations can choose to store data for longer periods by changing diagnostic settings in Azure AD to send RiskyUsers and UserRiskEvents data to a Log Analytics workspace, archive data to a storage account, stream data to Event Hubs, or send data to a partner solution. Detailed information about how to do so can be found in the article, [How To: Export risk data](howto-export-risk-data.md).
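+
+As a rough sketch, assuming the commonly used `microsoft.aadiam` diagnostic settings endpoint and the `RiskyUsers` and `UserRiskEvents` log categories, such an export might be configured with Azure PowerShell. Treat the resource path, API version, and category names as assumptions and verify them in the linked article:
+
+```powershell
+# Minimal sketch: create an Azure AD diagnostic setting that sends risk data
+# to a Log Analytics workspace. The endpoint path, api-version, and category
+# names below are assumptions; confirm them in "How To: Export risk data".
+Connect-AzAccount
+$body = @{
+    properties = @{
+        workspaceId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
+        logs = @(
+            @{ category = "RiskyUsers"; enabled = $true }
+            @{ category = "UserRiskEvents"; enabled = $true }
+        )
+    }
+} | ConvertTo-Json -Depth 5
+Invoke-AzRestMethod -Method PUT `
+    -Path "/providers/microsoft.aadiam/diagnosticSettings/ExportRiskData?api-version=2017-04-01" `
+    -Payload $body
+```
+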
## Permissions
@@ -92,7 +88,7 @@ Identity Protection requires users be a Security Reader, Security Operator, Secu
| Security operator | View all Identity Protection reports and Overview blade <br> Dismiss user risk, confirm safe sign-in, confirm compromise | Configure or change policies <br> Reset password for a user <br> Configure alerts |
| Security reader | View all Identity Protection reports and Overview blade | Configure or change policies <br> Reset password for a user <br> Configure alerts <br> Give feedback on detections |
-Currently, the security operator role cannot access the Risky sign-ins report.
+Currently, the security operator role can't access the Risky sign-ins report.
Conditional Access administrators can also create policies that factor in sign-in risk as a condition. Find more information in the article [Conditional Access: Conditions](../conditional-access/concept-conditional-access-conditions.md#sign-in-risk).
@@ -100,17 +96,17 @@ Conditional Access administrators can also create policies that factor in sign-i
[!INCLUDE [Active Directory P2 license](../../../includes/active-directory-p2-license.md)]
-| Capability | Details | Azure AD Free / Microsoft 365 Apps | Azure AD Premium P1|Azure AD Premium P2 |
+| Capability | Details | Azure AD Free / Microsoft 365 Apps | Azure AD Premium P1 | Azure AD Premium P2 |
| --- | --- | --- | --- | --- |
-| Risk policies | User risk policy (via Identity Protection) | No | No |Yes |
-| Risk policies | Sign-in risk policy (via Identity Protection or Conditional Access) | No | No |Yes |
-| Security reports | Overview | No | No |Yes |
-| Security reports | Risky users | Limited Information. Only users with medium and high risk are shown. No details drawer or risk history. | Limited Information. Only users with medium and high risk are shown. No details drawer or risk history. | Full access|
-| Security reports | Risky sign-ins | Limited Information. No risk detail or risk level is shown. | Limited Information. No risk detail or risk level is shown. | Full access|
-| Security reports | Risk detections | No | Limited Information. No details drawer.| Full access|
-| Notifications | Users at risk detected alerts | No | No |Yes |
-| Notifications | Weekly digest| No | No | Yes |
-| | MFA registration policy | No | No | Yes |
+| Risk policies | User risk policy (via Identity Protection) | No | No | Yes |
+| Risk policies | Sign-in risk policy (via Identity Protection or Conditional Access) | No | No | Yes |
+| Security reports | Overview | No | No | Yes |
+| Security reports | Risky users | Limited Information. Only users with medium and high risk are shown. No details drawer or risk history. | Limited Information. Only users with medium and high risk are shown. No details drawer or risk history. | Full access|
+| Security reports | Risky sign-ins | Limited Information. No risk detail or risk level is shown. | Limited Information. No risk detail or risk level is shown. | Full access |
+| Security reports | Risk detections | No | Limited Information. No details drawer.| Full access |
+| Notifications | Users at risk detected alerts | No | No | Yes |
+| Notifications | Weekly digest | No | No | Yes |
+| MFA registration policy | | No | No | Yes |
More information on these rich reports can be found in the article, [How To: Investigate risk](howto-identity-protection-investigate-risk.md#navigating-the-reports).
diff --git a/articles/active-directory/manage-apps/access-panel-collections.md b/articles/active-directory/manage-apps/access-panel-collections.md
index 64c4580d90b77..83e0c12b2bfa6 100644
--- a/articles/active-directory/manage-apps/access-panel-collections.md
+++ b/articles/active-directory/manage-apps/access-panel-collections.md
@@ -1,6 +1,6 @@
---
title: Create collections for My Apps portals
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Use My Apps collections to Customize My Apps pages for a simpler My Apps experience for your users. Organize applications into groups with separate tabs.
services: active-directory
author: lnalepa
diff --git a/articles/active-directory/manage-apps/add-application-portal-assign-users.md b/articles/active-directory/manage-apps/add-application-portal-assign-users.md
index 8830cbc6d10fd..08c49d92e4b1c 100644
--- a/articles/active-directory/manage-apps/add-application-portal-assign-users.md
+++ b/articles/active-directory/manage-apps/add-application-portal-assign-users.md
@@ -1,6 +1,6 @@
---
title: 'Quickstart: Create and assign a user account'
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Create a user account in your Azure Active Directory tenant and assign it to an application.
services: active-directory
author: omondiatieno
diff --git a/articles/active-directory/manage-apps/add-application-portal-configure.md b/articles/active-directory/manage-apps/add-application-portal-configure.md
index f2cd611b8499a..cca186af32e90 100644
--- a/articles/active-directory/manage-apps/add-application-portal-configure.md
+++ b/articles/active-directory/manage-apps/add-application-portal-configure.md
@@ -1,6 +1,6 @@
---
title: 'Configure enterprise application properties'
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Configure the properties of an enterprise application in Azure Active Directory.
services: active-directory
author: omondiatieno
diff --git a/articles/active-directory/manage-apps/add-application-portal-setup-oidc-sso.md b/articles/active-directory/manage-apps/add-application-portal-setup-oidc-sso.md
index ecaa16ef71010..64be79daa0284 100644
--- a/articles/active-directory/manage-apps/add-application-portal-setup-oidc-sso.md
+++ b/articles/active-directory/manage-apps/add-application-portal-setup-oidc-sso.md
@@ -1,7 +1,7 @@
---
title: 'Add an OpenID Connect-based single sign-on application'
description: Learn how to add OpenID Connect-based single sign-on application in Azure Active Directory.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: eringreenlee
manager: CelesteDG
diff --git a/articles/active-directory/manage-apps/add-application-portal-setup-sso.md b/articles/active-directory/manage-apps/add-application-portal-setup-sso.md
index 03b94240cfb18..a775e3419ae1b 100644
--- a/articles/active-directory/manage-apps/add-application-portal-setup-sso.md
+++ b/articles/active-directory/manage-apps/add-application-portal-setup-sso.md
@@ -1,6 +1,6 @@
---
title: 'Quickstart: Enable single sign-on for an enterprise application'
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Enable single sign-on for an enterprise application in Azure Active Directory.
services: active-directory
author: omondiatieno
diff --git a/articles/active-directory/manage-apps/add-application-portal.md b/articles/active-directory/manage-apps/add-application-portal.md
index 8e614917483c6..adbf0d3c43374 100644
--- a/articles/active-directory/manage-apps/add-application-portal.md
+++ b/articles/active-directory/manage-apps/add-application-portal.md
@@ -1,7 +1,7 @@
---
title: 'Quickstart: Add an enterprise application'
description: Add an enterprise application in Azure Active Directory.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: omondiatieno
manager: CelesteDG
diff --git a/articles/active-directory/manage-apps/admin-consent-workflow-faq.md b/articles/active-directory/manage-apps/admin-consent-workflow-faq.md
index 613d6c774b25b..8fd1b3c8da561 100644
--- a/articles/active-directory/manage-apps/admin-consent-workflow-faq.md
+++ b/articles/active-directory/manage-apps/admin-consent-workflow-faq.md
@@ -1,6 +1,6 @@
---
title: Frequently asked questions about the admin consent workflow
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Find answers to frequently asked questions (FAQs) about the admin consent workflow.
services: active-directory
author: eringreenlee
@@ -8,8 +8,8 @@ manager: CelesteDG
ms.service: active-directory
ms.subservice: app-mgmt
ms.workload: identity
-ms.topic: how-to
-ms.date: 11/17/2021
+ms.topic: reference
+ms.date: 05/27/2022
ms.author: ergreenl
ms.reviewer: ergreenl
ms.collection: M365-identity-device-management
diff --git a/articles/active-directory/manage-apps/admin-consent-workflow-overview.md b/articles/active-directory/manage-apps/admin-consent-workflow-overview.md
index 6cbe5226b4a65..e193ae1c8486b 100644
--- a/articles/active-directory/manage-apps/admin-consent-workflow-overview.md
+++ b/articles/active-directory/manage-apps/admin-consent-workflow-overview.md
@@ -1,6 +1,6 @@
---
title: Overview of admin consent workflow
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Learn about the admin consent workflow in Azure Active Directory
services: active-directory
author: eringreenlee
diff --git a/articles/active-directory/manage-apps/app-management-powershell-samples.md b/articles/active-directory/manage-apps/app-management-powershell-samples.md
index 3d467220ea59a..46dd97360935a 100644
--- a/articles/active-directory/manage-apps/app-management-powershell-samples.md
+++ b/articles/active-directory/manage-apps/app-management-powershell-samples.md
@@ -1,6 +1,6 @@
---
title: PowerShell samples in Application Management
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: These PowerShell samples are used for apps you manage in your Azure Active Directory tenant. You can use these sample scripts to find expiration information about secrets and certificates.
services: active-directory
author: davidmu1
diff --git a/articles/active-directory/manage-apps/app-management-videos.md b/articles/active-directory/manage-apps/app-management-videos.md
new file mode 100644
index 0000000000000..9a284e9d99a13
--- /dev/null
+++ b/articles/active-directory/manage-apps/app-management-videos.md
@@ -0,0 +1,104 @@
+---
+title: Application management videos
+description: A list of videos about app registrations, enterprise apps, consent and permissions, and app ownership and assignment in Azure AD
+services: active-directory
+author: omondiatieno
+manager: CelesteDG
+
+ms.service: active-directory
+ms.subservice: develop
+ms.topic: conceptual
+ms.workload: identity
+ms.date: 05/31/2022
+ms.author: jomondi
+ms.reviewer: celested
+---
+
+# Application management videos
+
+Learn about the key concepts of application management, such as app registrations versus enterprise apps, the consent and permissions framework, and app ownership and user assignment.
+
+## App registrations and Enterprise apps
+
+Learn about the different use cases and personas involved in App Registrations and Enterprise Apps, and how developers and admins interact with each option to manage applications in Azure AD.
+___
+
+:::row:::
+ :::column:::
+        [What is the difference between app registrations and enterprise apps?](https://www.youtube.com/watch?v=JeahL9ZtGfQ&list=PLlrxD0HtieHiBPIyUWkqVzoMrgfwKi4dY&index=4&t=2s) (2:01)
+ :::column-end:::
+ :::column:::
+ >[!Video https://www.youtube.com/embed/JeahL9ZtGfQ]
+ :::column-end:::
+:::row-end:::
+
+## Consent and permissions for admins
+
+Learn about the options available for managing consent to applications in a tenant. Learn about delegated permissions and how to revoke previously consented permissions to mitigate risks posed by malicious applications.
+___
+
+:::row:::
+ :::column:::
+      1 - [How do I turn on the admin consent workflow?](https://www.youtube.com/watch?v=19v7WSt9HwU&list=PLlrxD0HtieHiBPIyUWkqVzoMrgfwKi4dY&index=4) (1:04)
+ :::column-end:::
+ :::column:::
+ >[!Video https://www.youtube.com/embed/19v7WSt9HwU]
+ :::column-end:::
+ :::column:::
+      2 - [How do I grant admin consent in the Azure AD portal?](https://www.youtube.com/watch?v=LSYcelwdhHI&list=PLlrxD0HtieHiBPIyUWkqVzoMrgfwKi4dY&index=5) (1:19)
+ :::column-end:::
+ :::column:::
+ >[!Video https://www.youtube.com/embed/LSYcelwdhHI]
+ :::column-end:::
+:::row-end:::
+:::row:::
+ :::column:::
+      3 - [How do delegated permissions work?](https://www.youtube.com/watch?v=URTrOXCyH1s&list=PLlrxD0HtieHiBPIyUWkqVzoMrgfwKi4dY&index=7) (1:21)
+ :::column-end:::
+ :::column:::
+ >[!Video https://www.youtube.com/embed/URTrOXCyH1s]
+ :::column-end:::
+ :::column:::
+      4 - [How do I revoke permissions I've previously consented to for an app?](https://www.youtube.com/watch?v=A88uh7ICNJU&list=PLlrxD0HtieHiBPIyUWkqVzoMrgfwKi4dY&index=6) (1:34)
+ :::column-end:::
+ :::column:::
+ >[!Video https://www.youtube.com/embed/A88uh7ICNJU]
+ :::column-end:::
+:::row-end:::
+
+
+## Assigning owners and users to an enterprise app
+
+Learn about who can assign owners to service principals, how to assign these owners, the permissions that owners have, and what to do when an owner leaves the organization.
+Learn how to assign users and groups to an enterprise application, and how and why an enterprise app may show up in a tenant.
+___
+
+:::row:::
+ :::column:::
+      1 - [How can you ensure healthy ownership to manage your Azure AD app ecosystem?](https://www.youtube.com/watch?v=akOrP3mP4UQ&list=PLlrxD0HtieHiBPIyUWkqVzoMrgfwKi4dY&index=1) (2:13)
+ :::column-end:::
+ :::column:::
+ >[!Video https://www.youtube.com/embed/akOrP3mP4UQ]
+ :::column-end:::
+ :::column:::
+      2 - [How do I manage who can access the applications in my tenant?](https://www.youtube.com/watch?v=IVRI9mSPDBA&list=PLlrxD0HtieHiBPIyUWkqVzoMrgfwKi4dY&index=2) (1:48)
+ :::column-end:::
+ :::column:::
+ >[!Video https://www.youtube.com/embed/IVRI9mSPDBA]
+ :::column-end:::
+:::row-end:::
+:::row:::
+ :::column:::
+      3 - [Why is this app in my tenant?](https://www.youtube.com/watch?v=NhbcVt5xOVI&list=PLlrxD0HtieHiBPIyUWkqVzoMrgfwKi4dY&index=8) (1:36)
+ :::column-end:::
+ :::column:::
+ >[!Video https://www.youtube.com/embed/NhbcVt5xOVI]
+ :::column-end:::
+ :::column:::
+
+ :::column-end:::
+ :::column:::
+
+ :::column-end:::
+:::row-end:::
diff --git a/articles/active-directory/manage-apps/application-list.md b/articles/active-directory/manage-apps/application-list.md
index 1484a1921c64e..d9a7b27827b1d 100644
--- a/articles/active-directory/manage-apps/application-list.md
+++ b/articles/active-directory/manage-apps/application-list.md
@@ -1,6 +1,6 @@
---
title: Viewing apps using your tenant for identity management
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Understand how to view all applications using your Azure Active Directory tenant for identity management.
services: active-directory
author: AllisonAm
diff --git a/articles/active-directory/manage-apps/application-management-certs-faq.md b/articles/active-directory/manage-apps/application-management-certs-faq.md
index 15d1301b4911d..2352014340034 100644
--- a/articles/active-directory/manage-apps/application-management-certs-faq.md
+++ b/articles/active-directory/manage-apps/application-management-certs-faq.md
@@ -1,6 +1,6 @@
---
title: Application Management certificates frequently asked questions
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Learn answers to frequently asked questions (FAQ) about managing certificates for apps using Azure Active Directory as an Identity Provider (IdP).
services: active-directory
author: davidmu1
diff --git a/articles/active-directory/manage-apps/application-properties.md b/articles/active-directory/manage-apps/application-properties.md
index dd94c83522d5e..f7f59aa9e94cc 100644
--- a/articles/active-directory/manage-apps/application-properties.md
+++ b/articles/active-directory/manage-apps/application-properties.md
@@ -1,6 +1,6 @@
---
title: 'Properties of an enterprise application'
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Learn about the properties of an enterprise application in Azure Active Directory.
services: active-directory
author: eringreenlee
diff --git a/articles/active-directory/manage-apps/application-sign-in-other-problem-access-panel.md b/articles/active-directory/manage-apps/application-sign-in-other-problem-access-panel.md
index d069ce3b67504..cec7ef4501bb0 100644
--- a/articles/active-directory/manage-apps/application-sign-in-other-problem-access-panel.md
+++ b/articles/active-directory/manage-apps/application-sign-in-other-problem-access-panel.md
@@ -1,6 +1,6 @@
---
title: Troubleshoot problems signing in to an application from My Apps portal
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Troubleshoot problems signing in to an application from Azure AD My Apps
services: active-directory
author: lnalepa
diff --git a/articles/active-directory/manage-apps/application-sign-in-problem-application-error.md b/articles/active-directory/manage-apps/application-sign-in-problem-application-error.md
index 35cbcb21a70fe..469b2841a53c3 100644
--- a/articles/active-directory/manage-apps/application-sign-in-problem-application-error.md
+++ b/articles/active-directory/manage-apps/application-sign-in-problem-application-error.md
@@ -1,6 +1,6 @@
---
title: Error message appears on app page after you sign in
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: How to resolve issues with Azure AD sign in when the app returns an error message.
services: active-directory
author: eringreenlee
diff --git a/articles/active-directory/manage-apps/application-sign-in-problem-first-party-microsoft.md b/articles/active-directory/manage-apps/application-sign-in-problem-first-party-microsoft.md
index 693777d2e5cf1..d41e6befe41f6 100644
--- a/articles/active-directory/manage-apps/application-sign-in-problem-first-party-microsoft.md
+++ b/articles/active-directory/manage-apps/application-sign-in-problem-first-party-microsoft.md
@@ -1,6 +1,6 @@
---
title: Problems signing in to a Microsoft application
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Troubleshoot common problems faced when signing in to first-party Microsoft Applications using Azure AD (like Microsoft 365).
services: active-directory
author: AlAmaral
diff --git a/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-error.md b/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-error.md
index 22d508003a600..250738bbed4d8 100644
--- a/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-error.md
+++ b/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-error.md
@@ -1,6 +1,6 @@
---
title: Unexpected error when performing consent to an application
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Discusses errors that can occur during the process of consenting to an application and what you can do about them
services: active-directory
author: eringreenlee
diff --git a/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-prompt.md b/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-prompt.md
index 2c75c6b25a67d..4429fb633d826 100644
--- a/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-prompt.md
+++ b/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-prompt.md
@@ -1,6 +1,6 @@
---
title: Unexpected consent prompt when signing in to an application
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: How to troubleshoot when a user sees a consent prompt for an application you have integrated with Azure AD that you did not expect
services: active-directory
author: eringreenlee
diff --git a/articles/active-directory/manage-apps/assign-app-owners.md b/articles/active-directory/manage-apps/assign-app-owners.md
index edcc5fc53487e..04e2951b4ab2b 100644
--- a/articles/active-directory/manage-apps/assign-app-owners.md
+++ b/articles/active-directory/manage-apps/assign-app-owners.md
@@ -1,6 +1,6 @@
---
title: Assign enterprise application owners
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Learn how to assign owners to applications in Azure Active Directory
services: active-directory
documentationcenter: ''
diff --git a/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md b/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md
index 0f6b1b4673afb..af4314b5c5bc7 100644
--- a/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md
+++ b/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md
@@ -1,6 +1,6 @@
---
title: Assign users and groups
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Learn how to assign and unassign users, and groups, for an app using Azure Active Directory for identity management.
services: active-directory
author: eringreenlee
diff --git a/articles/active-directory/manage-apps/certificate-signing-options.md b/articles/active-directory/manage-apps/certificate-signing-options.md
index 9ec8ab2ffecf9..7223c0b406c4c 100644
--- a/articles/active-directory/manage-apps/certificate-signing-options.md
+++ b/articles/active-directory/manage-apps/certificate-signing-options.md
@@ -1,6 +1,6 @@
---
title: Advanced certificate signing options in a SAML token
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Learn how to use advanced certificate signing options in the SAML token for pre-integrated apps in Azure Active Directory
services: active-directory
author: davidmu1
diff --git a/articles/active-directory/manage-apps/cloud-app-security.md b/articles/active-directory/manage-apps/cloud-app-security.md
index 780568bd799a4..62ed643d35d88 100644
--- a/articles/active-directory/manage-apps/cloud-app-security.md
+++ b/articles/active-directory/manage-apps/cloud-app-security.md
@@ -1,6 +1,6 @@
---
title: App visibility and control with Microsoft Defender for Cloud Apps
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Learn ways to identify app risk levels, stop breaches and leaks in real time, and use app connectors to take advantage of provider APIs for visibility and governance.
services: active-directory
author: davidmu1
diff --git a/articles/active-directory/manage-apps/configure-admin-consent-workflow.md b/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
index dc2d010d2165f..6ec6a2918050c 100755
--- a/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
+++ b/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
@@ -1,6 +1,6 @@
---
title: Configure the admin consent workflow
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Learn how to configure a way for end users to request access to applications that require admin consent.
services: active-directory
author: eringreenlee
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: app-mgmt
ms.workload: identity
ms.topic: how-to
-ms.date: 03/22/2021
+ms.date: 05/27/2022
ms.author: ergreenl
ms.reviewer: davidmu
ms.collection: M365-identity-device-management
diff --git a/articles/active-directory/manage-apps/configure-authentication-for-federated-users-portal.md b/articles/active-directory/manage-apps/configure-authentication-for-federated-users-portal.md
index b91db52225358..1da59b523703d 100644
--- a/articles/active-directory/manage-apps/configure-authentication-for-federated-users-portal.md
+++ b/articles/active-directory/manage-apps/configure-authentication-for-federated-users-portal.md
@@ -1,6 +1,6 @@
---
title: Configure sign-in auto-acceleration using Home Realm Discovery
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Learn how to force federated IdP acceleration for an application using Home Realm Discovery policy.
services: active-directory
author: nickludwig
diff --git a/articles/active-directory/manage-apps/configure-linked-sign-on.md b/articles/active-directory/manage-apps/configure-linked-sign-on.md
index eb7a1149aa927..4df6c2bc3683c 100644
--- a/articles/active-directory/manage-apps/configure-linked-sign-on.md
+++ b/articles/active-directory/manage-apps/configure-linked-sign-on.md
@@ -1,7 +1,7 @@
---
title: Add linked single sign-on to an application
description: Add linked single sign-on to an application in Azure Active Directory.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: AllisonAm
manager: CelesteDG
diff --git a/articles/active-directory/manage-apps/configure-password-single-sign-on-non-gallery-applications.md b/articles/active-directory/manage-apps/configure-password-single-sign-on-non-gallery-applications.md
index edcf1d1ac6849..b8ae337b0cc06 100644
--- a/articles/active-directory/manage-apps/configure-password-single-sign-on-non-gallery-applications.md
+++ b/articles/active-directory/manage-apps/configure-password-single-sign-on-non-gallery-applications.md
@@ -1,7 +1,7 @@
---
title: Add password-based single sign-on to an application
description: Add password-based single sign-on to an application in Azure Active Directory.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: AllisonAm
manager: CelesteDG
diff --git a/articles/active-directory/manage-apps/configure-permission-classifications.md b/articles/active-directory/manage-apps/configure-permission-classifications.md
index 71f68c4f7552a..d650acc9a7fc3 100644
--- a/articles/active-directory/manage-apps/configure-permission-classifications.md
+++ b/articles/active-directory/manage-apps/configure-permission-classifications.md
@@ -1,6 +1,6 @@
---
title: Configure permission classifications
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Learn how to manage delegated permission classifications.
services: active-directory
author: jackson-woods
diff --git a/articles/active-directory/manage-apps/configure-risk-based-step-up-consent.md b/articles/active-directory/manage-apps/configure-risk-based-step-up-consent.md
index 3b77349bbdfa0..767748df8de83 100644
--- a/articles/active-directory/manage-apps/configure-risk-based-step-up-consent.md
+++ b/articles/active-directory/manage-apps/configure-risk-based-step-up-consent.md
@@ -1,6 +1,6 @@
---
title: Configure risk-based step-up consent
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Learn how to disable and enable risk-based step-up consent to reduce user exposure to malicious apps that make illicit consent requests.
services: active-directory
author: psignoret
diff --git a/articles/active-directory/manage-apps/configure-user-consent-groups.md b/articles/active-directory/manage-apps/configure-user-consent-groups.md
index 17b4604358a94..cc861f8365222 100644
--- a/articles/active-directory/manage-apps/configure-user-consent-groups.md
+++ b/articles/active-directory/manage-apps/configure-user-consent-groups.md
@@ -1,6 +1,6 @@
---
title: Configure group owner consent to apps accessing group data
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Learn manage whether group and team owners can consent to applications that will have access to the group or team's data.
services: active-directory
author: eringreenlee
diff --git a/articles/active-directory/manage-apps/configure-user-consent.md b/articles/active-directory/manage-apps/configure-user-consent.md
index 5bb33408f81d3..4edb36aecafe2 100755
--- a/articles/active-directory/manage-apps/configure-user-consent.md
+++ b/articles/active-directory/manage-apps/configure-user-consent.md
@@ -1,6 +1,6 @@
---
title: Configure how users consent to applications
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Learn how to manage how and when users can consent to applications that will have access to your organization's data.
services: active-directory
author: psignoret
diff --git a/articles/active-directory/manage-apps/consent-and-permissions-overview.md b/articles/active-directory/manage-apps/consent-and-permissions-overview.md
index 2a87abf1d5740..af0a0477b4569 100644
--- a/articles/active-directory/manage-apps/consent-and-permissions-overview.md
+++ b/articles/active-directory/manage-apps/consent-and-permissions-overview.md
@@ -1,6 +1,6 @@
---
title: Overview of consent and permissions
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Learn about the fundamental concepts of consent and permissions in Azure AD
services: active-directory
author: psignoret
diff --git a/articles/active-directory/manage-apps/datawiza-with-azure-ad.md b/articles/active-directory/manage-apps/datawiza-with-azure-ad.md
index 718404ef1aa4f..1bcaeb21e4a1f 100644
--- a/articles/active-directory/manage-apps/datawiza-with-azure-ad.md
+++ b/articles/active-directory/manage-apps/datawiza-with-azure-ad.md
@@ -1,6 +1,6 @@
---
title: Secure hybrid access with Datawiza
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Learn how to integrate Datawiza with Azure AD. See how to use Datawiza and Azure AD to authenticate users and give them access to on-premises and cloud apps.
services: active-directory
author: gargi-sinha
diff --git a/articles/active-directory/manage-apps/debug-saml-sso-issues.md b/articles/active-directory/manage-apps/debug-saml-sso-issues.md
index ab9a045fca08d..c16c3e209c5ee 100644
--- a/articles/active-directory/manage-apps/debug-saml-sso-issues.md
+++ b/articles/active-directory/manage-apps/debug-saml-sso-issues.md
@@ -1,6 +1,6 @@
---
title: Debug SAML-based single sign-on
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Debug SAML-based single sign-on to applications in Azure Active Directory.
services: active-directory
ms.author: alamaral
@@ -10,7 +10,7 @@ ms.service: active-directory
ms.subservice: app-mgmt
ms.topic: troubleshooting
ms.workload: identity
-ms.date: 02/18/2019
+ms.date: 05/27/2022
---
# Debug SAML-based single sign-on to applications
@@ -19,7 +19,7 @@ Learn how to find and fix [single sign-on](what-is-single-sign-on.md) issues for
## Before you begin
-We recommend installing the [My Apps Secure Sign-in Extension](https://support.microsoft.com/account-billing/troubleshoot-problems-with-the-my-apps-portal-d228da80-fcb7-479c-b960-a1e2535cbdff#im-having-trouble-installing-the-my-apps-secure-sign-in-extension). This browser extension makes it easy to gather the SAML request and SAML response information that you need to resolving issues with single sign-on. In case you cannot install the extension, this article shows you how to resolve issues both with and without the extension installed.
+We recommend installing the [My Apps Secure Sign-in Extension](https://support.microsoft.com/account-billing/troubleshoot-problems-with-the-my-apps-portal-d228da80-fcb7-479c-b960-a1e2535cbdff#im-having-trouble-installing-the-my-apps-secure-sign-in-extension). This browser extension makes it easy to gather the SAML request and SAML response information that you need to resolve issues with single sign-on. If you can't install the extension, this article shows you how to resolve issues both with and without it installed.
To download and install the My Apps Secure Sign-in Extension, use one of the following links.
@@ -38,7 +38,7 @@ To test SAML-based single sign-on between Azure AD and a target application:
![Screenshot showing the test SAML SSO page](./media/debug-saml-sso-issues/test-single-sign-on.png)
-If you are successfully signed in, the test has passed. In this case, Azure AD issued a SAML response token to the application. The application used the SAML token to successfully sign you in.
+If you're successfully signed in, the test has passed. In this case, Azure AD issued a SAML response token to the application. The application used the SAML token to successfully sign you in.
If you have an error on the company sign-in page or the application's page, use one of the next sections to resolve the error.
@@ -55,7 +55,7 @@ To debug this error, you need the error message and the SAML request. The My App
1. When an error occurs, the extension redirects you back to the Azure AD **Test single sign-on** blade.
1. On the **Test single sign-on** blade, select **Download the SAML request**.
1. You should see specific resolution guidance based on the error and the values in the SAML request.
-1. You will see a **Fix it** button to automatically update the configuration in Azure AD to resolve the issue. If you don't see this button, then the sign-in issue is not due to a misconfiguration on Azure AD.
+1. You'll see a **Fix it** button to automatically update the configuration in Azure AD to resolve the issue. If you don't see this button, then the sign-in issue isn't due to a misconfiguration on Azure AD.
If no resolution is provided for the sign-in error, we suggest that you use the feedback textbox to inform us.
@@ -66,21 +66,21 @@ If no resolution is provided for the sign-in error, we suggest that you use the
- A statement identifying the root cause of the problem.
1. Go back to Azure AD and find the **Test single sign-on** blade.
1. In the text box above **Get resolution guidance**, paste the error message.
-1. Click **Get resolution guidance** to display steps for resolving the issue. The guidance might require information from the SAML request or SAML response. If you're not using the My Apps Secure Sign-in Extension, you might need a tool such as [Fiddler](https://www.telerik.com/fiddler) to retrieve the SAML request and response.
-1. Verify that the destination in the SAML request corresponds to the SAML Single Sign-On Service URL obtained from Azure AD.
-1. Verify the issuer in the SAML request is the same identifier you have configured for the application in Azure AD. Azure AD uses the issuer to find an application in your directory.
+1. Select **Get resolution guidance** to display steps for resolving the issue. The guidance might require information from the SAML request or SAML response. If you're not using the My Apps Secure Sign-in Extension, you might need a tool such as [Fiddler](https://www.telerik.com/fiddler) to retrieve the SAML request and response.
+1. Verify that the destination in the SAML request corresponds to the SAML Single Sign-on Service URL obtained from Azure AD.
+1. Verify the issuer in the SAML request is the same identifier you've configured for the application in Azure AD. Azure AD uses the issuer to find an application in your directory.
1. Verify AssertionConsumerServiceURL is where the application expects to receive the SAML token from Azure AD. You can configure this value in Azure AD, but it's not mandatory if it's part of the SAML request. The annotated example request below shows where each of these values appears.
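+
+For reference, here's a minimal, hypothetical SAML request that shows where each of these values appears. Every URL and identifier in it is a placeholder for illustration only; compare the corresponding values in the request you downloaded.
+
+```xml
+<!-- Hypothetical example; all URLs and IDs are placeholders. -->
+<samlp:AuthnRequest
+    xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
+    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
+    ID="_abc123" Version="2.0" IssueInstant="2022-05-27T12:00:00Z"
+    Destination="https://login.microsoftonline.com/{tenant-id}/saml2"
+    AssertionConsumerServiceURL="https://contoso.example.com/saml/acs">
+  <!-- Destination must match the SAML Single Sign-on Service URL from Azure AD. -->
+  <!-- AssertionConsumerServiceURL is where the application receives the token. -->
+  <saml:Issuer>https://contoso.example.com/saml/metadata</saml:Issuer>
+  <!-- The Issuer must match the identifier configured for the application in Azure AD. -->
+</samlp:AuthnRequest>
+```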
## Resolve a sign-in error on the application page
-You might sign in successfully and then see an error on the application's page. This occurs when Azure AD issued a token to the application, but the application does not accept the response.
+You might sign in successfully and then see an error on the application's page. This occurs when Azure AD issued a token to the application, but the application doesn't accept the response.
To resolve the error, follow these steps, or watch this [short video about how to use Azure AD to troubleshoot SAML SSO](https://www.youtube.com/watch?v=poQCJK0WPUk&list=PLLasX02E8BPBm1xNMRdvP6GtA6otQUqp0&index=8):
1. If the application is in the Azure AD Gallery, verify that you've followed all the steps for integrating the application with Azure AD. To find the integration instructions for your application, see the [list of SaaS application integration tutorials](../saas-apps/tutorial-list.md).
1. Retrieve the SAML response.
- - If the My Apps Secure Sign-in extension is installed, from the **Test single sign-on** blade, click **download the SAML response**.
- - If the extension is not installed, use a tool such as [Fiddler](https://www.telerik.com/fiddler) to retrieve the SAML response.
+ - If the My Apps Secure Sign-in extension is installed, from the **Test single sign-on** blade, select **download the SAML response**.
+ - If the extension isn't installed, use a tool such as [Fiddler](https://www.telerik.com/fiddler) to retrieve the SAML response.
1. Notice these elements in the SAML response token:
- User unique identifier of NameID value and format
- Claims issued in the token
@@ -88,7 +88,7 @@ To resolve the error, follow these steps, or watch this [short video about how t
For more information on the SAML response, see [Single Sign-on SAML protocol](../develop/single-sign-on-saml-protocol.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json).
-1. Now that you have reviewed the SAML response, see [Error on an application's page after signing in](application-sign-in-problem-application-error.md) for guidance on how to resolve the problem.
+1. Now that you've reviewed the SAML response, see [Error on an application's page after signing in](application-sign-in-problem-application-error.md) for guidance on how to resolve the problem.
1. If you're still not able to sign in successfully, you can ask the application vendor what is missing from the SAML response.
## Next steps
diff --git a/articles/active-directory/manage-apps/delete-application-portal.md b/articles/active-directory/manage-apps/delete-application-portal.md
index bbadfefbba94a..4079e347ab9f3 100644
--- a/articles/active-directory/manage-apps/delete-application-portal.md
+++ b/articles/active-directory/manage-apps/delete-application-portal.md
@@ -1,7 +1,7 @@
---
title: 'Quickstart: Delete an enterprise application'
description: Delete an enterprise application in Azure Active Directory.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: davidmu1
manager: CelesteDG
diff --git a/articles/active-directory/manage-apps/disable-user-sign-in-portal.md b/articles/active-directory/manage-apps/disable-user-sign-in-portal.md
index a436dc709cf1b..1519243bdd815 100644
--- a/articles/active-directory/manage-apps/disable-user-sign-in-portal.md
+++ b/articles/active-directory/manage-apps/disable-user-sign-in-portal.md
@@ -1,6 +1,6 @@
---
title: Disable how a user signs in
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: How to disable an enterprise application so that no users may sign in to it in Azure Active Directory
services: active-directory
author: eringreenlee
diff --git a/articles/active-directory/manage-apps/end-user-experiences.md b/articles/active-directory/manage-apps/end-user-experiences.md
index 01a6839979ed0..08281c1cd366c 100644
--- a/articles/active-directory/manage-apps/end-user-experiences.md
+++ b/articles/active-directory/manage-apps/end-user-experiences.md
@@ -1,6 +1,6 @@
---
title: End-user experiences for applications
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Azure Active Directory (Azure AD) provides several customizable ways to deploy applications to end users in your organization.
services: active-directory
author: lnalepa
diff --git a/articles/active-directory/manage-apps/f5-aad-integration.md b/articles/active-directory/manage-apps/f5-aad-integration.md
index ee96472fa0926..1455f6c5591a3 100644
--- a/articles/active-directory/manage-apps/f5-aad-integration.md
+++ b/articles/active-directory/manage-apps/f5-aad-integration.md
@@ -1,6 +1,6 @@
---
title: Secure hybrid access with F5
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: F5 BIG-IP Access Policy Manager and Azure Active Directory integration for Secure Hybrid Access
author: gargi-sinha
manager: martinco
diff --git a/articles/active-directory/manage-apps/f5-aad-password-less-vpn.md b/articles/active-directory/manage-apps/f5-aad-password-less-vpn.md
index fc997fb758f19..75138c4434b99 100644
--- a/articles/active-directory/manage-apps/f5-aad-password-less-vpn.md
+++ b/articles/active-directory/manage-apps/f5-aad-password-less-vpn.md
@@ -1,6 +1,6 @@
---
title: Configure F5 BIG-IP SSL-VPN solution in Azure AD
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Tutorial to configure F5’s BIG-IP based Secure Sockets Layer virtual private network (SSL-VPN) solution with Azure Active Directory (AD) for Secure Hybrid Access (SHA)
services: active-directory
author: gargi-sinha
diff --git a/articles/active-directory/manage-apps/f5-bigip-deployment-guide.md b/articles/active-directory/manage-apps/f5-bigip-deployment-guide.md
index 0003a1308448c..cbbde00cbbfb6 100644
--- a/articles/active-directory/manage-apps/f5-bigip-deployment-guide.md
+++ b/articles/active-directory/manage-apps/f5-bigip-deployment-guide.md
@@ -1,6 +1,6 @@
---
title: Secure hybrid access with F5 deployment guide
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Tutorial to deploy F5 BIG-IP Virtual Edition (VE) VM in Azure IaaS for Secure hybrid access
services: active-directory
author: gargi-sinha
diff --git a/articles/active-directory/manage-apps/grant-admin-consent.md b/articles/active-directory/manage-apps/grant-admin-consent.md
index 236cf95825f4c..425401969bde2 100755
--- a/articles/active-directory/manage-apps/grant-admin-consent.md
+++ b/articles/active-directory/manage-apps/grant-admin-consent.md
@@ -1,6 +1,6 @@
---
title: Grant tenant-wide admin consent to an application
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Learn how to grant tenant-wide consent to an application so that end-users are not prompted for consent when signing in to an application.
services: active-directory
author: eringreenlee
diff --git a/articles/active-directory/manage-apps/grant-consent-single-user.md b/articles/active-directory/manage-apps/grant-consent-single-user.md
index dd5819567b866..cfc51682e32f9 100644
--- a/articles/active-directory/manage-apps/grant-consent-single-user.md
+++ b/articles/active-directory/manage-apps/grant-consent-single-user.md
@@ -1,7 +1,7 @@
---
title: Grant consent on behalf of a single user
description: Learn how to grant consent on behalf of a single user when user consent is disabled or restricted.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: psignoret
manager: CelesteDG
diff --git a/articles/active-directory/manage-apps/hide-application-from-user-portal.md b/articles/active-directory/manage-apps/hide-application-from-user-portal.md
index f408cfdf26f4e..c6b1b2f688deb 100644
--- a/articles/active-directory/manage-apps/hide-application-from-user-portal.md
+++ b/articles/active-directory/manage-apps/hide-application-from-user-portal.md
@@ -1,6 +1,6 @@
---
title: Hide an Enterprise application
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: How to hide an Enterprise application from the user's experience in Azure Active Directory access portals or Microsoft 365 launchers.
services: active-directory
author: lnalepa
diff --git a/articles/active-directory/manage-apps/home-realm-discovery-policy.md b/articles/active-directory/manage-apps/home-realm-discovery-policy.md
index 68f5c87fcceca..315333d0aa20d 100644
--- a/articles/active-directory/manage-apps/home-realm-discovery-policy.md
+++ b/articles/active-directory/manage-apps/home-realm-discovery-policy.md
@@ -1,6 +1,6 @@
---
title: Home Realm Discovery policy
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Learn how to manage Home Realm Discovery policy for Azure Active Directory authentication for federated users, including auto-acceleration and domain hints.
services: active-directory
author: nickludwig
diff --git a/articles/active-directory/manage-apps/howto-saml-token-encryption.md b/articles/active-directory/manage-apps/howto-saml-token-encryption.md
index bf5c8b391ce4f..aa3c275f8241e 100644
--- a/articles/active-directory/manage-apps/howto-saml-token-encryption.md
+++ b/articles/active-directory/manage-apps/howto-saml-token-encryption.md
@@ -1,7 +1,7 @@
---
title: SAML token encryption
description: Learn how to configure Azure Active Directory SAML token encryption.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: AllisonAm
manager: CelesteDG
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: app-mgmt
ms.workload: identity
ms.topic: conceptual
-ms.date: 03/13/2020
+ms.date: 05/27/2022
ms.author: alamaral
ms.collection: M365-identity-device-management
---
@@ -134,7 +134,7 @@ When you configure a keyCredential using Graph, PowerShell, or in the applicatio
1. From the Azure portal, go to **Azure Active Directory > App registrations**.
-1. Select **All apps** from the dropdown to show all apps, and then select the enterprise application that you want to configure.
+1. Select the **All apps** tab to show all apps, and then select the application that you want to configure.
1. In the application's page, select **Manifest** to edit the [application manifest](../develop/reference-app-manifest.md).
diff --git a/articles/active-directory/manage-apps/manage-app-consent-policies.md b/articles/active-directory/manage-apps/manage-app-consent-policies.md
index 4f43ff8ac54d2..560da475cb029 100644
--- a/articles/active-directory/manage-apps/manage-app-consent-policies.md
+++ b/articles/active-directory/manage-apps/manage-app-consent-policies.md
@@ -1,7 +1,7 @@
---
title: Manage app consent policies
description: Learn how to manage built-in and custom app consent policies to control when consent can be granted.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: psignoret
ms.service: active-directory
diff --git a/articles/active-directory/manage-apps/manage-application-permissions.md b/articles/active-directory/manage-apps/manage-application-permissions.md
index 862ee0d43ae25..709076f64bb56 100644
--- a/articles/active-directory/manage-apps/manage-application-permissions.md
+++ b/articles/active-directory/manage-apps/manage-application-permissions.md
@@ -1,6 +1,6 @@
---
title: Review permissions granted to applications
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Learn how to review and manage permissions for an application in Azure Active Directory.
services: active-directory
author: Jackson-Woods
diff --git a/articles/active-directory/manage-apps/manage-consent-requests.md b/articles/active-directory/manage-apps/manage-consent-requests.md
index 81b3dab0f4dd4..f3fb766b187df 100755
--- a/articles/active-directory/manage-apps/manage-consent-requests.md
+++ b/articles/active-directory/manage-apps/manage-consent-requests.md
@@ -1,7 +1,7 @@
---
title: Manage consent to applications and evaluate consent requests
description: Learn how to manage consent requests when user consent is disabled or restricted, and how to evaluate a request for tenant-wide admin consent to an application in Azure Active Directory.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: psignoret
manager: CelesteDG
diff --git a/articles/active-directory/manage-apps/manage-self-service-access.md b/articles/active-directory/manage-apps/manage-self-service-access.md
index 29f9238f1dadb..113b1938c294e 100644
--- a/articles/active-directory/manage-apps/manage-self-service-access.md
+++ b/articles/active-directory/manage-apps/manage-self-service-access.md
@@ -1,6 +1,6 @@
---
title: How to enable self-service application assignment
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Enable self-service application access to allow users to find their own applications from their My Apps portal
services: active-directory
author: omondiatieno
diff --git a/articles/active-directory/manage-apps/media/configure-admin-consent-workflow/review-consent-requests.png b/articles/active-directory/manage-apps/media/configure-admin-consent-workflow/review-consent-requests.png
new file mode 100644
index 0000000000000..ba24bf8533aa1
Binary files /dev/null and b/articles/active-directory/manage-apps/media/configure-admin-consent-workflow/review-consent-requests.png differ
diff --git a/articles/active-directory/manage-apps/migrate-adfs-application-activity.md b/articles/active-directory/manage-apps/migrate-adfs-application-activity.md
index ee4e2859e10da..862b0d79ffd01 100644
--- a/articles/active-directory/manage-apps/migrate-adfs-application-activity.md
+++ b/articles/active-directory/manage-apps/migrate-adfs-application-activity.md
@@ -1,7 +1,7 @@
---
title: Use the activity report to move AD FS apps to Azure Active Directory
description: The Active Directory Federation Services (AD FS) application activity report lets you quickly migrate applications from AD FS to Azure Active Directory (Azure AD). This migration tool for AD FS identifies compatibility with Azure AD and gives migration guidance.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: omondiatieno
manager: CelesteDG
diff --git a/articles/active-directory/manage-apps/migrate-adfs-apps-to-azure.md b/articles/active-directory/manage-apps/migrate-adfs-apps-to-azure.md
index 5a388d891e1f4..b25167d8890cc 100644
--- a/articles/active-directory/manage-apps/migrate-adfs-apps-to-azure.md
+++ b/articles/active-directory/manage-apps/migrate-adfs-apps-to-azure.md
@@ -1,7 +1,7 @@
---
title: Moving application authentication from AD FS to Azure Active Directory
description: Learn how to use Azure Active Directory to replace Active Directory Federation Services (AD FS), giving users single sign-on to all their applications.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: omondiatieno
manager: CelesteDG
diff --git a/articles/active-directory/manage-apps/migration-resources.md b/articles/active-directory/manage-apps/migration-resources.md
index a7a0441ebc8cb..0b116d89cee7f 100644
--- a/articles/active-directory/manage-apps/migration-resources.md
+++ b/articles/active-directory/manage-apps/migration-resources.md
@@ -1,7 +1,7 @@
---
title: Resources for migrating apps to Azure Active Directory
description: Resources to help you migrate application access and authentication to Azure Active Directory (Azure AD).
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: omondiatieno
manager: CelesteDG
diff --git a/articles/active-directory/manage-apps/myapps-overview.md b/articles/active-directory/manage-apps/myapps-overview.md
index daec7973e0f55..1b7cb6fd3938e 100644
--- a/articles/active-directory/manage-apps/myapps-overview.md
+++ b/articles/active-directory/manage-apps/myapps-overview.md
@@ -1,7 +1,7 @@
---
title: My Apps portal overview
description: Learn about how to manage applications in the My Apps portal.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: saipradeepb23
manager: CelesteDG
diff --git a/articles/active-directory/manage-apps/one-click-sso-tutorial.md b/articles/active-directory/manage-apps/one-click-sso-tutorial.md
index f3d059010028b..f752e0c95455b 100644
--- a/articles/active-directory/manage-apps/one-click-sso-tutorial.md
+++ b/articles/active-directory/manage-apps/one-click-sso-tutorial.md
@@ -1,7 +1,7 @@
---
title: One-click single sign-on (SSO) configuration of your Azure Marketplace application
description: Steps for one-click configuration of SSO for your application from the Azure Marketplace.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: AllisonAm
manager: CelesteDG
diff --git a/articles/active-directory/manage-apps/overview-application-gallery.md b/articles/active-directory/manage-apps/overview-application-gallery.md
index 686fbade09537..2d53968323245 100644
--- a/articles/active-directory/manage-apps/overview-application-gallery.md
+++ b/articles/active-directory/manage-apps/overview-application-gallery.md
@@ -1,7 +1,7 @@
---
title: Overview of the Azure Active Directory application gallery
description: An overview of using the Azure Active Directory application gallery.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: eringreenlee
manager: CelesteDG
diff --git a/articles/active-directory/manage-apps/overview-assign-app-owners.md b/articles/active-directory/manage-apps/overview-assign-app-owners.md
index 3fc082a5ead7e..f540f99bcf1ab 100644
--- a/articles/active-directory/manage-apps/overview-assign-app-owners.md
+++ b/articles/active-directory/manage-apps/overview-assign-app-owners.md
@@ -1,6 +1,6 @@
---
title: Overview of enterprise application ownership
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Learn about enterprise application ownership in Azure Active Directory
services: active-directory
author: saipradeepb23
diff --git a/articles/active-directory/manage-apps/plan-an-application-integration.md b/articles/active-directory/manage-apps/plan-an-application-integration.md
index a23ad1a62ad17..14e1db27043ab 100644
--- a/articles/active-directory/manage-apps/plan-an-application-integration.md
+++ b/articles/active-directory/manage-apps/plan-an-application-integration.md
@@ -2,7 +2,7 @@
title: Get started integrating Azure Active Directory with apps
description: This article is a getting started guide for integrating Azure Active Directory (AD) with on-premises applications, and cloud applications.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: eringreenlee
manager: CelesteDG
diff --git a/articles/active-directory/manage-apps/plan-sso-deployment.md b/articles/active-directory/manage-apps/plan-sso-deployment.md
index 81b677c5f8fa3..852de9f962978 100644
--- a/articles/active-directory/manage-apps/plan-sso-deployment.md
+++ b/articles/active-directory/manage-apps/plan-sso-deployment.md
@@ -1,7 +1,7 @@
---
title: Plan a single sign-on deployment
description: Plan the deployment of single sign-on in Azure Active Directory.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: AllisonAm
manager: CelesteDG
diff --git a/articles/active-directory/manage-apps/prevent-domain-hints-with-home-realm-discovery.md b/articles/active-directory/manage-apps/prevent-domain-hints-with-home-realm-discovery.md
index 8f8cf7a4ae3e6..e6f5dff39d100 100644
--- a/articles/active-directory/manage-apps/prevent-domain-hints-with-home-realm-discovery.md
+++ b/articles/active-directory/manage-apps/prevent-domain-hints-with-home-realm-discovery.md
@@ -1,6 +1,6 @@
---
title: Prevent sign-in auto-acceleration using Home Realm Discovery policy
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Learn how to prevent domain_hint auto-acceleration to federated IDPs.
services: active-directory
author: nickludwig
diff --git a/articles/active-directory/manage-apps/protect-against-consent-phishing.md b/articles/active-directory/manage-apps/protect-against-consent-phishing.md
index f2b5e0b6b7390..d04cf24bcb468 100644
--- a/articles/active-directory/manage-apps/protect-against-consent-phishing.md
+++ b/articles/active-directory/manage-apps/protect-against-consent-phishing.md
@@ -1,6 +1,6 @@
---
title: Protecting against consent phishing
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Learn ways to mitigate app-based consent phishing attacks by using Azure AD.
services: active-directory
author: Chrispine-Chiedo
diff --git a/articles/active-directory/manage-apps/review-admin-consent-requests.md b/articles/active-directory/manage-apps/review-admin-consent-requests.md
index b6a1935f72d25..6127d4bc7dfcc 100644
--- a/articles/active-directory/manage-apps/review-admin-consent-requests.md
+++ b/articles/active-directory/manage-apps/review-admin-consent-requests.md
@@ -1,6 +1,6 @@
---
title: Review and take action on admin consent requests
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Learn how to review and take action on admin consent requests that were created after you were designated as a reviewer.
services: active-directory
author: eringreenlee
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: app-mgmt
ms.workload: identity
ms.topic: how-to
-ms.date: 03/22/2021
+ms.date: 05/27/2022
ms.author: ergreenl
ms.reviewer: ergreenl
@@ -18,7 +18,7 @@ ms.reviewer: ergreenl
---
# Review admin consent requests
-In this article, you learn how to review and take action on admin consent requests. To review and act on consent requests, you must be designated as a reviewer. As a reviewer, you only see admin consent requests that were created after you were designated as a reviewer.
+In this article, you learn how to review and take action on admin consent requests. To review and act on consent requests, you must be designated as a reviewer. As a reviewer, you can view all admin consent requests, but you can act only on requests that were created after you were designated as a reviewer.
## Prerequisites
@@ -36,12 +36,20 @@ To review the admin consent requests and take action:
1. In the filter search box, type and select **Azure Active Directory**.
1. From the navigation menu, select **Enterprise applications**.
1. Under **Activity**, select **Admin consent requests**.
-1. Select the application that is being requested.
-1. Review details about the request:
+1. Select the **My Pending** tab to view and act on the pending requests. (You can also list pending requests programmatically, as shown after these steps.)
+1. From the list, select the application that is being requested.
+1. Review details about the request:
+ - To view the application details, select the **App details** tab.
- To see who is requesting access and why, select the **Requested by** tab.
- To see what permissions are being requested by the application, select **Review permissions and consent**.
+ :::image type="content" source="media/configure-admin-consent-workflow/review-consent-requests.png" alt-text="Screenshot of the admin consent requests in the portal.":::
+
1. Evaluate the request and take the appropriate action:
- **Approve the request**. To approve a request, grant admin consent to the application. Once a request is approved, all requestors are notified that they have been granted access. Approving a request allows all users in your tenant to access the application unless otherwise restricted with user assignment.
- - **Deny the request**. To deny a request, you must provide a justification that will be provided to all requestors. Once a request is denied, all requestors are notified that they have been denied access to the application. Denying a request won't prevent users from requesting admin consent to the app again in the future.
+ - **Deny the request**. To deny a request, you must provide a justification that will be provided to all requestors. Once a request is denied, all requestors are notified that they have been denied access to the application. Denying a request won't prevent users from requesting admin consent to the application again in the future.
- **Block the request**. To block a request, you must provide a justification that will be provided to all requestors. Once a request is blocked, all requestors are notified they've been denied access to the application. Blocking a request creates a service principal object for the application in your tenant in a disabled state. Users won't be able to request admin consent to the application in the future.
+
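+You can also retrieve pending admin consent requests programmatically through the Microsoft Graph `appConsentRequests` API. The following is a minimal sketch using `curl`; it assumes you've already acquired an access token (stored here in a hypothetical `TOKEN` variable) with the `ConsentRequest.Read.All` permission.
+
+```bash
+# Sketch: list admin consent requests via Microsoft Graph.
+# TOKEN is assumed to hold a valid access token with ConsentRequest.Read.All.
+curl -s -H "Authorization: Bearer $TOKEN" \
+  "https://graph.microsoft.com/v1.0/identityGovernance/appConsent/appConsentRequests"
+```
+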
+## Next steps
+- [Review permissions granted to apps](manage-application-permissions.md)
+- [Grant tenant-wide admin consent](grant-admin-consent.md)
diff --git a/articles/active-directory/manage-apps/secure-hybrid-access-integrations.md b/articles/active-directory/manage-apps/secure-hybrid-access-integrations.md
index a8240cafc6050..4d90590d632dd 100644
--- a/articles/active-directory/manage-apps/secure-hybrid-access-integrations.md
+++ b/articles/active-directory/manage-apps/secure-hybrid-access-integrations.md
@@ -1,7 +1,7 @@
---
title: Secure hybrid access with Azure AD partner integration
description: Help customers discover and migrate SaaS applications into Azure AD and connect apps that use legacy authentication methods with Azure AD.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: gargi-sinha
manager: martinco
diff --git a/articles/active-directory/manage-apps/secure-hybrid-access.md b/articles/active-directory/manage-apps/secure-hybrid-access.md
index 041d8cd216639..6a081ff3f8625 100644
--- a/articles/active-directory/manage-apps/secure-hybrid-access.md
+++ b/articles/active-directory/manage-apps/secure-hybrid-access.md
@@ -1,7 +1,7 @@
---
title: Secure hybrid access
description: This article describes partner solutions for integrating your legacy on-premises, public cloud, or private cloud applications with Azure AD.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: gargi-sinha
manager: martinco
diff --git a/articles/active-directory/manage-apps/silverfort-azure-ad-integration.md b/articles/active-directory/manage-apps/silverfort-azure-ad-integration.md
index 278617e72e30a..b52587408cff5 100644
--- a/articles/active-directory/manage-apps/silverfort-azure-ad-integration.md
+++ b/articles/active-directory/manage-apps/silverfort-azure-ad-integration.md
@@ -1,7 +1,7 @@
---
title: Secure hybrid access with Azure AD and Silverfort
description: In this tutorial, learn how to integrate Silverfort with Azure AD for secure hybrid access
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: gargi-sinha
manager: martinco
diff --git a/articles/active-directory/manage-apps/tenant-restrictions.md b/articles/active-directory/manage-apps/tenant-restrictions.md
index e3746717cd6cc..e826c52fadffb 100644
--- a/articles/active-directory/manage-apps/tenant-restrictions.md
+++ b/articles/active-directory/manage-apps/tenant-restrictions.md
@@ -1,7 +1,7 @@
---
title: Use tenant restrictions to manage access to SaaS apps
description: How to use tenant restrictions to manage which users can access apps based on their Azure AD tenant.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
author: vimrang
manager: CelesteDG
ms.service: active-directory
diff --git a/articles/active-directory/manage-apps/toc.yml b/articles/active-directory/manage-apps/toc.yml
index 6a1f8f96037b6..95f0f819cb7e9 100644
--- a/articles/active-directory/manage-apps/toc.yml
+++ b/articles/active-directory/manage-apps/toc.yml
@@ -246,6 +246,8 @@
href: methods-for-removing-user-access.md
- name: Resources
items:
+ - name: Video learning
+ href: app-management-videos.md
- name: Support and help options for developers
href: ../develop/developer-support-help-options.md?context=%2fazure%2factive-directory%2fmanage-apps%2fcontext%2fmanage-apps-context
- name: Azure feedback forum
diff --git a/articles/active-directory/manage-apps/troubleshoot-app-publishing.md b/articles/active-directory/manage-apps/troubleshoot-app-publishing.md
index dfda9ea42def7..2de92d99abbb0 100644
--- a/articles/active-directory/manage-apps/troubleshoot-app-publishing.md
+++ b/articles/active-directory/manage-apps/troubleshoot-app-publishing.md
@@ -1,7 +1,7 @@
---
title: Your sign-in was blocked
description: Troubleshoot a blocked sign-in to the Microsoft Application Network portal.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: davidmu1
manager: CelesteDG
diff --git a/articles/active-directory/manage-apps/troubleshoot-password-based-sso.md b/articles/active-directory/manage-apps/troubleshoot-password-based-sso.md
index cf63c9046f518..b7d1ba655c347 100644
--- a/articles/active-directory/manage-apps/troubleshoot-password-based-sso.md
+++ b/articles/active-directory/manage-apps/troubleshoot-password-based-sso.md
@@ -1,7 +1,7 @@
---
title: Troubleshoot password-based single sign-on
description: Troubleshoot issues with an Azure AD app that's configured for password-based single sign-on.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
author: AllisonAm
manager: CelesteDG
ms.service: active-directory
diff --git a/articles/active-directory/manage-apps/troubleshoot-saml-based-sso.md b/articles/active-directory/manage-apps/troubleshoot-saml-based-sso.md
index fb7f73c417118..a5e05275e4144 100644
--- a/articles/active-directory/manage-apps/troubleshoot-saml-based-sso.md
+++ b/articles/active-directory/manage-apps/troubleshoot-saml-based-sso.md
@@ -1,7 +1,7 @@
---
title: Troubleshoot SAML-based single sign-on
description: Troubleshoot issues with an Azure AD app that's configured for SAML-based single sign-on.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: AllisonAm
manager: CelesteDG
diff --git a/articles/active-directory/manage-apps/tutorial-govern-monitor.md b/articles/active-directory/manage-apps/tutorial-govern-monitor.md
index b760d1a4c28ee..f7a3191464441 100644
--- a/articles/active-directory/manage-apps/tutorial-govern-monitor.md
+++ b/articles/active-directory/manage-apps/tutorial-govern-monitor.md
@@ -1,6 +1,6 @@
---
title: "Tutorial: Govern and monitor applications"
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: In this tutorial, you learn how to govern and monitor an application in Azure Active Directory.
author: omondiatieno
manager: CelesteDG
diff --git a/articles/active-directory/manage-apps/tutorial-manage-access-security.md b/articles/active-directory/manage-apps/tutorial-manage-access-security.md
index c2c9d4af917d9..261f5d98ebe3d 100644
--- a/articles/active-directory/manage-apps/tutorial-manage-access-security.md
+++ b/articles/active-directory/manage-apps/tutorial-manage-access-security.md
@@ -1,6 +1,6 @@
---
title: "Tutorial: Manage application access and security"
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: In this tutorial, you learn how to manage access to an application in Azure Active Directory and make sure it's secure.
author: omondiatieno
manager: CelesteDG
diff --git a/articles/active-directory/manage-apps/tutorial-manage-certificates-for-federated-single-sign-on.md b/articles/active-directory/manage-apps/tutorial-manage-certificates-for-federated-single-sign-on.md
index 1669c4bf876b6..a6ea249e5cd82 100644
--- a/articles/active-directory/manage-apps/tutorial-manage-certificates-for-federated-single-sign-on.md
+++ b/articles/active-directory/manage-apps/tutorial-manage-certificates-for-federated-single-sign-on.md
@@ -1,7 +1,7 @@
---
title: "Tutorial: Manage federation certificates"
description: In this tutorial, you'll learn how to customize the expiration date for your federation certificates, and how to renew certificates that will soon expire.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: davidmu1
manager: CelesteDG
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: app-mgmt
ms.workload: identity
ms.topic: tutorial
-ms.date: 03/31/2022
+ms.date: 05/27/2022
ms.author: davidmu
ms.reviewer: jeedes
ms.collection: M365-identity-device-management
@@ -19,10 +19,25 @@ ms.collection: M365-identity-device-management
# Tutorial: Manage certificates for federated single sign-on
-In this article, we cover common questions and information related to certificates that Azure Active Directory (Azure AD) creates to establish federated single sign-on (SSO) to your software as a service (SaaS) applications. Add applications from the Azure AD app gallery or by using a non-gallery application template. Configure the application by using the federated SSO option.
+In this article, we cover common questions and information related to certificates that Azure Active Directory (Azure AD) creates to establish federated single sign-on (SSO) to your software as a service (SaaS) applications. Add applications from the Azure AD application gallery or by using a non-gallery application template. Configure the application by using the federated SSO option.
This tutorial is relevant only to apps that are configured to use Azure AD SSO through [Security Assertion Markup Language](https://wikipedia.org/wiki/Security_Assertion_Markup_Language) (SAML) federation.
+Using the information in this tutorial, an administrator of the application learns how to:
+
+> [!div class="checklist"]
+> * Generate certificates for gallery and non-gallery applications
+> * Customize the expiration dates for certificates
+> * Add email notification addresses for certificate expiration dates
+> * Renew certificates
+
+## Prerequisites
+
+- An Azure account with an active subscription. If you don't already have one, [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- One of the following roles: Global Administrator, Privileged Role Administrator, Cloud Application Administrator, or Application Administrator.
+- An enterprise application that has been configured in your Azure AD tenant.
+
## Auto-generated certificate for gallery and non-gallery applications
When you add a new application from the gallery and configure a SAML-based sign-on (by selecting **Single sign-on** > **SAML** from the application overview page), Azure AD generates a certificate for the application that is valid for three years. To download the active certificate as a security certificate (**.cer**) file, return to that page (**SAML-based sign-on**) and select a download link in the **SAML Signing Certificate** heading. You can choose between the raw (binary) certificate or the Base64 (base 64-encoded text) certificate. For gallery applications, this section might also show a link to download the certificate as federation metadata XML (an **.xml** file), depending on the requirement of the application.
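+
+To confirm the validity period of a certificate you've downloaded, you can inspect it locally. The following is a quick sketch using `openssl`; the file name `AzureADCert.cer` is a placeholder for whatever name your download uses.
+
+```bash
+# For the Base64 (text) download:
+openssl x509 -in AzureADCert.cer -noout -subject -enddate
+
+# For the raw (binary) download, specify the DER input format:
+openssl x509 -inform der -in AzureADCert.cer -noout -subject -enddate
+```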
@@ -75,7 +90,7 @@ Next, download the new certificate in the correct format, upload it to the appli
1. When you want to roll over to the new certificate, go back to the **SAML Signing Certificate** page, and in the newly saved certificate row, select the ellipsis (**...**) and select **Make certificate active**. The status of the new certificate changes to **Active**, and the previously active certificate changes to a status of **Inactive**.
1. Continue following the application's SAML sign-on configuration instructions that you displayed earlier, so that you can upload the SAML signing certificate in the correct encoding format.
-If your application doesn't have any validation for the certificate's expiration, and the certificate matches in both Azure Active Directory and your application, your app is still accessible despite having an expired certificate. Ensure your application can validate the certificate's expiration date.
+If your application doesn't have any validation for the certificate's expiration, and the certificate matches in both Azure Active Directory and your application, your application is still accessible despite having an expired certificate. Ensure your application can validate the certificate's expiration date.
## Add email notification addresses for certificate expiration
@@ -101,15 +116,14 @@ If a certificate is about to expire, you can renew it using a procedure that res
1. In the newly saved certificate row, select the ellipsis (**...**) and then select **Make certificate active**.
1. Skip the next two steps.
-1. If the app can only handle one certificate at a time, pick a downtime interval to perform the next step. (Otherwise, if the application doesn’t automatically pick up the new certificate but can handle more than one signing certificate, you can perform the next step anytime.)
-1. Before the old certificate expires, follow the instructions in the [Upload and activate a certificate](#upload-and-activate-a-certificate) section earlier. If your application certificate isn't updated after a new certificate is updated in Azure Active Directory, authentication on your app may fail.
+1. If the application can only handle one certificate at a time, pick a downtime interval to perform the next step. (Otherwise, if the application doesn’t automatically pick up the new certificate but can handle more than one signing certificate, you can perform the next step anytime.)
+1. Before the old certificate expires, follow the instructions in the [Upload and activate a certificate](#upload-and-activate-a-certificate) section earlier. If your application certificate isn't updated after a new certificate is updated in Azure Active Directory, authentication on your application may fail.
1. Sign in to the application to make sure that the certificate works correctly.
-If your application doesn't validate the certificate expiration configured in Azure Active Directory, and the certificate matches in both Azure Active Directory and your application, your app is still accessible despite having an expired certificate. Ensure your application can validate certificate expiration.
+If your application doesn't validate the certificate expiration configured in Azure Active Directory, and the certificate matches in both Azure Active Directory and your application, your application is still accessible despite having an expired certificate. Ensure your application can validate certificate expiration.
## Related articles
-- [Tutorials for integrating SaaS applications with Azure Active Directory](../saas-apps/tutorial-list.md)
- [Application management with Azure Active Directory](what-is-application-management.md)
- [Single sign-on to applications in Azure Active Directory](what-is-single-sign-on.md)
- [Debug SAML-based single sign-on to applications in Azure Active Directory](./debug-saml-sso-issues.md)
diff --git a/articles/active-directory/manage-apps/v2-howto-app-gallery-listing.md b/articles/active-directory/manage-apps/v2-howto-app-gallery-listing.md
index 629e3a5042d64..6fc1b88da9c03 100644
--- a/articles/active-directory/manage-apps/v2-howto-app-gallery-listing.md
+++ b/articles/active-directory/manage-apps/v2-howto-app-gallery-listing.md
@@ -1,7 +1,7 @@
---
title: Publish your application
description: Learn how to publish your application in the Azure Active Directory application gallery.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: eringreenlee
manager: CelesteDG
diff --git a/articles/active-directory/manage-apps/view-applications-portal.md b/articles/active-directory/manage-apps/view-applications-portal.md
index 8a2444317fe74..620db4e78733a 100644
--- a/articles/active-directory/manage-apps/view-applications-portal.md
+++ b/articles/active-directory/manage-apps/view-applications-portal.md
@@ -1,7 +1,7 @@
---
title: 'Quickstart: View enterprise applications'
description: View the enterprise applications that are registered to use your Azure Active Directory tenant.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: AllisonAm
manager: CelesteDG
diff --git a/articles/active-directory/manage-apps/ways-users-get-assigned-to-applications.md b/articles/active-directory/manage-apps/ways-users-get-assigned-to-applications.md
index e27608712b976..693d46b74bbcb 100644
--- a/articles/active-directory/manage-apps/ways-users-get-assigned-to-applications.md
+++ b/articles/active-directory/manage-apps/ways-users-get-assigned-to-applications.md
@@ -1,7 +1,7 @@
---
title: Understand how users are assigned to apps
description: Understand how users get assigned to an app that is using Azure Active Directory for identity management.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: eringreenlee
manager: CelesteDG
diff --git a/articles/active-directory/manage-apps/what-is-access-management.md b/articles/active-directory/manage-apps/what-is-access-management.md
index a9f62a71390a3..47428849626f8 100644
--- a/articles/active-directory/manage-apps/what-is-access-management.md
+++ b/articles/active-directory/manage-apps/what-is-access-management.md
@@ -1,6 +1,6 @@
---
title: Manage access to apps
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
description: Describes how Azure Active Directory enables organizations to specify the apps to which each user has access.
services: active-directory
author: eringreenlee
diff --git a/articles/active-directory/manage-apps/what-is-application-management.md b/articles/active-directory/manage-apps/what-is-application-management.md
index 1b81ea5ba6eac..2523930cd7e16 100644
--- a/articles/active-directory/manage-apps/what-is-application-management.md
+++ b/articles/active-directory/manage-apps/what-is-application-management.md
@@ -1,7 +1,7 @@
---
title: What is application management?
description: An overview of managing the lifecycle of an application in Azure Active Directory.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: omondiatieno
manager: CelesteDG
diff --git a/articles/active-directory/manage-apps/what-is-single-sign-on.md b/articles/active-directory/manage-apps/what-is-single-sign-on.md
index 3e8a31a3725bf..97ba75fb5da99 100644
--- a/articles/active-directory/manage-apps/what-is-single-sign-on.md
+++ b/articles/active-directory/manage-apps/what-is-single-sign-on.md
@@ -1,7 +1,7 @@
---
title: What is single sign-on?
description: Learn about single sign-on for enterprise applications in Azure Active Directory.
-titleSuffix: Azure AD
+titleSuffix: Microsoft Entra
services: active-directory
author: omondiatieno
manager: CelesteDG
diff --git a/articles/active-directory/saas-apps/github-provisioning-tutorial.md b/articles/active-directory/saas-apps/github-provisioning-tutorial.md
index f5860f54f4130..450f88c714113 100644
--- a/articles/active-directory/saas-apps/github-provisioning-tutorial.md
+++ b/articles/active-directory/saas-apps/github-provisioning-tutorial.md
@@ -48,7 +48,7 @@ For more information, see [Assign a user or group to an enterprise app](../manag
## Configuring user provisioning to GitHub
-This section guides you through connecting your Azure AD to GitHub's SCIM provisioning API to automate provisioning of GitHub organization membership. This integration, which leverages an [OAuth app](https://docs.github.com/en/free-pro-team@latest/github/authenticating-to-github/authorizing-oauth-apps#oauth-apps-and-organizations), automatically adds, manages, and removes members' access to a GitHub Enterprise Cloud organization based on user and group assignment in Azure AD. When users are [provisioned to a GitHub organization via SCIM](https://docs.github.com/en/free-pro-team@latest/rest/reference/scim#provision-and-invite-a-scim-user), an email invitation is sent to the user's email address.
+This section guides you through connecting your Azure AD to GitHub's SCIM provisioning API to automate provisioning of GitHub organization membership. This integration, which leverages an [OAuth app](https://docs.github.com/en/free-pro-team@latest/github/authenticating-to-github/authorizing-oauth-apps#oauth-apps-and-organizations), automatically adds, manages, and removes members' access to a GitHub Enterprise Cloud organization based on user and group assignment in Azure AD. When users are [provisioned to a GitHub organization via SCIM](https://docs.github.com/en/rest/enterprise-admin/scim), an email invitation is sent to the user's email address.
### Configure automatic user account provisioning to GitHub in Azure AD
diff --git a/articles/active-directory/verifiable-credentials/decentralized-identifier-overview.md b/articles/active-directory/verifiable-credentials/decentralized-identifier-overview.md
index f9b0cddbbf7e0..e420a1c03b24b 100644
--- a/articles/active-directory/verifiable-credentials/decentralized-identifier-overview.md
+++ b/articles/active-directory/verifiable-credentials/decentralized-identifier-overview.md
@@ -108,7 +108,7 @@ There are three primary actors in the verifiable credential solution. In the fol
- **Step 1**, the **user** requests a verifiable credential from an issuer.
- **Step 2**, the **issuer** of the credential attests that the proof the user provided is accurate and creates a verifiable credential signed with their DID and the user’s DID is the subject.
-- **In Step 3**, the user signs a verifiable presentation (VP) with their DID and sends to the **verifier.** The verifier then validates of the credential by matching with the public key placed in the DPKI.
+- **Step 3**, the user signs a verifiable presentation (VP) with their DID and sends it to the **verifier**. The verifier then validates the credential by matching it with the public key placed in the DPKI.
The roles in this scenario are:
diff --git a/articles/aks/azure-ad-integration-cli.md b/articles/aks/azure-ad-integration-cli.md
index ef2efa84c07ad..633d248fd9068 100644
--- a/articles/aks/azure-ad-integration-cli.md
+++ b/articles/aks/azure-ad-integration-cli.md
@@ -260,7 +260,6 @@ For best practices on identity and resource control, see [Best practices for aut
[kubernetes-webhook]:https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication
[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[complete-script]: https://github.com/Azure-Samples/azure-cli-samples/tree/master/aks/azure-ad-integration/azure-ad-integration.sh
[az-aks-create]: /cli/azure/aks#az_aks_create
diff --git a/articles/aks/control-kubeconfig-access.md b/articles/aks/control-kubeconfig-access.md
index 119958571cb7f..890301f35b4a5 100644
--- a/articles/aks/control-kubeconfig-access.md
+++ b/articles/aks/control-kubeconfig-access.md
@@ -156,7 +156,7 @@ For enhanced security on access to AKS clusters, [integrate Azure Active Directo
[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: /learn/quick-kubernetes-deploy-powershell.md
+[aks-quickstart-powershell]: /azure/aks/learn/quick-kubernetes-deploy-powershell
[azure-cli-install]: /cli/azure/install-azure-cli
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
[azure-rbac]: ../role-based-access-control/overview.md
diff --git a/articles/aks/csi-secrets-store-nginx-tls.md b/articles/aks/csi-secrets-store-nginx-tls.md
index 4c652d9fd4c37..3ad97c8fa3426 100644
--- a/articles/aks/csi-secrets-store-nginx-tls.md
+++ b/articles/aks/csi-secrets-store-nginx-tls.md
@@ -5,7 +5,7 @@ author: nickomang
ms.author: nickoman
ms.service: container-service
ms.topic: how-to
-ms.date: 10/19/2021
+ms.date: 05/26/2022
ms.custom: template-how-to
---
@@ -15,8 +15,8 @@ This article walks you through the process of securing an NGINX Ingress Controll
Importing the ingress TLS certificate to the cluster can be accomplished using one of two methods:
-- **Application** - The application deployment manifest declares and mounts the provider volume. Only when the application is deployed is the certificate made available in the cluster, and when the application is removed the secret is removed as well. This scenario fits development teams who are responsible for the application’s security infrastructure and their integration with the cluster.
-- **Ingress Controller** - The ingress deployment is modified to declare and mount the provider volume. The secret is imported when ingress pods are created. The application’s pods have no access to the TLS certificate. This scenario fits scenarios where one team (i.e. IT) manages and provisions infrastructure and networking components (including HTTPS TLS certificates) and other teams manage application lifecycle. In this case, ingress is specific to a single namespace/workload and is deployed in the same namespace as the application.
+- **Application** - The application deployment manifest declares and mounts the provider volume. The certificate is made available in the cluster only when the application is deployed, and the secret is removed when the application is removed. This scenario fits development teams who are responsible for the application’s security infrastructure and their integration with the cluster.
+- **Ingress Controller** - The ingress deployment is modified to declare and mount the provider volume. The secret is imported when ingress pods are created. The application’s pods have no access to the TLS certificate. This approach fits scenarios where one team (for example, IT) manages and creates infrastructure and networking components (including HTTPS TLS certificates) and other teams manage the application lifecycle. In this case, ingress is specific to a single namespace/workload and is deployed in the same namespace as the application.
## Prerequisites
@@ -28,18 +28,18 @@ Importing the ingress TLS certificate to the cluster can be accomplished using o
## Generate a TLS certificate
```bash
-export CERT_NAME=ingresscert
+export CERT_NAME=aks-ingress-cert
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
- -out ingress-tls.crt \
- -keyout ingress-tls.key \
- -subj "/CN=demo.test.com/O=ingress-tls"
+ -out aks-ingress-tls.crt \
+ -keyout aks-ingress-tls.key \
+ -subj "/CN=demo.azure.com/O=aks-ingress-tls"
```
### Import the certificate to AKV
```bash
export AKV_NAME="[YOUR AKV NAME]"
-openssl pkcs12 -export -in ingress-tls.crt -inkey ingress-tls.key -out $CERT_NAME.pfx
+openssl pkcs12 -export -in aks-ingress-tls.crt -inkey aks-ingress-tls.key -out $CERT_NAME.pfx
# skip Password prompt
```
@@ -52,11 +52,11 @@ az keyvault certificate import --vault-name $AKV_NAME -n $CERT_NAME -f $CERT_NAM
First, create a new namespace:
```bash
-export NAMESPACE=ingress-test
+export NAMESPACE=ingress-basic
```
```azurecli-interactive
-kubectl create ns $NAMESPACE
+kubectl create namespace $NAMESPACE
```
Select a [method to provide an access identity][csi-ss-identity-access] and configure your SecretProviderClass YAML accordingly. Additionally:
@@ -64,7 +64,7 @@ Select a [method to provide an access identity][csi-ss-identity-access] and conf
- Be sure to use `objectType=secret`, which is the only way to obtain the private key and the certificate from AKV.
- Set `kubernetes.io/tls` as the `type` in your `secretObjects` section.
-See the following for an example of what your SecretProviderClass might look like:
+See the following example of what your SecretProviderClass might look like:
```yml
apiVersion: secrets-store.csi.x-k8s.io/v1
@@ -83,6 +83,8 @@ spec:
key: tls.crt
parameters:
usePodIdentity: "false"
+ useVMManagedIdentity: "true"
+ userAssignedIdentityID: <client-id> # the client ID of the user-assigned managed identity
keyvaultName: $AKV_NAME # the name of the AKV instance
objects: |
array:
@@ -119,9 +121,9 @@ The application’s deployment will reference the Secrets Store CSI Driver's Azu
helm install ingress-nginx/ingress-nginx --generate-name \
--namespace $NAMESPACE \
--set controller.replicaCount=2 \
- --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.nodeSelector."kubernetes\.io/os"=linux \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
- --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux
+ --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux
```
#### Bind certificate to ingress controller
@@ -135,8 +137,8 @@ The ingress controller’s deployment will reference the Secrets Store CSI Drive
helm install ingress-nginx/ingress-nginx --generate-name \
--namespace $NAMESPACE \
--set controller.replicaCount=2 \
- --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
- --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
+ --set controller.nodeSelector."kubernetes\.io/os"=linux \
+ --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
--set controller.podLabels.aadpodidbinding=$AAD_POD_IDENTITY_NAME \
-f - <<EOF
[...]
EOF
```
@@ -338,14 +438,46 @@ nginx-ingress-1588032400-default-backend ClusterIP 10.0.223.214
Use `curl` to verify your ingress has been properly configured with TLS. Be sure to use the external IP you've obtained from the previous step:
```bash
-curl -v -k --resolve demo.test.com:443:52.xx.xx.xx https://demo.test.com
+curl -v -k --resolve demo.azure.com:443:EXTERNAL_IP https://demo.azure.com
+```
-# You should see output similar to the following
-* subject: CN=demo.test.com; O=ingress-tls
-* start date: Oct 15 04:23:46 2021 GMT
-* expire date: Oct 15 04:23:46 2022 GMT
-* issuer: CN=demo.test.com; O=ingress-tls
+No additional path was provided with the address, so the ingress controller defaults to the */* route. The first demo application is returned, as shown in the following condensed example output:
+
+```console
+[...]
+<!DOCTYPE html>
+<html>
+<head>
+    <title>Welcome to Azure Kubernetes Service (AKS)</title>
+[...]
+```
+
+The *-v* parameter in our `curl` command outputs verbose information, including the TLS certificate received. Halfway through your curl output, you can verify that your own TLS certificate was used. The *-k* parameter continues loading the page even though we're using a self-signed certificate. The following example shows that the *issuer: CN=demo.azure.com; O=aks-ingress-tls* certificate was used:
+
+```
+[...]
+* Server certificate:
+* subject: CN=demo.azure.com; O=aks-ingress-tls
+* start date: Oct 22 22:13:54 2021 GMT
+* expire date: Oct 22 22:13:54 2022 GMT
+* issuer: CN=demo.azure.com; O=aks-ingress-tls
* SSL certificate verify result: self signed certificate (18), continuing anyway.
+[...]
+```
+
+Now add */hello-world-two* path to the address, such as `https://demo.azure.com/hello-world-two`. The second demo application with the custom title is returned, as shown in the following condensed example output:
+
+```
+curl -v -k --resolve demo.azure.com:443:EXTERNAL_IP https://demo.azure.com/hello-world-two
+
+[...]
+<!DOCTYPE html>
+<html>
+<head>
+    <title>AKS Ingress Demo</title>
+[...]
```
diff --git a/articles/aks/open-service-mesh-troubleshoot.md b/articles/aks/open-service-mesh-troubleshoot.md
index 219ff4d5e89ca..782db5a337786 100644
--- a/articles/aks/open-service-mesh-troubleshoot.md
+++ b/articles/aks/open-service-mesh-troubleshoot.md
@@ -103,7 +103,7 @@ aks-osm-webhook-osm 1 102m
### Check for the service and the CA bundle of the Validating webhook
```azurecli-interactive
-kubectl get ValidatingWebhookConfiguration aks-osm-webhook-osm -o json | jq '.webhooks[0].clientConfig.service'
+kubectl get ValidatingWebhookConfiguration aks-osm-validator-mesh-osm -o json | jq '.webhooks[0].clientConfig.service'
```
A well-configured Validating Webhook Configuration looks exactly like this:
diff --git a/articles/aks/policy-reference.md b/articles/aks/policy-reference.md
index 4b73dc93c0862..d73984a6d03c9 100644
--- a/articles/aks/policy-reference.md
+++ b/articles/aks/policy-reference.md
@@ -25,10 +25,6 @@ the link in the **Version** column to view the source on the
[!INCLUDE [azure-policy-reference-rp-aks-containerservice](../../includes/policy/reference/byrp/microsoft.containerservice.md)]
-### AKS Engine
-
-[!INCLUDE [azure-policy-reference-rp-aks-aksengine](../../includes/policy/reference/byrp/aks-engine.md)]
-
## Next steps
- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
diff --git a/articles/aks/release-tracker.md b/articles/aks/release-tracker.md
index 9a41a3ab03919..5cf412edaa53c 100644
--- a/articles/aks/release-tracker.md
+++ b/articles/aks/release-tracker.md
@@ -4,12 +4,17 @@ description: Learn how to determine which Azure regions have the weekly AKS rele
services: container-service
ms.topic: overview
ms.date: 05/24/2022
+ms.author: nickoman
+author: nickomang
ms.custom: mvc
---
# AKS release tracker
+> [!NOTE]
+> The AKS release tracker is currently not accessible. When the feature is fully released, this article will be updated to include access instructions.
+
AKS releases weekly rounds of fixes and feature and component updates that affect all clusters and customers. However, these releases can take up to two weeks to roll out to all regions from the initial time of shipping due to Azure Safe Deployment Practices (SDP). It is important for customers to know when a particular AKS release is hitting their region, and the AKS release tracker provides these details in real time by versions and regions.
## Why release tracker?
diff --git a/articles/aks/start-stop-cluster.md b/articles/aks/start-stop-cluster.md
index d759901888e92..fca38f486433f 100644
--- a/articles/aks/start-stop-cluster.md
+++ b/articles/aks/start-stop-cluster.md
@@ -150,7 +150,7 @@ If the `ProvisioningState` shows `Starting` that means your cluster hasn't fully
[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: /learn/quick-kubernetes-deploy-powershell.md
+[aks-quickstart-powershell]: /azure/aks/learn/quick-kubernetes-deploy-powershell
[install-azure-cli]: /cli/azure/install-azure-cli
[az-extension-add]: /cli/azure/extension#az_extension_add
[az-extension-update]: /cli/azure/extension#az_extension_update
diff --git a/articles/aks/use-network-policies.md b/articles/aks/use-network-policies.md
index 46236fe55142f..7a78a97532f7c 100644
--- a/articles/aks/use-network-policies.md
+++ b/articles/aks/use-network-policies.md
@@ -18,15 +18,6 @@ This article shows you how to install the network policy engine and create Kuber
You need the Azure CLI version 2.0.61 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-> [!TIP]
-> If you used the network policy feature during preview, we recommend that you [create a new cluster](#create-an-aks-cluster-and-enable-network-policy).
->
-> If you wish to continue using existing test clusters that used network policy during preview, upgrade your cluster to a new Kubernetes versions for the latest GA release and then deploy the following YAML manifest to fix the crashing metrics server and Kubernetes dashboard. This fix is only required for clusters that used the Calico network policy engine.
->
-> As a security best practice, [review the contents of this YAML manifest][calico-aks-cleanup] to understand what is deployed into the AKS cluster.
->
-> `kubectl delete -f https://raw.githubusercontent.com/Azure/aks-engine/master/docs/topics/calico-3.3.1-cleanup-after-upgrade.yaml`
-
## Overview of network policy
All pods in an AKS cluster can send and receive traffic without limitations, by default. To improve security, you can define rules that control the flow of traffic. Back-end applications are often only exposed to required front-end services, for example. Or, database components are only accessible to the application tiers that connect to them.
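+
+For example, the following is a minimal sketch of such a rule, assuming back-end pods labeled `app=backend` and front-end pods labeled `app=frontend` (hypothetical labels for your own workloads):
+
+```bash
+# Allow ingress to back-end pods only from front-end pods in the same namespace.
+kubectl apply -f - <<EOF
+kind: NetworkPolicy
+apiVersion: networking.k8s.io/v1
+metadata:
+  name: backend-policy
+spec:
+  podSelector:
+    matchLabels:
+      app: backend
+  ingress:
+    - from:
+        - podSelector:
+            matchLabels:
+              app: frontend
+EOF
+```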
diff --git a/articles/app-service/quickstart-python.md b/articles/app-service/quickstart-python.md
index a77ef4b0bc174..5a29ae2d26ead 100644
--- a/articles/app-service/quickstart-python.md
+++ b/articles/app-service/quickstart-python.md
@@ -3,8 +3,8 @@ title: 'Quickstart: Deploy a Python (Django or Flask) web app to Azure'
description: Get started with Azure App Service by deploying your first Python app to Azure App Service.
ms.topic: quickstart
ms.date: 03/22/2022
-author: DavidCBerry13
-ms.author: daberry
+author: mijacobs
+ms.author: mijacobs
ms.devlang: python
ms.custom: devx-azure-cli, devx-azure-portal, devx-vscode-azure-extension, devdivchpfy22
---
diff --git a/articles/app-service/tutorial-connect-msi-sql-database.md b/articles/app-service/tutorial-connect-msi-sql-database.md
index 755462b8a8fed..052ec53770fcd 100644
--- a/articles/app-service/tutorial-connect-msi-sql-database.md
+++ b/articles/app-service/tutorial-connect-msi-sql-database.md
@@ -144,10 +144,9 @@ The steps you follow for your project depends on whether you're using [Entity Fr
1. In Visual Studio, open the Package Manager Console and add the NuGet package [Azure.Identity](https://www.nuget.org/packages/Azure.Identity) and update Entity Framework:
```powershell
- Install-Package Azure.Identity -Version 1.5.0
+ Install-Package Azure.Identity
Update-Package EntityFramework
```
-
1. In your DbContext object (in *Models/MyDbContext.cs*), add the following code to the default constructor.
```csharp
diff --git a/articles/azure-app-configuration/concept-soft-delete.md b/articles/azure-app-configuration/concept-soft-delete.md
index 28d6ebd4a8b45..8d5879caf90a9 100644
--- a/articles/azure-app-configuration/concept-soft-delete.md
+++ b/articles/azure-app-configuration/concept-soft-delete.md
@@ -38,13 +38,18 @@ Purge is the operation to permanently delete the stores in a soft deleted state,
## Purge protection
With Purge protection enabled, soft deleted stores can't be purged in the retention period. If disabled, the soft deleted store can be purged before the retention period expires. Once purge protection is enabled on a store, it can't be disabled.
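+
+For example, a sketch of enabling purge protection at store creation with the Azure CLI, assuming the `--retention-days` and `--enable-purge-protection` parameters available in recent CLI versions:
+
+```azurecli-interactive
+# Create a store with a 7-day retention period and purge protection enabled.
+az appconfig create --name <store-name> --resource-group <resource-group> \
+    --location <location> --sku standard \
+    --retention-days 7 --enable-purge-protection true
+```
+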
-## Permissions to recover or purge store
+## Permissions to recover a deleted store
-A user has to have below permissions to recover or purge a soft-deleted app configuration store. The built-in Contributor and Owner roles already have the required permissions to recover and purge.
+- `Microsoft.AppConfiguration/configurationStores/write`
-- Permission to recover - `Microsoft.AppConfiguration/configurationStores/write`
+To recover a deleted App Configuration store, the `Microsoft.AppConfiguration/configurationStores/write` permission is needed. The built-in "Owner" and "Contributor" roles contain this permission by default. The permission can be assigned at the subscription or resource group scope.
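+
+For example, a recovery sketch with the Azure CLI, assuming the `az appconfig recover` command available in recent CLI versions:
+
+```azurecli-interactive
+# Recover a soft-deleted store by name.
+az appconfig recover --name <store-name>
+```
+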
-- Permission to purge - `Microsoft.AppConfiguration/configurationStores/action`
+## Permissions to read and purge deleted stores
+
+* Read: `Microsoft.AppConfiguration/locations/deletedConfigurationStores/read`
+* Purge: `Microsoft.AppConfiguration/locations/deletedConfigurationStores/purge/action`
+
+To list deleted App Configuration stores or get an individual store by name, the `Microsoft.AppConfiguration/locations/deletedConfigurationStores/read` permission is needed. To purge a deleted App Configuration store, the `Microsoft.AppConfiguration/locations/deletedConfigurationStores/purge/action` permission is needed. The built-in "Owner" and "Contributor" roles contain these permissions by default. Permissions for reading and purging deleted App Configuration stores must be assigned at the subscription level because deleted configuration stores exist outside of individual resource groups.
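+
+The corresponding Azure CLI operations are sketched below, assuming the `az appconfig list-deleted`, `show-deleted`, and `purge` commands available in recent CLI versions:
+
+```azurecli-interactive
+# List all soft-deleted stores in the subscription.
+az appconfig list-deleted
+
+# Get a single soft-deleted store by name.
+az appconfig show-deleted --name <store-name>
+
+# Permanently delete the soft-deleted store (irreversible).
+az appconfig purge --name <store-name>
+```
+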
## Billing implications
diff --git a/articles/azure-app-configuration/howto-recover-deleted-stores-in-azure-app-configuration.md b/articles/azure-app-configuration/howto-recover-deleted-stores-in-azure-app-configuration.md
index 7847d8fb43e9a..8374871985111 100644
--- a/articles/azure-app-configuration/howto-recover-deleted-stores-in-azure-app-configuration.md
+++ b/articles/azure-app-configuration/howto-recover-deleted-stores-in-azure-app-configuration.md
@@ -19,7 +19,7 @@ To learn more about the concept of soft delete feature, see [Soft-Delete in Azur
* An Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet)
-* Refer to the [Soft-Delete in Azure App Configuration](./concept-soft-delete.md#permissions-to-recover-or-purge-store) for permissions requirements.
+* Refer to the [Soft-Delete in Azure App Configuration](./concept-soft-delete.md#permissions-to-recover-a-deleted-store) section for permissions requirements.
## Set retention policy and enable purge protection at store creation
diff --git a/articles/azure-arc/data/upload-logs.md b/articles/azure-arc/data/upload-logs.md
index ee2a0fa633437..38f21f3ad6c53 100644
--- a/articles/azure-arc/data/upload-logs.md
+++ b/articles/azure-arc/data/upload-logs.md
@@ -4,10 +4,10 @@ description: Upload logs for Azure Arc-enabled data services to Azure Monitor
services: azure-arc
ms.service: azure-arc
ms.subservice: azure-arc-data
-author: twright-msft
-ms.author: twright
+author: dnethi
+ms.author: dinethi
ms.reviewer: mikeray
-ms.date: 11/03/2021
+ms.date: 05/27/2022
ms.topic: how-to
---
@@ -149,9 +149,9 @@ echo $WORKSPACE_SHARED_KEY
With the environment variables set, you can upload logs to the log workspace.
-## Upload logs to Azure Log Analytics Workspace in direct mode
+## Configure automatic upload of logs to Azure Log Analytics Workspace in direct mode using `az` CLI
-In the **direct** connected mode, Logs upload can only be setup in **automatic** mode. This automatic upload of metrics can be setup either during deployment or post deployment of Azure Arc data controller.
+In the **direct** connected mode, logs upload can only be set up in **automatic** mode. This automatic upload of logs can be set up either during or after deployment of the Azure Arc data controller.
### Enable automatic upload of logs to Azure Log Analytics Workspace
@@ -163,7 +163,7 @@ az arcdata dc update --name <data-controller-name> --resource-group <resource-group> --auto-upload-logs true
```
-### Disable automatic upload of logs to Azure Log Analytics Workspace
+### Disable automatic upload of logs to Azure Log Analytics Workspace
If the automatic upload of logs was enabled during Azure Arc data controller deployment, run the below command to disable automatic upload of logs.
```
az arcdata dc update --name <data-controller-name> --resource-group <resource-group> --auto-upload-logs false
```
-## Upload logs to Azure Monitor in indirect mode
+## Configure automatic upload of logs to Azure Log Analytics Workspace in direct mode using `kubectl` CLI
+
+### Enable automatic upload of logs to Azure Log Analytics Workspace
+
+To configure automatic upload of logs using `kubectl`:
+
+- Ensure the Log Analytics Workspace is created as described in the earlier section.
+- Create a Kubernetes secret for the Log Analytics workspace using the `WorkspaceId` and `SharedAccessKey` as follows:
+
+```yml
+apiVersion: v1
+data:
+  primaryKey: <base64 encoded value of the Log Analytics workspace shared access key>
+  workspaceId: <base64 encoded value of the Log Analytics workspace ID>
+kind: Secret
+metadata:
+  name: log-workspace-secret
+  namespace: <namespace>
+type: Opaque
+```
+
+- To create the secret, run:
+
+ ```console
+ kubectl apply -f <secret-file>.yaml --namespace <namespace>
+ ```
+
+- To open the settings as a YAML file in the default editor, run:
+
+ ```console
+ kubectl edit datacontroller <data-controller-name> --namespace <namespace>
+ ```
+
+- Update the `autoUploadLogs` property to `"true"`, and save the file. A `kubectl patch` alternative is sketched below.
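+
+A sketch of that `kubectl patch` alternative, assuming the data controller exposes the property at `spec.settings.azure.autoUploadLogs` (verify the path in your own resource before patching):
+
+```console
+kubectl patch datacontroller <data-controller-name> --namespace <namespace> --type merge --patch '{"spec": {"settings": {"azure": {"autoUploadLogs": "true"}}}}'
+```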
+
+
+
+### Disable automatic upload of logs to Azure Log Analytics Workspace
+
+To disable automatic upload of logs, run:
+
+```console
+kubectl edit datacontroller <data-controller-name> --namespace <namespace>
+```
+
+- Update the `autoUploadLogs` property to `"false"`, and save the file.
+
+## Upload logs to Azure Monitor in indirect mode
To upload logs for your Azure Arc-enabled SQL managed instances and Azure Arc-enabled PostgreSQL Hyperscale server groups, run the following CLI commands:
@@ -210,7 +257,7 @@ Once your logs are uploaded, you should be able to query them using the log quer
If you want to upload metrics and logs on a scheduled basis, you can create a script and run it on a timer every few minutes. Below is an example of automating the uploads using a Linux shell script.
-In your favorite text/code editor, add the following script to the file and save as a script executable file such as .sh (Linux/Mac) or .cmd, .bat, .ps1.
+In your favorite text/code editor, add the following script to a file, and save it as an executable script file, such as .sh (Linux/Mac) or .cmd, .bat, or .ps1 (Windows).
```azurecli
az arcdata dc export --type logs --path logs.json --force --k8s-namespace arc
diff --git a/articles/azure-arc/servers/breadcrumb/toc.yml b/articles/azure-arc/servers/breadcrumb/toc.yml
index be9eac111ef3d..ea45194ad7197 100644
--- a/articles/azure-arc/servers/breadcrumb/toc.yml
+++ b/articles/azure-arc/servers/breadcrumb/toc.yml
@@ -9,3 +9,6 @@
- name: Arc-enabled servers
tocHref: /azure/defender-for-cloud
topicHref: /azure/azure-arc/servers
+ - name: Azure Arc
+ tocHref: /windows-server/manage/windows-admin-center/azure
+ topicHref: /azure/azure-arc/servers
diff --git a/articles/azure-arc/servers/toc.yml b/articles/azure-arc/servers/toc.yml
index 5aab1e49c98ac..aa04e799d45d0 100644
--- a/articles/azure-arc/servers/toc.yml
+++ b/articles/azure-arc/servers/toc.yml
@@ -108,7 +108,7 @@
- name: Onboard to Microsoft Defender for Cloud
href: ../../defender-for-cloud/quickstart-onboard-machines.md?toc=/azure/azure-arc/servers/toc.json&bc=/azure/azure-arc/servers/breadcrumb/toc.json
- name: Manage with Windows Admin Center
- href: /windows-server/manage/windows-admin-center/azure/manage-arc-hybrid-machines
+ href: /windows-server/manage/windows-admin-center/azure/manage-arc-hybrid-machines?toc=/azure/azure-arc/servers/toc.json&bc=/azure/azure-arc/servers/breadcrumb/toc.json
- name: Connect via SSH
items:
- name: SSH access to Azure Arc-enabled servers
diff --git a/articles/azure-functions/functions-bindings-storage-blob.md b/articles/azure-functions/functions-bindings-storage-blob.md
index 47bc759545748..418a605d8a655 100644
--- a/articles/azure-functions/functions-bindings-storage-blob.md
+++ b/articles/azure-functions/functions-bindings-storage-blob.md
@@ -146,7 +146,7 @@ Functions 1.x apps automatically have a reference to the extension.
## host.json settings
-This section describes the function app configuration settings available for functions that this binding. These settings only apply when using extension version 5.0.0 and higher. The example host.json file below contains only the version 2.x+ settings for this binding. For more information about function app configuration settings in versions 2.x and later versions, see [host.json reference for Azure Functions](functions-host-json.md).
+This section describes the function app configuration settings available for functions that use this binding. These settings only apply when using extension version 5.0.0 and higher. The example host.json file below contains only the version 2.x+ settings for this binding. For more information about function app configuration settings in versions 2.x and later versions, see [host.json reference for Azure Functions](functions-host-json.md).
> [!NOTE]
> This section doesn't apply to extension versions before 5.0.0. For those earlier versions, there aren't any function app-wide configuration settings for blobs.
diff --git a/articles/azure-monitor/agents/agents-overview.md b/articles/azure-monitor/agents/agents-overview.md
index 13c0c124ffb2f..51c461bed0c1c 100644
--- a/articles/azure-monitor/agents/agents-overview.md
+++ b/articles/azure-monitor/agents/agents-overview.md
@@ -169,6 +169,10 @@ The following tables list the operating systems that are supported by the Azure
2 Using the Azure Monitor agent [client installer (preview)](./azure-monitor-agent-windows-client.md)
### Linux
+> [!NOTE]
+> For the Dependency agent, also check the supported kernel versions. For details, see the "Dependency agent Linux kernel support" table below.
+
+
| Operating system | Azure Monitor agent 1 | Log Analytics agent 1 | Dependency agent | Diagnostics extension 2|
|:---|:---:|:---:|:---:|:---:
| AlmaLinux | X | | | |
@@ -200,7 +204,7 @@ The following tables list the operating systems that are supported by the Azure
| SUSE Linux Enterprise Server 12 SP5 | X | X | X | X |
| SUSE Linux Enterprise Server 12 | X | X | X | X |
| Ubuntu 22.04 LTS | X | | | |
-| Ubuntu 20.04 LTS | X | X | X | X |
+| Ubuntu 20.04 LTS | X | X | X | X <sup>4</sup> |
| Ubuntu 18.04 LTS | X | X | X | X |
| Ubuntu 16.04 LTS | X | X | X | X |
| Ubuntu 14.04 LTS | | X | | X |
@@ -209,6 +213,8 @@ The following tables list the operating systems that are supported by the Azure
3 Known issue collecting Syslog events in versions prior to 1.9.0.
+<sup>4</sup> Not all kernel versions are supported. Check the supported kernel versions below.
+
#### Dependency agent Linux kernel support
Since the Dependency agent works at the kernel level, support is also dependent on the kernel version. As of Dependency agent version 9.10.* the agent supports * kernels. The following table lists the major and minor Linux OS release and supported kernel versions for the Dependency agent.
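+
+To check which kernel version a machine is running before consulting the table below, you can use a quick shell check:
+
+```bash
+# Print the running kernel release, for example 5.4.0-1077-azure.
+uname -r
+```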
diff --git a/articles/azure-monitor/alerts/alerts-prepare-migration.md b/articles/azure-monitor/alerts/alerts-prepare-migration.md
index 7ff93cad64001..88e2ab2fe006b 100644
--- a/articles/azure-monitor/alerts/alerts-prepare-migration.md
+++ b/articles/azure-monitor/alerts/alerts-prepare-migration.md
@@ -23,7 +23,7 @@ The following table is a reference to the programmatic interfaces for both class
| Deployment script type | Classic alerts | New metric alerts |
| ---------------------- | -------------- | ----------------- |
|REST API | [microsoft.insights/alertrules](/rest/api/monitor/alertrules) | [microsoft.insights/metricalerts](/rest/api/monitor/metricalerts) |
-|Azure CLI | [az monitor alert](/cli/azure/monitor/alert) | [az monitor metrics alert](/cli/azure/monitor/metrics/alert) |
+|Azure CLI | [az monitor alert](/cli/azure/monitor/alert) | [az monitor metrics alert](/cli/azure/monitor/metrics/alert) |
|PowerShell | [Reference](/powershell/module/az.monitor/add-azmetricalertrule) | [Reference](/powershell/module/az.monitor/add-azmetricalertrulev2) |
| Azure Resource Manager template | [For classic alerts](./alerts-enable-template.md)|[For new metric alerts](./alerts-metric-create-templates.md)|
@@ -157,4 +157,4 @@ If you're using a partner integration that's not listed here, confirm with the p
## Next steps
- [How to use the migration tool](alerts-using-migration-tool.md)
-- [Understand how the migration tool works](alerts-understand-migration.md)
\ No newline at end of file
+- [Understand how the migration tool works](alerts-understand-migration.md)
diff --git a/articles/azure-monitor/app/live-stream.md b/articles/azure-monitor/app/live-stream.md
index 586c9aaf32da7..e423397a64b6f 100644
--- a/articles/azure-monitor/app/live-stream.md
+++ b/articles/azure-monitor/app/live-stream.md
@@ -2,7 +2,7 @@
title: Diagnose with Live Metrics Stream - Azure Application Insights
description: Monitor your web app in real time with custom metrics, and diagnose issues with a live feed of failures, traces, and events.
ms.topic: conceptual
-ms.date: 10/12/2021
+ms.date: 05/31/2022
ms.reviewer: sdash
ms.devlang: csharp
---
@@ -16,7 +16,7 @@ Monitor your live, in-production web application by using Live Metrics Stream (a
With Live Metrics Stream, you can:
-* Validate a fix while it is released, by watching performance and failure counts.
+* Validate a fix while it's released, by watching performance and failure counts.
* Watch the effect of test loads, and diagnose issues live.
* Focus on particular test sessions or filter out known issues, by selecting and filtering the metrics you want to watch.
* Get exception traces as they happen.
@@ -52,7 +52,7 @@ Live Metrics are currently supported for ASP.NET, ASP.NET Core, Azure Functions,
### Enable LiveMetrics using code for any .NET application
-Even though LiveMetrics is enabled by default when onboarding using recommended instructions for .NET Applications, the following shows how to setup Live Metrics
+Even though LiveMetrics is enabled by default when onboarding using recommended instructions for .NET Applications, the following shows how to set up Live Metrics
manually.
1. Install the NuGet package [Microsoft.ApplicationInsights.PerfCounterCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.PerfCounterCollector)
@@ -114,7 +114,7 @@ namespace LiveMetricsDemo
}
```
-While the above sample is for a console app, the same code can be used in any .NET applications. If any other TelemetryModules are enabled which auto-collects telemetry, it is important to ensure the same configuration used for initializing those modules is used for Live Metrics module as well.
+While the above sample is for a console app, the same code can be used in any .NET application. If any other TelemetryModules that auto-collect telemetry are enabled, it's important to ensure the same configuration used for initializing those modules is used for the Live Metrics module as well.
## How does Live Metrics Stream differ from Metrics Explorer and Analytics?
@@ -123,7 +123,7 @@ While the above sample is for a console app, the same code can be used in any .N
|**Latency**|Data displayed within one second|Aggregated over minutes|
|**No retention**|Data persists while it's on the chart, and is then discarded|[Data retained for 90 days](./data-retention-privacy.md#how-long-is-the-data-kept)|
|**On demand**|Data is only streamed while the Live Metrics pane is open |Data is sent whenever the SDK is installed and enabled|
-|**Free**|There is no charge for Live Stream data|Subject to [pricing](../logs/cost-logs.md#application-insights-billing)
+|**Free**|There's no charge for Live Stream data|Subject to [pricing](../logs/cost-logs.md#application-insights-billing)
|**Sampling**|All selected metrics and counters are transmitted. Failures and stack traces are sampled. |Events may be [sampled](./api-filtering-sampling.md)|
|**Control channel**|Filter control signals are sent to the SDK. We recommend you secure this channel.|Communication is one way, to the portal|
@@ -131,7 +131,7 @@ While the above sample is for a console app, the same code can be used in any .N
(Available with ASP.NET, ASP.NET Core, and Azure Functions (v2).)
-You can monitor custom KPI live by applying arbitrary filters on any Application Insights telemetry from the portal. Click the filter control that shows when you mouse-over any of the charts. The following chart is plotting a custom Request count KPI with filters on URL and Duration attributes. Validate your filters with the Stream Preview section that shows a live feed of telemetry that matches the criteria you have specified at any point in time.
+You can monitor custom KPI live by applying arbitrary filters on any Application Insights telemetry from the portal. Select the filter control that shows when you mouse-over any of the charts. The following chart is plotting a custom Request count KPI with filters on URL and Duration attributes. Validate your filters with the Stream Preview section that shows a live feed of telemetry that matches the criteria you've specified at any point in time.
![Filter request rate](./media/live-stream/filter-request.png)
@@ -144,11 +144,11 @@ In addition to Application Insights telemetry, you can also monitor any Windows
Live metrics are aggregated at two points: locally on each server, and then across all servers. You can change the default at either by selecting other options in the respective drop-downs.
## Sample Telemetry: Custom Live Diagnostic Events
-By default, the live feed of events shows samples of failed requests and dependency calls, exceptions, events, and traces. Click the filter icon to see the applied criteria at any point in time.
+By default, the live feed of events shows samples of failed requests and dependency calls, exceptions, events, and traces. Select the filter icon to see the applied criteria at any point in time.
![Filter button](./media/live-stream/filter.png)
-As with metrics, you can specify any arbitrary criteria to any of the Application Insights telemetry types. In this example, we are selecting specific request failures, and events.
+As with metrics, you can specify any arbitrary criteria to any of the Application Insights telemetry types. In this example, we're selecting specific request failures, and events.
![Query Builder](./media/live-stream/query-builder.png)
@@ -237,9 +237,9 @@ For Azure Function Apps (v2), securing the channel with an API key can be accomp
Create an API key from within your Application Insights resource and go to **Settings > Configuration** for your Function App. Select **New application setting** and enter a name of `APPINSIGHTS_QUICKPULSEAUTHAPIKEY` and a value that corresponds to your API key.
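+
+A scripted alternative is sketched below using the Azure CLI (the function app name, resource group, and API key are placeholders):
+
+```azurecli
+az functionapp config appsettings set --name <function-app-name> \
+    --resource-group <resource-group> \
+    --settings "APPINSIGHTS_QUICKPULSEAUTHAPIKEY=<your-api-key>"
+```
+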
-However, if you recognize and trust all the connected servers, you can try the custom filters without the authenticated channel. This option is available for six months. This override is required once every new session, or when a new server comes online.
+Securing the control channel isn't necessary if you recognize and trust all the connected servers. This option is made available so that you can try custom filters without having to set up an authenticated channel. If you choose this option, you'll have to authorize the connected servers once every new session or when a new server comes online. We strongly discourage the use of unsecured channels and will disable this option six months after you start using it. To use custom filters without a secure channel, select any of the filter icons and authorize the connected servers. The "Authorize connected servers" dialog displays the date (highlighted in the screenshot below) after which this option will be disabled.
-![Live Metrics Auth options](./media/live-stream/live-stream-auth.png)
+:::image type="content" source="media/live-stream/live-stream-auth.png" alt-text="Screenshot displaying the authorize connected servers dialog." lightbox="media/live-stream/live-stream-auth.png":::
> [!NOTE]
> We strongly recommend that you set up the authenticated channel before entering potentially sensitive information like CustomerID in the filter criteria.
diff --git a/articles/azure-monitor/app/media/live-stream/live-stream-auth.png b/articles/azure-monitor/app/media/live-stream/live-stream-auth.png
index afe6bedcb7179..2b54a2e52f743 100644
Binary files a/articles/azure-monitor/app/media/live-stream/live-stream-auth.png and b/articles/azure-monitor/app/media/live-stream/live-stream-auth.png differ
diff --git a/articles/azure-monitor/profiler/profiler-containers.md b/articles/azure-monitor/profiler/profiler-containers.md
index 86a1d4fee4815..3a7a518b12c1c 100644
--- a/articles/azure-monitor/profiler/profiler-containers.md
+++ b/articles/azure-monitor/profiler/profiler-containers.md
@@ -127,7 +127,7 @@ In this article, you'll learn the various ways you can:
To hit the endpoint, either:
-- Visit [http://localhost:8080/weatherforecast](http://localhost:8080/weatherforecast) in your browser, or
+- Visit `http://localhost:8080/weatherforecast` in your browser, or
- Use curl:
```bash
@@ -173,4 +173,4 @@ docker rm -f testapp
## Next Steps
- Learn more about [Application Insights Profiler](./profiler-overview.md).
-- Learn how to enable Profiler in your [ASP.NET Core applications run on Linux](./profiler-aspnetcore-linux.md).
\ No newline at end of file
+- Learn how to enable Profiler in your [ASP.NET Core applications run on Linux](./profiler-aspnetcore-linux.md).
diff --git a/articles/azure-monitor/visualize/workbooks-chart-visualizations.md b/articles/azure-monitor/visualize/workbooks-chart-visualizations.md
index 75f7e5df44abe..3132b90d8d36e 100644
--- a/articles/azure-monitor/visualize/workbooks-chart-visualizations.md
+++ b/articles/azure-monitor/visualize/workbooks-chart-visualizations.md
@@ -102,7 +102,7 @@ requests
| summarize Request = count() by bin(timestamp, 1h), RequestName = name
```
-Even though the underlying result set is different. All a user has to do is set the visualization to area, line, bar, or time and Workbooks will take care of the rest.
+Even though the queries return results in different formats, when a user sets the visualization to area, line, bar, or time, Workbooks understands how to handle the data to create the visualization.
[![Screenshot of a log line chart made from a make-series query](./media/workbooks-chart-visualizations/log-chart-line-make-series.png)](./media/workbooks-chart-visualizations/log-chart-line-make-series.png#lightbox)
@@ -221,4 +221,4 @@ The series setting tab lets you adjust the labels and colors shown for series in
## Next steps
- Learn how to create a [tile in workbooks](workbooks-tile-visualizations.md).
-- Learn how to create [interactive workbooks](workbooks-interactive.md).
\ No newline at end of file
+- Learn how to create [interactive workbooks](workbooks-interactive.md).
diff --git a/articles/azure-netapp-files/azacsnap-installation.md b/articles/azure-netapp-files/azacsnap-installation.md
index 8e5e54cd894a2..110f918d657ff 100644
--- a/articles/azure-netapp-files/azacsnap-installation.md
+++ b/articles/azure-netapp-files/azacsnap-installation.md
@@ -12,7 +12,7 @@ ms.service: azure-netapp-files
ms.workload: storage
ms.tgt_pltfrm: na
ms.topic: how-to
-ms.date: 02/05/2022
+ms.date: 06/01/2022
ms.author: phjensen
---
@@ -215,6 +215,9 @@ This section explains how to enable communication with storage. Ensure the stora
# [SAP HANA](#tab/sap-hana)
+> [!IMPORTANT]
+> If deploying to a centralized virtual machine, it will need to have the SAP HANA client installed and set up so the AzAcSnap user can run `hdbsql` and `hdbuserstore` commands. The SAP HANA client can be downloaded from https://tools.hana.ondemand.com/#hanatools.
+
The snapshot tools communicate with SAP HANA and need a user with appropriate permissions to
initiate and release the database save-point. The following example shows the setup of the SAP
HANA v2 user and the `hdbuserstore` for communication to the SAP HANA database.
diff --git a/articles/azure-netapp-files/azacsnap-preview.md b/articles/azure-netapp-files/azacsnap-preview.md
index 7a1d7d969c265..8a479fe7ea7c5 100644
--- a/articles/azure-netapp-files/azacsnap-preview.md
+++ b/articles/azure-netapp-files/azacsnap-preview.md
@@ -12,7 +12,7 @@ ms.service: azure-netapp-files
ms.workload: storage
ms.tgt_pltfrm: na
ms.topic: reference
-ms.date: 03/07/2022
+ms.date: 06/01/2022
ms.author: phjensen
---
@@ -341,7 +341,7 @@ The following example commands set up a user (AZACSNAP) in the Oracle database,
1. Copy the ZIP file to the target system (for example, the centralized virtual machine running AzAcSnap).
- > [!NOTE]
+ > [!IMPORTANT]
> If deploying to a centralized virtual machine, it will need to have the Oracle instant client installed and set up so the AzAcSnap user can
> run `sqlplus` commands. The Oracle Instant Client can be downloaded from https://www.oracle.com/database/technologies/instant-client/linux-x86-64-downloads.html.
> In order for SQL\*Plus to run correctly, download both the required package (for example, Basic Light Package) and the optional SQL\*Plus tools package.
diff --git a/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md b/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
index 8af49d6401613..751909e8192fb 100644
--- a/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
+++ b/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
@@ -107,7 +107,7 @@ This section provides references to SAP on Azure solutions.
### SAP AnyDB
-* [SAP System on Oracle Database on Azure - Azure Architecture Center](/azure/architecture/example-scenario/apps/sap-on-oracle)
+* [SAP System on Oracle Database on Azure - Azure Architecture Center](/azure/architecture/example-scenario/apps/sap-production)
* [Oracle Azure Virtual Machines DBMS deployment for SAP workload - Azure Virtual Machines](../virtual-machines/workloads/sap/dbms_guide_oracle.md#oracle-configuration-guidelines-for-sap-installations-in-azure-vms-on-linux)
* [Deploy SAP AnyDB (Oracle 19c) with Azure NetApp Files](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/deploy-sap-anydb-oracle-19c-with-azure-netapp-files/ba-p/2064043)
* [Manual Recovery Guide for SAP Oracle 19c on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-oracle-19c-on-azure-vms-from-azure/ba-p/3242408)
diff --git a/articles/azure-portal/supportability/how-to-create-azure-support-request.md b/articles/azure-portal/supportability/how-to-create-azure-support-request.md
index 22bfc5e0e79c6..41c5d1d139f15 100644
--- a/articles/azure-portal/supportability/how-to-create-azure-support-request.md
+++ b/articles/azure-portal/supportability/how-to-create-azure-support-request.md
@@ -133,4 +133,4 @@ Follow these links to learn more:
* [Azure support ticket REST API](/rest/api/support)
* Engage with us on [Twitter](https://twitter.com/azuresupport)
* Get help from your peers in the [Microsoft Q&A question page](/answers/products/azure)
-* Learn more in [Azure Support FAQ](https://azure.microsoft.com/support/faq)
+* Learn more in [Azure Support FAQ](https://azure.microsoft.com/support/faq)
\ No newline at end of file
diff --git a/articles/azure-resource-manager/custom-providers/tutorial-custom-providers-function-authoring.md b/articles/azure-resource-manager/custom-providers/tutorial-custom-providers-function-authoring.md
index 6928d99f269fb..20518bd5ba8bd 100644
--- a/articles/azure-resource-manager/custom-providers/tutorial-custom-providers-function-authoring.md
+++ b/articles/azure-resource-manager/custom-providers/tutorial-custom-providers-function-authoring.md
@@ -3,7 +3,7 @@ title: Author a RESTful endpoint
description: This tutorial shows how to author a RESTful endpoint for custom providers. It details how to handle requests and responses for the supported RESTful HTTP methods.
author: jjbfour
ms.topic: tutorial
-ms.date: 01/13/2021
+ms.date: 05/06/2022
ms.author: jobreen
---
@@ -24,7 +24,7 @@ In this tutorial, you update the function app to work as a RESTful endpoint for
- **POST**: Trigger an action
- **GET (collection)**: List all existing resources
- For this tutorial, you use Azure Table storage. But any database or storage service can work.
+ For this tutorial, you use Azure Table storage, but any database or storage service works.
## Partition custom resources in storage
@@ -36,7 +36,7 @@ The following example shows an `x-ms-customproviders-requestpath` header for a c
X-MS-CustomProviders-RequestPath: /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomProviders/resourceProviders/{resourceProviderName}/{myResourceType}/{myResourceName}
```
-Based on the example's `x-ms-customproviders-requestpath` header, you can create the *partitionKey* and *rowKey* parameters for your storage as shown in the following table:
+Based on the `x-ms-customproviders-requestpath` header, you can create the *partitionKey* and *rowKey* parameters for your storage as shown in the following table:
Parameter | Template | Description
---|---|---
@@ -60,6 +60,7 @@ public class CustomResource : ITableEntity
public ETag ETag { get; set; }
}
```
+
**CustomResource** is a simple, generic class that accepts any input data. It's based on **ITableEntity**, which is used to store data. The **CustomResource** class implements all properties from interface **ITableEntity**: **timestamp**, **eTag**, **partitionKey**, and **rowKey**.
## Support custom provider RESTful methods
@@ -93,7 +94,7 @@ public static async Task TriggerCustomAction(HttpRequestMes
}
```
-The **TriggerCustomAction** method accepts an incoming request and simply echoes back the response with a status code.
+The **TriggerCustomAction** method accepts an incoming request and echoes back the response with a status code.
### Create a custom resource
@@ -134,7 +135,7 @@ public static async Task CreateCustomResource(HttpRequestMe
}
```
-The **CreateCustomResource** method updates the incoming request to include the Azure-specific fields **id**, **name**, and **type**. These fields are top-level properties used by services across Azure. They let the custom provider interoperate with other services like Azure Policy, Azure Resource Manager Templates, and Azure Activity Log.
+The **CreateCustomResource** method updates the incoming request to include the Azure-specific fields **id**, **name**, and **type**. These fields are top-level properties used by services across Azure. They let the custom provider interoperate with other services like Azure Policy, Azure Resource Manager templates, and Azure Activity Log.
Property | Example | Description
---|---|---
diff --git a/articles/azure-resource-manager/custom-providers/tutorial-resource-onboarding.md b/articles/azure-resource-manager/custom-providers/tutorial-resource-onboarding.md
index 2b5de01b926a7..f13472be39b44 100644
--- a/articles/azure-resource-manager/custom-providers/tutorial-resource-onboarding.md
+++ b/articles/azure-resource-manager/custom-providers/tutorial-resource-onboarding.md
@@ -1,36 +1,36 @@
---
-title: Tutorial - resource onboarding
+title: Extend resources with custom providers
description: Resource onboarding through custom providers allows you to manipulate and extend existing Azure resources.
ms.topic: tutorial
ms.author: jobreen
author: jjbfour
-ms.date: 09/17/2019
+ms.date: 05/06/2022
---
-# Tutorial: Resource onboarding with Azure Custom Providers
+# Extend resources with custom providers
-In this tutorial, you'll deploy to Azure a custom resource provider that extends the Azure Resource Manager API with the Microsoft.CustomProviders/associations resource type. The tutorial shows how to extend existing resources that are outside the resource group where the custom provider instance is located. In this tutorial, the custom resource provider is powered by an Azure logic app, but you can use any public API endpoint.
+In this tutorial, you deploy a custom resource provider to Azure that extends the Azure Resource Manager API with the Microsoft.CustomProviders/associations resource type. The tutorial shows how to extend existing resources that are outside the resource group where the custom provider instance is located. In this tutorial, the custom resource provider is powered by an Azure logic app, but you can use any public API endpoint.
## Prerequisites
-To complete this tutorial, you need to know:
+To complete this tutorial, make sure you review the following:
* The capabilities of [Azure Custom Providers](overview.md).
* Basic information about [resource onboarding with custom providers](concepts-resource-onboarding.md).
## Get started with resource onboarding
-In this tutorial, there are two pieces that need to be deployed: the custom provider and the association. To make the process easier, you can optionally use a single template that deploys both.
+In this tutorial, there are two pieces that need to be deployed: **the custom provider** and **the association**. To make the process easier, you can optionally use a single template that deploys both.
The template will use these resources:
-* Microsoft.CustomProviders/resourceProviders
-* Microsoft.Logic/workflows
-* Microsoft.CustomProviders/associations
+* [Microsoft.CustomProviders/resourceProviders](/azure/templates/microsoft.customproviders/resourceproviders)
+* [Microsoft.Logic/workflows](/azure/templates/microsoft.logic/workflows)
+* [Microsoft.CustomProviders/associations](/azure/templates/microsoft.customproviders/associations)
```json
{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"location": {
@@ -76,7 +76,7 @@ The template will use these resources:
"resources": [
{
"type": "Microsoft.Resources/deployments",
- "apiVersion": "2017-05-10",
+ "apiVersion": "2021-04-01",
"condition": "[empty(parameters('customResourceProviderId'))]",
"name": "customProviderInfrastructureTemplate",
"properties": {
@@ -93,7 +93,7 @@ The template will use these resources:
"resources": [
{
"type": "Microsoft.Logic/workflows",
- "apiVersion": "2017-07-01",
+ "apiVersion": "2019-05-01",
"name": "[parameters('logicAppName')]",
"location": "[parameters('location')]",
"properties": {
@@ -160,7 +160,7 @@ The template will use these resources:
"name": "associations",
"mode": "Secure",
"routingType": "Webhook,Cache,Extension",
- "endpoint": "[[listCallbackURL(concat(resourceId('Microsoft.Logic/workflows', parameters('logicAppName')), '/triggers/CustomProviderWebhook'), '2017-07-01').value]"
+ "endpoint": "[[listCallbackURL(concat(resourceId('Microsoft.Logic/workflows', parameters('logicAppName')), '/triggers/CustomProviderWebhook'), '2019-05-01').value]"
}
]
}
@@ -202,7 +202,7 @@ The template will use these resources:
The first part of the template deploys the custom provider infrastructure. This infrastructure defines the effect of the associations resource. If you're not familiar with custom providers, see [Custom provider basics](overview.md).
-Let's deploy the custom provider infrastructure. Either copy, save, and deploy the preceding template, or follow along and deploy the infrastructure by using the Azure portal.
+Let's deploy the custom provider infrastructure. Either copy, save, and deploy the preceding template, or follow along and deploy the infrastructure using the Azure portal.
1. Go to the [Azure portal](https://portal.azure.com).
@@ -214,7 +214,7 @@ Let's deploy the custom provider infrastructure. Either copy, save, and deploy t
![Select Add](media/tutorial-resource-onboarding/templatesadd.png)
-4. Under **General**, enter a **Name** and **Description** for the new template:
+4. Under **General**, enter a *Name* and *Description* for the new template:
![Template name and description](media/tutorial-resource-onboarding/templatesdescription.png)
@@ -258,13 +258,13 @@ Let's deploy the custom provider infrastructure. Either copy, save, and deploy t
After you have the custom provider infrastructure set up, you can easily deploy more associations. The resource group for additional associations doesn't have to be the same as the resource group where you deployed the custom provider infrastructure. To create an association, you need to have Microsoft.CustomProviders/resourceproviders/write permissions on the specified Custom Resource Provider ID.
-1. Go to the custom provider **Microsoft.CustomProviders/resourceProviders** resource in the resource group of the previous deployment. You'll need to select the **Show hidden types** check box:
+1. Go to the custom provider **Microsoft.CustomProviders/resourceProviders** resource in the resource group of the previous deployment. You need to select the **Show hidden types** check box:
![Go to the resource](media/tutorial-resource-onboarding/showhidden.png)
2. Copy the Resource ID property of the custom provider.
-3. Search for **templates** in **All Services** or by using the main search box:
+3. Search for *templates* in **All Services** or by using the main search box:
![Search for templates](media/tutorial-resource-onboarding/templates.png)
@@ -278,9 +278,11 @@ After you have the custom provider infrastructure set up, you can easily deploy
![New associations resource](media/tutorial-resource-onboarding/createdassociationresource.png)
-If you want, you can go back to the logic app **Run history** and see that another call was made to the logic app. You can update the logic app to augment additional functionality for each created association.
+You can go back to the logic app **Run history** and see that another call was made to the logic app. You can update the logic app to add functionality for each created association.
-## Getting help
+## Next steps
-If you have questions about Azure Custom Providers, try asking them on [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-custom-providers). A similar question might have already been answered, so check first before posting. Add the tag `azure-custom-providers` to get a fast response!
+In this article, you deployed a custom resource provider to Azure that extends the Azure Resource Manager API with the Microsoft.CustomProviders/associations resource type. To continue learning about custom providers, see:
+* [Deploy associations for a custom provider using Azure Policy](./concepts-built-in-policy.md)
+* [Azure Custom Providers resource onboarding overview](./concepts-resource-onboarding.md)
diff --git a/articles/azure-signalr/media/signalr-concept-performance/server-load.png b/articles/azure-signalr/media/signalr-concept-performance/server-load.png
new file mode 100644
index 0000000000000..7baacf9d37c0f
Binary files /dev/null and b/articles/azure-signalr/media/signalr-concept-performance/server-load.png differ
diff --git a/articles/azure-signalr/signalr-concept-performance.md b/articles/azure-signalr/signalr-concept-performance.md
index 3328037c3eece..3a2eb072137da 100644
--- a/articles/azure-signalr/signalr-concept-performance.md
+++ b/articles/azure-signalr/signalr-concept-performance.md
@@ -13,6 +13,19 @@ One of the key benefits of using Azure SignalR Service is the ease of scaling Si
In this guide, we'll introduce the factors that affect SignalR application performance. We'll describe typical performance in different use-case scenarios. In the end, we'll introduce the environment and tools that you can use to generate a performance report.
+## Quick evaluation using metrics
+Before going through the factors that affect performance, let's first introduce an easy way to monitor the pressure on your service. There's a metric called **Server Load** in the Azure portal.
+
+![Screenshot of the Server Load metric of Azure SignalR in the Azure portal. The metric shows Server Load at about 8 percent usage.](./media/signalr-concept-performance/server-load.png "Server Load")
+
+It shows the computing pressure on your SignalR service. You can test your own scenario and check this metric to decide whether to scale up. The latency inside the SignalR service remains low if the Server Load is below 70%.
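+
+To check the metric outside the portal, a sketch with the Azure CLI, assuming the underlying metric name `ServerLoad` (the resource ID is a placeholder):
+
+```azurecli
+az monitor metrics list --resource <signalr-resource-id> \
+    --metric "ServerLoad" --interval PT5M --aggregation Maximum
+```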
+
+> [!NOTE]
+> If you're using unit 50 or unit 100 **and** your scenario is mainly sending to small groups (group size <100) or single connections, check [sending to small group](#small-group) or [sending to connection](#send-to-connection) for reference. In those scenarios, there's a large routing cost that isn't included in the Server Load.
+
+The following sections provide detailed concepts for evaluating performance.
+
## Term definitions
*Inbound*: The incoming message to Azure SignalR Service.
diff --git a/articles/azure-video-indexer/connect-to-azure.md b/articles/azure-video-indexer/connect-to-azure.md
index ef2e82847795c..923f4dfbc8e0b 100644
--- a/articles/azure-video-indexer/connect-to-azure.md
+++ b/articles/azure-video-indexer/connect-to-azure.md
@@ -16,7 +16,6 @@ When creating an Azure Video Indexer account, you can choose a free trial accoun
1. [Azure Video Indexer portal](https://aka.ms/vi-portal-link)
2. [Azure portal](https://portal.azure.com/#home)
- 3. [QuickStart ARM template](https://github.com/Azure-Samples/media-services-video-indexer/tree/master/ARM-Samples/Create-Account)
To read more on how to create a **new ARM-Based** Azure Video Indexer account, read this [article](create-video-analyzer-for-media-account.md)
@@ -70,7 +69,7 @@ If the connection to Azure failed, you can attempt to troubleshoot the problem b
### Create and configure a Media Services account
-1. Use the [Azure](https://portal.azure.com/) portal to create an Azure Media Services account, as described in [Create an account](/azure/azure/media-services/previous/media-services-portal-create-account).
+1. Use the [Azure](https://portal.azure.com/) portal to create an Azure Media Services account, as described in [Create an account](/azure/media-services/previous/media-services-portal-create-account).
Make sure the Media Services account was created with the classic APIs.
@@ -88,10 +87,10 @@ If the connection to Azure failed, you can attempt to troubleshoot the problem b
In the new Media Services account, select **Streaming endpoints**. Then select the streaming endpoint and press start.
:::image type="content" alt-text="Screenshot that shows how to specify streaming endpoints." source="./media/create-account/create-ams-account-se.png":::
-4. For Azure Video Indexer to authenticate with Media Services API, an AD app needs to be created. The following steps guide you through the Azure AD authentication process described in [Get started with Azure AD authentication by using the Azure portal](/azure/azure/media-services/previous/media-services-portal-get-started-with-aad):
+4. For Azure Video Indexer to authenticate with Media Services API, an AD app needs to be created. The following steps guide you through the Azure AD authentication process described in [Get started with Azure AD authentication by using the Azure portal](/azure/media-services/previous/media-services-portal-get-started-with-aad):
1. In the new Media Services account, select **API access**.
- 2. Select [Service principal authentication method](/azure/azure/media-services/previous/media-services-portal-get-started-with-aad).
+ 2. Select [Service principal authentication method](/azure/media-services/previous/media-services-portal-get-started-with-aad).
3. Get the client ID and client secret
After you select **Settings**->**Keys**, add **Description**, press **Save**, and the key value gets populated.
diff --git a/articles/azure-video-indexer/customize-person-model-with-website.md b/articles/azure-video-indexer/customize-person-model-with-website.md
index 9cffa583c3449..2a240ca20bc0a 100644
--- a/articles/azure-video-indexer/customize-person-model-with-website.md
+++ b/articles/azure-video-indexer/customize-person-model-with-website.md
@@ -5,7 +5,7 @@ services: azure-video-analyzer
author: Juliako
manager: femila
ms.topic: article
-ms.date: 12/16/2020
+ms.date: 05/31/2022
ms.author: juliako
---
@@ -34,7 +34,7 @@ You can use the Azure Video Indexer website to edit faces that were detected in
## Create a new Person model
1. Select the **+ Add model** button on the right.
-1. Enter the name of the model. You can now add new people and faces to the new Person model.
+1. Enter the name of the model and select the check button to save it. You can now add new people and faces to the new Person model.
1. Select the list menu button and choose **+ Add person**.
> [!div class="mx-imgBorder"]
@@ -77,7 +77,7 @@ You can delete any Person model that you created in your account. However, you c
## Manage existing people in a Person model
-To look at the contents of any of your Person models, select the arrow next to the name of the Person model. The drop-down shows you all of the people in that particular Person model. If you select the list menu button next to each of the people, you see manage, rename, and delete options.
+To look at the contents of any of your Person models, select the arrow next to the name of the Person model. You can then view all of the people in that particular Person model. If you select the list menu button next to a person, you see manage, rename, and delete options.
![Screenshot shows a contextual menu with options to Manage, Rename, and Delete.](./media/customize-face-model/manage-people.png)
@@ -108,7 +108,7 @@ You can add more faces to the person by selecting **Add images**.
Select the image you wish to delete and click **Delete**.
-#### Rename and delete the person
+#### Rename and delete a person
You can use the manage pane to rename the person and to delete the person from the Person model.
@@ -174,6 +174,10 @@ To delete a detected face in your video, go to the Insights pane and select the
The person, if they had been named, will also continue to exist in the Person model that was used to index the video from which you deleted the face unless you specifically delete the person from the Person model.
+## Optimize the ability of your model to recognize a person
+
+To optimize your model's ability to recognize a person, upload as many different images of that person as possible, from different angles. For optimal results, use high-resolution images.
+
## Next steps
[Customize Person model using APIs](customize-person-model-with-api.md)
diff --git a/articles/azure-vmware/tutorial-configure-networking.md b/articles/azure-vmware/tutorial-configure-networking.md
index f67db24e49d58..f63be0a17cbfb 100644
--- a/articles/azure-vmware/tutorial-configure-networking.md
+++ b/articles/azure-vmware/tutorial-configure-networking.md
@@ -3,7 +3,7 @@ title: Tutorial - Configure networking for your VMware private cloud in Azure
description: Learn to create and configure the networking needed to deploy your private cloud in Azure
ms.topic: tutorial
ms.custom: contperf-fy22q1
-ms.date: 07/30/2021
+ms.date: 05/31/2022
---
@@ -29,7 +29,7 @@ In this tutorial, you learn how to:
## Connect with the Azure vNet connect feature
-You can use the **Azure vNet connect** feature to use an existing vNet or create a new vNet to connect to Azure VMware Solution.
+You can use the **Azure vNet connect** feature to use an existing vNet or create a new vNet to connect to Azure VMware Solution. **Azure vNet connect** configures vNet connectivity; it doesn't record configuration state. Browse the Azure portal to check which settings have been configured.
>[!NOTE]
>Address space in the vNet cannot overlap with the Azure VMware Solution private cloud CIDR.
@@ -42,6 +42,7 @@ Before selecting an existing vNet, there are specific requirements that must be
1. In the same region as Azure VMware Solution private cloud.
1. In the same resource group as Azure VMware Solution private cloud.
1. vNet must contain an address space that doesn't overlap with Azure VMware Solution.
+1. Validate that the solution design is within [Azure VMware Solution limits](/azure/azure-resource-manager/management/azure-subscription-service-limits).
### Select an existing vNet
diff --git a/articles/azure-web-pubsub/includes/cli-awps-creation.md b/articles/azure-web-pubsub/includes/cli-awps-creation.md
index 34d3a9a435fa3..d199a1c49b7e4 100644
--- a/articles/azure-web-pubsub/includes/cli-awps-creation.md
+++ b/articles/azure-web-pubsub/includes/cli-awps-creation.md
@@ -6,17 +6,17 @@ ms.date: 08/06/2021
ms.author: lianwei
---
-Use the Azure CLI [az webpubsub create](/cli/azure/webpubsub#az-webpubsub-create) command to create a Web PubSub in the resource group from the previous step, providing the following information:
+Run [az extension add](/cli/azure/extension#az-extension-add) to install or upgrade the *webpubsub* extension to the current version.
-- Resource name: A string of 3 to 24 characters that can contain only numbers (0-9), letters (a-z, A-Z), and hyphens (-)
+```azurecli-interactive
+az extension add --upgrade --name webpubsub
+```
+
+Use the Azure CLI [az webpubsub create](/cli/azure/webpubsub#az-webpubsub-create) command to create a Web PubSub instance in the resource group you created. The following command creates a `Free_F1` Web PubSub resource under the resource group _myResourceGroup_ in _EastUS_:
> [!Important]
> Each Web PubSub resource must have a unique name. Replace <your-unique-resource-name> with the name of your Web PubSub in the following examples.
-- Resource group name: **myResourceGroup**.
-- The location: **EastUS**.
-- Sku: **Free_F1**
-
```azurecli-interactive
az webpubsub create --name "<your-unique-resource-name>" --resource-group "myResourceGroup" --location "EastUS" --sku Free_F1
```
diff --git a/articles/azure-web-pubsub/includes/cli-delete-resources.md b/articles/azure-web-pubsub/includes/cli-delete-resources.md
index 6e5a829970a13..aefcfccee3d36 100644
--- a/articles/azure-web-pubsub/includes/cli-delete-resources.md
+++ b/articles/azure-web-pubsub/includes/cli-delete-resources.md
@@ -6,7 +6,7 @@ ms.date: 08/06/2021
ms.author: lianwei
---
-Other quickstarts and tutorials in this collection build upon this quickstart. If you plan to continue on to work with subsequent quickstarts and tutorials, you may wish to leave these resources in place.
+If you plan to continue to work with subsequent quickstarts and tutorials, you might want to leave these resources in place.
When no longer needed, you can use the Azure CLI [az group delete](/cli/azure/group) command to remove the resource group and all related resources:
diff --git a/articles/azure-web-pubsub/media/tutorial-serverless-iot/iot-devices-sample.png b/articles/azure-web-pubsub/media/tutorial-serverless-iot/iot-devices-sample.png
new file mode 100644
index 0000000000000..e9ff20bd3ddea
Binary files /dev/null and b/articles/azure-web-pubsub/media/tutorial-serverless-iot/iot-devices-sample.png differ
diff --git a/articles/azure-web-pubsub/toc.yml b/articles/azure-web-pubsub/toc.yml
index a53febf4a2162..d591ae1ef2ef9 100644
--- a/articles/azure-web-pubsub/toc.yml
+++ b/articles/azure-web-pubsub/toc.yml
@@ -37,6 +37,8 @@
href: tutorial-serverless-notification.md
- name: Build a real-time chat app with client authentication
href: quickstart-serverless.md
+ - name: Visualize IoT data with Azure Functions
+ href: tutorial-serverless-iot.md
- name: Publish and subscribe messages
href: tutorial-pub-sub-messages.md
- name: Build a chat app
diff --git a/articles/azure-web-pubsub/tutorial-serverless-iot.md b/articles/azure-web-pubsub/tutorial-serverless-iot.md
new file mode 100644
index 0000000000000..7b060fdcfe2e1
--- /dev/null
+++ b/articles/azure-web-pubsub/tutorial-serverless-iot.md
@@ -0,0 +1,484 @@
+---
+title: Tutorial - Visualize IoT device data from IoT Hub using Azure Web PubSub service and Azure Functions
+description: A tutorial to walk through how to use Azure Web PubSub service and Azure Functions to monitor device data from IoT Hub.
+author: vicancy
+ms.author: lianwei
+ms.service: azure-web-pubsub
+ms.topic: tutorial
+ms.date: 06/01/2022
+---
+
+# Tutorial: Visualize IoT device data from IoT Hub using Azure Web PubSub service and Azure Functions
+
+In this tutorial, you learn how to use Azure Web PubSub service and Azure Functions to build a serverless application with real-time data visualization from IoT Hub.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Build a serverless data visualization app
+> * Use Web PubSub function input and output bindings together with Azure IoT Hub
+> * Run the sample functions locally
+
+## Prerequisites
+
+# [JavaScript](#tab/javascript)
+
+* A code editor, such as [Visual Studio Code](https://code.visualstudio.com/)
+
+* [Node.js](https://nodejs.org/en/download/), version 10.x.
+ > [!NOTE]
+ > For more information about the supported versions of Node.js, see [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages).
+
+* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v3 or higher preferred) to run Azure Function apps locally and deploy to Azure.
+
+* The [Azure CLI](/cli/azure) to manage Azure resources.
+
+---
+
+[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+
+[!INCLUDE [iot-hub-include-create-hub](../../includes/iot-hub-include-create-hub-quickstart.md)]
+
+## Create a Web PubSub instance
+If you already have a Web PubSub instance in your Azure subscription, you can skip this section.
+
+[!INCLUDE [create-instance-cli](includes/cli-awps-creation.md)]
+
+
+## Create and run the functions locally
+
+1. Make sure you have [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) installed. Then create an empty directory for the project and run the following commands from that working directory.
+
+ # [JavaScript](#tab/javascript)
+ ```bash
+ func init --worker-runtime javascript
+ ```
+ ---
+
+2. Update the `extensionBundle` section in `host.json` to version _3.3.0_ or later, which contains Web PubSub support.
+
+```json
+{
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle",
+ "version": "[3.3.*, 4.0.0)"
+ }
+}
+```
+
+3. Create an `index` function to read and host a static web page for clients.
+ ```bash
+ func new -n index -t HttpTrigger
+ ```
+ # [JavaScript](#tab/javascript)
+ - Update `index/index.js` with the following code, which serves the HTML content as a static site.
+ ```js
+ var fs = require("fs");
+ var path = require("path");
+
+ module.exports = function (context, req) {
+ let index = path.join(
+ context.executionContext.functionDirectory,
+ "index.html"
+ );
+ fs.readFile(index, "utf8", function (err, data) {
+ if (err) {
+ console.log(err);
+ context.done(err);
+ return;
+ }
+ context.res = {
+ status: 200,
+ headers: {
+ "Content-Type": "text/html",
+ },
+ body: data,
+ };
+ context.done();
+ });
+ };
+
+ ```
+
+4. Create an _index.html_ file in the same folder as _index.js_. The page below is a minimal sketch: it asks the `negotiate` function for a client access URL, connects to the service over WebSocket, and renders each device's latest temperature. The markup and rendering logic are illustrative; the payload fields match what the `messagehandler` function (created later in this tutorial) sends:
+
+ ```html
+ <!DOCTYPE html>
+ <html>
+ <head>
+     <meta charset="utf-8" />
+     <title>Temperature Real-time Data</title>
+ </head>
+ <body>
+     <h1>Temperature Real-time Data</h1>
+     <p><span id="device-count">0</span> devices</p>
+     <table id="devices"></table>
+     <script>
+         // Latest reading per device, keyed by device ID.
+         const devices = {};
+
+         function render() {
+             document.getElementById("device-count").textContent = Object.keys(devices).length;
+             document.getElementById("devices").innerHTML = Object.entries(devices)
+                 .map(([id, d]) => `<tr><td>${id}</td><td>${d.temperature}</td><td>${d.date}</td></tr>`)
+                 .join("");
+         }
+
+         (async function () {
+             // Ask the negotiate function for a client access URL, then connect.
+             const res = await fetch("/api/negotiate");
+             const { url } = await res.json();
+             const ws = new WebSocket(url);
+             ws.onmessage = (event) => {
+                 // Payload shape produced by the messagehandler function.
+                 const message = JSON.parse(event.data);
+                 devices[message.DeviceId] = {
+                     temperature: message.IotData.temperature,
+                     date: message.MessageDate,
+                 };
+                 render();
+             };
+         })();
+     </script>
+ </body>
+ </html>
+ ```
+
+5. Create a `negotiate` function to help clients get the service connection URL with an access token.
+ ```bash
+ func new -n negotiate -t HttpTrigger
+ ```
+ # [JavaScript](#tab/javascript)
+ - Update `negotiate/function.json` to include the [`WebPubSubConnection`](reference-functions-bindings.md#input-binding) input binding, using the following JSON code.
+ ```json
+ {
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req"
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ },
+ {
+ "type": "webPubSubConnection",
+ "name": "connection",
+ "hub": "%hubName%",
+ "direction": "in"
+ }
+ ]
+ }
+ ```
+ - Update `negotiate/index.js` to return the `connection` binding, which contains the generated token.
+ ```js
+ module.exports = function (context, req, connection) {
+ // Add your own auth logic here
+ context.res = { body: connection };
+ context.done();
+ };
+ ```
+
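+ Once the function host is running (step 8 below), you can sanity-check this endpoint directly. The response is the `connection` object, which carries the client access URL that the web page uses to connect:
+
+ ```bash
+ curl http://localhost:7071/api/negotiate
+ ```
+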
+6. Create a `messagehandler` function to generate notifications by using the `"IoT Hub (Event Hub)"` template.
+ ```bash
+ func new --template "IoT Hub (Event Hub)" --name messagehandler
+ ```
+ # [JavaScript](#tab/javascript)
+ - Update _messagehandler/function.json_ to add the [Web PubSub output binding](reference-functions-bindings.md#output-binding) with the following JSON code. Note that the variable `%hubName%` is used as the hub name for both the IoT Hub `eventHubName` and the Web PubSub hub.
+ ```json
+ {
+ "bindings": [
+ {
+ "type": "eventHubTrigger",
+ "name": "IoTHubMessages",
+ "direction": "in",
+ "eventHubName": "%hubName%",
+ "connection": "IOTHUBConnectionString",
+ "cardinality": "many",
+ "consumerGroup": "$Default",
+ "dataType": "string"
+ },
+ {
+ "type": "webPubSub",
+ "name": "actions",
+ "hub": "%hubName%",
+ "direction": "out"
+ }
+ ]
+ }
+ ```
+ - Update `messagehandler/index.js` with the following code. It sends every message from IoT Hub to every client connected to the Web PubSub service by using the Web PubSub output binding.
+ ```js
+ module.exports = function (context, IoTHubMessages) {
+ IoTHubMessages.forEach((message) => {
+ const deviceMessage = JSON.parse(message);
+ context.log(`Processed message: ${message}`);
+ context.bindings.actions = {
+ actionName: "sendToAll",
+ data: JSON.stringify({
+ IotData: deviceMessage,
+ MessageDate: deviceMessage.date || new Date().toISOString(),
+ DeviceId: deviceMessage.deviceId,
+ }),
+ };
+ });
+
+ context.done();
+ };
+ ```
+
+7. Update the Function settings
+
+ 1. Add the `hubName` setting, replacing `{YourIoTHubName}` with the hub name you used when creating your IoT Hub:
+
+ ```bash
+ func settings add hubName "{YourIoTHubName}"
+ ```
+
+ 2. Get the **Service Connection String** for IoT Hub by using the following CLI command:
+
+ ```azcli
+ az iot hub connection-string show --policy-name service --hub-name {YourIoTHubName} --output table --default-eventhub
+ ```
+
+ And set `IOTHUBConnectionString` by using the following command, replacing `<connection-string>` with the value:
+
+ ```bash
+ func settings add IOTHUBConnectionString "<connection-string>"
+ ```
+
+ 3. Get the **Connection String** for Web PubSub by using the following CLI command:
+
+ ```azcli
+ az webpubsub key show --name "<your-unique-resource-name>" --resource-group "myResourceGroup" --query primaryConnectionString
+ ```
+
+ And set `WebPubSubConnectionString` by using the following command, replacing `<connection-string>` with the value:
+
+ ```bash
+ func settings add WebPubSubConnectionString "<connection-string>"
+ ```
+
+ > [!NOTE]
+ > The `IoT Hub (Event Hub)` function trigger used in the sample depends on Azure Storage, but you can use the local storage emulator when the function is running locally. If you get an error such as `There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.`, download and enable the [Storage Emulator](../storage/common/storage-use-emulator.md).
+
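+ For local development, one way to satisfy that dependency (assuming the local storage emulator is installed and running) is to point `AzureWebJobsStorage` at development storage:
+
+ ```bash
+ func settings add AzureWebJobsStorage "UseDevelopmentStorage=true"
+ ```
+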
+8. Run the function locally
+
+ Now you can run your local function by using the following command.
+
+ ```bash
+ func start
+ ```
+
+ Check the running logs, and then visit the locally hosted static page at `http://localhost:7071/api/index`.
+
+## Run the device to send data
+
+### Register a device
+
+A device must be registered with your IoT hub before it can connect.
+
+If you already have a device registered in your IoT hub, you can skip this section.
+
+1. Run the [az iot hub device-identity create](/cli/azure/iot/hub/device-identity#az-iot-hub-device-identity-create) command in Azure Cloud Shell to create the device identity.
+
+ **YourIoTHubName**: Replace this placeholder below with the name you chose for your IoT hub.
+
+ ```azurecli-interactive
+ az iot hub device-identity create --hub-name {YourIoTHubName} --device-id simDevice
+ ```
+
+2. Run the [az iot hub device-identity connection-string show](/cli/azure/iot/hub/device-identity/connection-string#az-iot-hub-device-identity-connection-string-show) command in Azure Cloud Shell to get the _device connection string_ for the device you just registered:
+
+ **YourIoTHubName**: Replace this placeholder below with the name you chose for your IoT hub.
+
+ ```azurecli-interactive
+ az iot hub device-identity connection-string show --hub-name {YourIoTHubName} --device-id simDevice --output table
+ ```
+
+ Make a note of the device connection string, which looks like:
+
+ `HostName={YourIoTHubName}.azure-devices.net;DeviceId=simDevice;SharedAccessKey={YourSharedAccessKey}`
+
+- For quickest results, simulate temperature data using the [Raspberry Pi Azure IoT Online Simulator](https://azure-samples.github.io/raspberry-pi-web-simulator/#Getstarted). Paste in the **device connection string**, and select the **Run** button.
+
+- If you have a physical Raspberry Pi and BME280 sensor, you may measure and report real temperature and humidity values by following the [Connect Raspberry Pi to Azure IoT Hub (Node.js)](/azure/iot-hub/iot-hub-raspberry-pi-kit-node-get-started) tutorial.
+
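+If you prefer to simulate a device from your own machine, the following sketch sends a random temperature reading every two seconds. It assumes the `azure-iot-device` and `azure-iot-device-mqtt` npm packages are installed, and the payload fields are chosen to match what `messagehandler/index.js` reads:
+
+```js
+const { Client, Message } = require("azure-iot-device");
+const { Mqtt } = require("azure-iot-device-mqtt");
+
+// Paste the device connection string from the previous step.
+const connectionString = "{YourDeviceConnectionString}";
+const client = Client.fromConnectionString(connectionString, Mqtt);
+
+setInterval(() => {
+  const payload = JSON.stringify({
+    deviceId: "simDevice",
+    temperature: 20 + Math.random() * 10, // simulated reading
+    date: new Date().toISOString(),
+  });
+  client.sendEvent(new Message(payload), (err) => {
+    if (err) console.error(err.toString());
+  });
+}, 2000);
+```
+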
+## Run the visualization website
+Open the function host's index page (`http://localhost:7071/api/index`) to view the real-time dashboard. Register multiple devices, and the dashboard shows updates for all of them in real time. Open multiple browsers, and every page updates in real time.
+
+:::image type="content" source="media/tutorial-serverless-iot/iot-devices-sample.png" alt-text="Screenshot of multiple devices data visualization using Web PubSub service.":::
+
+## Clean up resources
+
+[!INCLUDE [cli-delete-resources](./includes/cli-delete-resources.md)]
+
+## Next steps
+
+In this tutorial, you learned how to build a serverless application with real-time IoT data visualization. Now you can start building your own application.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Create a simple chatroom with Azure Web PubSub](https://azure.github.io/azure-webpubsub/getting-started/create-a-chat-app/js-handle-events)
+
+> [!div class="nextstepaction"]
+> [Azure Web PubSub bindings for Azure Functions](https://azure.github.io/azure-webpubsub/references/functions-bindings)
+
+> [!div class="nextstepaction"]
+> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
diff --git a/articles/backup/backup-azure-manage-vms.md b/articles/backup/backup-azure-manage-vms.md
index c6a9b48085436..62c0eddd37a2e 100644
--- a/articles/backup/backup-azure-manage-vms.md
+++ b/articles/backup/backup-azure-manage-vms.md
@@ -17,7 +17,7 @@ In the Azure portal, the Recovery Services vault dashboard provides access to va
You can manage backups by using the dashboard and by drilling down to individual VMs. To begin machine backups, open the vault on the dashboard:
-![Full dashboard view with slider](./media/backup-azure-manage-vms/bottom-slider.png)
+:::image type="content" source="./media/backup-azure-manage-vms/bottom-slider.png" alt-text="Screenshot showing the full dashboard view with slider.":::
[!INCLUDE [backup-center.md](../../includes/backup-center.md)]
@@ -28,30 +28,30 @@ To view VMs on the vault dashboard:
1. Sign in to the [Azure portal](https://portal.azure.com/).
1. On the left menu, select **All services**.
- ![Select All services](./media/backup-azure-manage-vms/select-all-services.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/select-all-services.png" alt-text="Screenshot showing to select All services.":::
1. In the **All services** dialog box, enter *Recovery Services*. The list of resources filters according to your input. In the list of resources, select **Recovery Services vaults**.
- ![Enter and choose Recovery Services vaults](./media/backup-azure-manage-vms/all-services.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/all-services.png" alt-text="Screenshot showing to enter and choose Recovery Services vaults.":::
The list of Recovery Services vaults in the subscription appears.
1. For ease of use, select the pin icon next to your vault name and select **Pin to dashboard**.
1. Open the vault dashboard.
- ![Open the vault dashboard and Settings pane](./media/backup-azure-manage-vms/full-view-rs-vault.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/full-view-rs-vault.png" alt-text="Screenshot showing to open the vault dashboard and Settings pane.":::
1. On the **Backup Items** tile, select **Azure Virtual Machine**.
- ![Open the Backup Items tile](./media/backup-azure-manage-vms/azure-virtual-machine.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/azure-virtual-machine.png" alt-text="Screenshot showing to open the Backup Items tile.":::
1. On the **Backup Items** pane, you can view the list of protected VMs. In this example, the vault protects one virtual machine: *myVMR1*.
- ![View the Backup Items pane](./media/backup-azure-manage-vms/backup-items-blade-select-item.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/backup-items-blade-select-item.png" alt-text="Screenshot showing to view the Backup Items pane.":::
1. From the vault item's dashboard, you can modify backup policies, run an on-demand backup, stop or resume protection of VMs, delete backup data, view restore points, and run a restore.
- ![The Backup Items dashboard and the Settings pane](./media/backup-azure-manage-vms/item-dashboard-settings.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/item-dashboard-settings.png" alt-text="Screenshot showing the Backup Items dashboard and the Settings pane.":::
## Manage backup policy for a VM
@@ -70,17 +70,17 @@ To manage a backup policy:
1. Sign in to the [Azure portal](https://portal.azure.com/). Open the vault dashboard.
2. On the **Backup Items** tile, select **Azure Virtual Machine**.
- ![Open the Backup Items tile](./media/backup-azure-manage-vms/azure-virtual-machine.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/azure-virtual-machine.png" alt-text="Screenshot showing to open the Backup Items tile.":::
3. On the **Backup Items** pane, you can view the list of protected VMs and last backup status with latest restore points time.
- ![View the Backup Items pane](./media/backup-azure-manage-vms/backup-items-blade-select-item.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/backup-items-blade-select-item.png" alt-text="Screenshot showing to view the Backup Items pane.":::
4. From the vault item's dashboard, you can select a backup policy.
* To switch policies, select a different policy and then select **Save**. The new policy is immediately applied to the vault.
- ![Choose a backup policy](./media/backup-azure-manage-vms/backup-policy-create-new.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/backup-policy-create-new.png" alt-text="Screenshot showing to choose a backup policy.":::
## Run an on-demand backup
@@ -97,13 +97,13 @@ To trigger an on-demand backup:
1. On the [vault item dashboard](#view-vms-on-the-dashboard), under **Protected Item**, select **Backup Item**.
- ![The Backup now option](./media/backup-azure-manage-vms/backup-now-button.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/backup-now-button.png" alt-text="Screenshot showing the Backup now option.":::
2. From **Backup Management Type**, select **Azure Virtual Machine**. The **Backup Item (Azure Virtual Machine)** pane appears.
3. Select a VM and select **Backup Now** to create an on-demand backup. The **Backup Now** pane appears.
4. In the **Retain Backup Till** field, specify a date for the backup to be retained.
- ![The Backup Now calendar](./media/backup-azure-manage-vms/backup-now-check.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/backup-now-check.png" alt-text="Screenshot showing the Backup Now calendar.":::
5. Select **OK** to run the backup job.
@@ -127,7 +127,7 @@ To stop protection and retain data of a VM:
1. On the [vault item's dashboard](#view-vms-on-the-dashboard), select **Stop backup**.
2. Choose **Retain Backup Data**, and confirm your selection as needed. Add a comment if you want. If you aren't sure of the item's name, hover over the exclamation mark to view the name.
- ![Retain Backup data](./media/backup-azure-manage-vms/retain-backup-data.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/retain-backup-data.png" alt-text="Screenshot showing to retain Backup data.":::
A notification lets you know that the backup jobs have been stopped.
@@ -142,7 +142,7 @@ To stop protection and delete data of a VM:
1. On the [vault item's dashboard](#view-vms-on-the-dashboard), select **Stop backup**.
2. Choose **Delete Backup Data**, and confirm your selection as needed. Enter the name of the backup item and add a comment if you want.
- ![Delete backup data](./media/backup-azure-manage-vms/delete-backup-data.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/delete-backup-data.png" alt-text="Screenshot showing to delete backup data.":::
> [!NOTE]
> After completing the delete operation the backed up data will be retained for 14 days in the [soft deleted state](./soft-delete-virtual-machines.md). In addition, you can also [enable or disable soft delete](./backup-azure-security-feature-cloud.md#enabling-and-disabling-soft-delete).
@@ -158,7 +158,7 @@ To resume protection for a VM:
2. Follow the steps in [Manage backup policies](#manage-backup-policy-for-a-vm) to assign the policy for the VM. You don't need to choose the VM's initial protection policy.
3. After you apply the backup policy to the VM, you see the following message:
- ![Message indicating a successfully protected VM](./media/backup-azure-manage-vms/success-message.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/success-message.png" alt-text="Screenshot showing message indicating a successfully protected VM.":::
## Delete backup data
@@ -166,16 +166,16 @@ There are two ways to delete a VM's backup data:
* From the vault item dashboard, select Stop backup and follow the instructions for [Stop protection and delete backup data](#stop-protection-and-delete-backup-data) option.
- ![Select Stop backup](./media/backup-azure-manage-vms/stop-backup-button.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/stop-backup-button.png" alt-text="Screenshot showing to select Stop backup.":::
* From the vault item dashboard, select Delete backup data. This option is enabled if you had chosen to [Stop protection and retain backup data](#stop-protection-and-retain-backup-data) option during stop VM protection.
- ![Select Delete backup](./media/backup-azure-manage-vms/delete-backup-button.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/delete-backup-button.png" alt-text="Screenshot showing to select Delete backup.":::
* On the [vault item dashboard](#view-vms-on-the-dashboard), select **Delete backup data**.
* Type the name of the backup item to confirm that you want to delete the recovery points.
- ![Delete backup data](./media/backup-azure-manage-vms/delete-backup-data.png)
+ :::image type="content" source="./media/backup-azure-manage-vms/delete-backup-data.png" alt-text="Screenshot showing to delete backup data.":::
* To delete the backup data for the item, select **Delete**. A notification message lets you know that the backup data has been deleted.
diff --git a/articles/cloud-services-extended-support/swap-cloud-service.md b/articles/cloud-services-extended-support/swap-cloud-service.md
index 14971ead4a266..32b20b3e26756 100644
--- a/articles/cloud-services-extended-support/swap-cloud-service.md
+++ b/articles/cloud-services-extended-support/swap-cloud-service.md
@@ -58,7 +58,7 @@ To save compute costs, you can delete one of the cloud services (designated as a
## REST API
-To use the [REST API](/rest/api/compute/load-balancers/swap-public-ip-addresses) to swap to a new cloud services deployment in Azure Cloud Services (extended support), use the following command and JSON configuration:
+To use the [REST API](/rest/api/load-balancer/load-balancers/swap-public-ip-addresses) to swap to a new cloud services deployment in Azure Cloud Services (extended support), use the following command and JSON configuration:
```http
POST https://management.azure.com/subscriptions/subid/providers/Microsoft.Network/locations/westus/setLoadBalancerFrontendPublicIpAddresses?api-version=2021-02-01
diff --git a/articles/cognitive-services/LUIS/concepts/entities.md b/articles/cognitive-services/LUIS/concepts/entities.md
index 05faa5a566628..350f5eb3be877 100644
--- a/articles/cognitive-services/LUIS/concepts/entities.md
+++ b/articles/cognitive-services/LUIS/concepts/entities.md
@@ -133,7 +133,7 @@ You can use entities as a signal for an intent. For example, the presence of a c
| Example utterance | Entity | Intent |
|--|--|--|
-| Book me a _fight to New York_. | City | Book Flight |
+| Book me a _flight to New York_. | City | Book Flight |
| Book me the _main conference room_. | Room | Reserve Room |
## Entities as Feature for entities
diff --git a/articles/cognitive-services/Speech-Service/language-support.md b/articles/cognitive-services/Speech-Service/language-support.md
index 0af3eb80bdeae..3cd3302cb4f62 100644
--- a/articles/cognitive-services/Speech-Service/language-support.md
+++ b/articles/cognitive-services/Speech-Service/language-support.md
@@ -726,47 +726,50 @@ The following neural voices are in public preview.
| Language | Locale | Gender | Voice name | Style support |
|----------------------------------|---------|--------|----------------------------------------|---------------|
-| English (United Kingdom) | `en-GB` | Female | `en-GB-AbbiNeural` New | General |
-| English (United Kingdom) | `en-GB` | Female | `en-GB-BellaNeural` New | General |
-| English (United Kingdom) | `en-GB` | Female | `en-GB-HollieNeural` New | General |
-| English (United Kingdom) | `en-GB` | Female | `en-GB-OliviaNeural` New | General |
-| English (United Kingdom) | `en-GB` | Female | `en-GB-MaisieNeural` New | General, child voice |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-AlfieNeural` New | General |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-ElliotNeural` New | General |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-EthanNeural` New | General |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-NoahNeural` New | General |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-OliverNeural` New | General |
-| English (United Kingdom) | `en-GB` | Male | `en-GB-ThomasNeural` New | General |
-| English (United States) | `en-US` | Male | `en-US-DavisNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| English (United States) | `en-US` | Female | `en-US-JaneNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| English (United States) | `en-US` | Male | `en-US-JasonNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| English (United States) | `en-US` | Female | `en-US-NancyNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| English (United States) | `en-US` | Male | `en-US-TonyNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| French (France) | `fr-FR` | Female | `fr-FR-BrigitteNeural` New | General |
-| French (France) | `fr-FR` | Female | `fr-FR-CelesteNeural` New | General |
-| French (France) | `fr-FR` | Female | `fr-FR-CoralieNeural` New | General |
-| French (France) | `fr-FR` | Female | `fr-FR-JacquelineNeural` New | General |
-| French (France) | `fr-FR` | Female | `fr-FR-JosephineNeural` New | General |
-| French (France) | `fr-FR` | Female | `fr-FR-YvetteNeural` New | General |
-| French (France) | `fr-FR` | Female | `fr-FR-EloiseNeural` New | General, child voice |
-| French (France) | `fr-FR` | Male | `fr-FR-AlainNeural` New | General |
-| French (France) | `fr-FR` | Male | `fr-FR-ClaudeNeural` New | General |
-| French (France) | `fr-FR` | Male | `fr-FR-JeromeNeural` New | General |
-| French (France) | `fr-FR` | Male | `fr-FR-MauriceNeural` New | General |
-| French (France) | `fr-FR` | Male | `fr-FR-YvesNeural` New | General |
-| German (Germany) | `de-DE` | Female | `de-DE-AmalaNeural` New | General |
-| German (Germany) | `de-DE` | Female | `de-DE-ElkeNeural` New | General |
-| German (Germany) | `de-DE` | Female | `de-DE-KlarissaNeural` New | General |
-| German (Germany) | `de-DE` | Female | `de-DE-LouisaNeural` New | General |
-| German (Germany) | `de-DE` | Female | `de-DE-MajaNeural` New | General |
-| German (Germany) | `de-DE` | Female | `de-DE-TanjaNeural` New | General |
-| German (Germany) | `de-DE` | Female | `de-DE-GiselaNeural` New | General, child voice |
-| German (Germany) | `de-DE` | Male | `de-DE-BerndNeural` New | General |
-| German (Germany) | `de-DE` | Male | `de-DE-ChristophNeural` New | General |
-| German (Germany) | `de-DE` | Male | `de-DE-KasperNeural` New | General |
-| German (Germany) | `de-DE` | Male | `de-DE-KillianNeural` New | General |
-| German (Germany) | `de-DE` | Male | `de-DE-KlausNeural` New | General |
-| German (Germany) | `de-DE` | Male | `de-DE-RalfNeural` New | General |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunjianNeural` New | Optimized for broadcasting sports events, with 2 new styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunhaoNeural` New | Optimized for promoting a product or service, with 1 new style available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunfengNeural` New | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| English (United Kingdom) | `en-GB` | Female | `en-GB-AbbiNeural` | General |
+| English (United Kingdom) | `en-GB` | Female | `en-GB-BellaNeural` | General |
+| English (United Kingdom) | `en-GB` | Female | `en-GB-HollieNeural` | General |
+| English (United Kingdom) | `en-GB` | Female | `en-GB-OliviaNeural` | General |
+| English (United Kingdom) | `en-GB` | Female | `en-GB-MaisieNeural` | General, child voice |
+| English (United Kingdom) | `en-GB` | Male | `en-GB-AlfieNeural` | General |
+| English (United Kingdom) | `en-GB` | Male | `en-GB-ElliotNeural` | General |
+| English (United Kingdom) | `en-GB` | Male | `en-GB-EthanNeural` | General |
+| English (United Kingdom) | `en-GB` | Male | `en-GB-NoahNeural` | General |
+| English (United Kingdom) | `en-GB` | Male | `en-GB-OliverNeural` | General |
+| English (United Kingdom) | `en-GB` | Male | `en-GB-ThomasNeural` | General |
+| English (United States) | `en-US` | Male | `en-US-DavisNeural` New | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| English (United States) | `en-US` | Female | `en-US-JaneNeural` New | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| English (United States) | `en-US` | Male | `en-US-JasonNeural` New | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| English (United States) | `en-US` | Female | `en-US-NancyNeural` New | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| English (United States) | `en-US` | Male | `en-US-TonyNeural` New | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| French (France) | `fr-FR` | Female | `fr-FR-BrigitteNeural` | General |
+| French (France) | `fr-FR` | Female | `fr-FR-CelesteNeural` | General |
+| French (France) | `fr-FR` | Female | `fr-FR-CoralieNeural` | General |
+| French (France) | `fr-FR` | Female | `fr-FR-JacquelineNeural` | General |
+| French (France) | `fr-FR` | Female | `fr-FR-JosephineNeural` | General |
+| French (France) | `fr-FR` | Female | `fr-FR-YvetteNeural` | General |
+| French (France) | `fr-FR` | Female | `fr-FR-EloiseNeural` | General, child voice |
+| French (France) | `fr-FR` | Male | `fr-FR-AlainNeural` | General |
+| French (France) | `fr-FR` | Male | `fr-FR-ClaudeNeural` | General |
+| French (France) | `fr-FR` | Male | `fr-FR-JeromeNeural` | General |
+| French (France) | `fr-FR` | Male | `fr-FR-MauriceNeural` | General |
+| French (France) | `fr-FR` | Male | `fr-FR-YvesNeural` | General |
+| German (Germany) | `de-DE` | Female | `de-DE-AmalaNeural` | General |
+| German (Germany) | `de-DE` | Female | `de-DE-ElkeNeural` | General |
+| German (Germany) | `de-DE` | Female | `de-DE-KlarissaNeural` | General |
+| German (Germany) | `de-DE` | Female | `de-DE-LouisaNeural` | General |
+| German (Germany) | `de-DE` | Female | `de-DE-MajaNeural` | General |
+| German (Germany) | `de-DE` | Female | `de-DE-TanjaNeural` | General |
+| German (Germany) | `de-DE` | Female | `de-DE-GiselaNeural` | General, child voice |
+| German (Germany) | `de-DE` | Male | `de-DE-BerndNeural` | General |
+| German (Germany) | `de-DE` | Male | `de-DE-ChristophNeural` | General |
+| German (Germany) | `de-DE` | Male | `de-DE-KasperNeural` | General |
+| German (Germany) | `de-DE` | Male | `de-DE-KillianNeural` | General |
+| German (Germany) | `de-DE` | Male | `de-DE-KlausNeural` | General |
+| German (Germany) | `de-DE` | Male | `de-DE-RalfNeural` | General |
### Voice styles and roles
@@ -782,15 +785,15 @@ Use the following table to determine supported styles and roles for each neural
|Voice|Styles|Style degree|Roles|
|-----|-----|-----|-----|
|en-US-AriaNeural|`angry`, `chat`, `cheerful`, `customerservice`, `empathetic`, `excited`, `friendly`, `hopeful`, `narration-professional`, `newscast-casual`, `newscast-formal`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
-|en-US-DavisNeural|`angry`, `chat`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
+|en-US-DavisNeural Public preview|`angry`, `chat`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
|en-US-GuyNeural|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `newscast`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
-|en-US-JaneNeural|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
-|en-US-JasonNeural|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
+|en-US-JaneNeural Public preview|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
+|en-US-JasonNeural Public preview|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
|en-US-JennyNeural|`angry`, `assistant`, `chat`, `cheerful`, `customerservice`, `excited`, `friendly`, `hopeful`, `newscast`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
-|en-US-NancyNeural|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
+|en-US-NancyNeural Public preview|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
|en-US-SaraNeural|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
-|en-US-TonyNeural|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
-|fr-FR-DeniseNeural |`cheerful` Public preview, `sad`Public preview|||
+|en-US-TonyNeural Public preview|`angry`, `cheerful`, `excited`, `friendly`, `hopeful`, `sad`, `shouting`, `terrified`, `unfriendly`, `whispering`|||
+|fr-FR-DeniseNeural |`cheerful`, `sad`|||
|ja-JP-NanamiNeural|`chat`, `cheerful`, `customerservice`|||
|pt-BR-FranciscaNeural|`calm`|||
|zh-CN-XiaohanNeural|`affectionate`, `angry`, `calm`, `cheerful`, `disgruntled`, `embarrassed`, `fearful`, `gentle`, `sad`, `serious`|Supported||
@@ -802,6 +805,9 @@ Use the following table to determine supported styles and roles for each neural
|zh-CN-YunxiNeural|`angry`, `assistant`, `cheerful`, `depressed`, `disgruntled`, `embarrassed`, `fearful`, `narration-relaxed`, `sad`, `serious`|Supported|Supported|
|zh-CN-YunyangNeural|`customerservice`, `narration-professional`, `newscast-casual`|Supported||
|zh-CN-YunyeNeural|`angry`, `calm`, `cheerful`, `disgruntled`, `embarrassed`, `fearful`, `sad`, `serious`|Supported|Supported|
+|zh-CN-YunjianNeural Public preview|`narration-relaxed`, `sports-commentary` Public preview, `sports-commentary-excited` Public preview|Supported||
+|zh-CN-YunhaoNeural Public preview|`general`, `advertisement-upbeat` Public preview|Supported||
+|zh-CN-YunfengNeural Public preview|`calm`, `angry`, `disgruntled`, `cheerful`, `fearful`, `sad`, `serious`, `depressed`|Supported||
### Custom Neural Voice
diff --git a/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md b/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
index 60e4767b343a4..7a240f9da6714 100644
--- a/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
+++ b/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
@@ -111,7 +111,7 @@ In the following tables, the parameters without the **Adjustable** row aren't ad
3 For the free (F0) pricing tier, see also the monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). 4 See [additional explanations](#detailed-description-quota-adjustment-and-best-practices) and [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling).
-5 See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#text-to-speech-increase-concurrent-request-limit-for-custom-neural-voices).
+5 See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#text-to-speech-increase-concurrent-request-limit).
## Detailed description, quota adjustment, and best practices
@@ -204,9 +204,9 @@ Suppose that a Speech service resource has the concurrent request limit set to 3
Generally, it's a very good idea to test the workload and the workload patterns before going to production.
-### Text-to-speech: increase concurrent request limit for custom neural voices
+### Text-to-speech: increase concurrent request limit
-By default, the number of concurrent requests for Custom Neural Voice endpoints is limited to 10. For the standard pricing tier, you can increase this amount. Before submitting the request, ensure that you're familiar with the material discussed earlier in this article, such as the best practices to mitigate throttling.
+For the standard pricing tier, you can increase the concurrent request limit. Before submitting the request, ensure that you're familiar with the material discussed earlier in this article, such as the best practices to mitigate throttling.
Increasing the limit of concurrent requests doesn't directly affect your costs. Speech service uses a payment model that requires that you pay only for what you use. The limit defines how high the service can scale before it starts throttling your requests.
diff --git a/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md b/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
index c8d840fe46b0f..e1c78566ff5ab 100644
--- a/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
+++ b/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
@@ -158,6 +158,7 @@ The following table has descriptions of each supported style.
|Style|Description|
|-----------|-------------|
+|`style="advertisement-upbeat"`|Expresses an excited and high-energy tone for promoting a product or service.|
|`style="affectionate"`|Expresses a warm and affectionate tone, with higher pitch and vocal energy. The speaker is in a state of attracting the attention of the listener. The personality of the speaker is often endearing in nature.|
|`style="angry"`|Expresses an angry and annoyed tone.|
|`style="assistant"`|Expresses a warm and relaxed tone for digital assistants.|
@@ -184,6 +185,8 @@ The following table has descriptions of each supported style.
|`style="sad"`|Expresses a sorrowful tone.|
|`style="serious"`|Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence.|
|`style="shouting"`|Speaks like from a far distant or outside and to make self be clearly heard|
+|`style="sports-commentary"`|Expresses a relaxed and interesting tone for broadcasting a sports event.|
+|`style="sports-commentary-excited"`|Expresses an intensive and energetic tone for broadcasting exciting moments in a sports event.|
|`style="whispering"`|Speaks very softly and make a quiet and gentle sound|
|`style="terrified"`|Expresses a very scared tone, with faster pace and a shakier voice. It sounds like the speaker is in an unsteady and frantic status.|
|`style="unfriendly"`|Expresses a cold and indifferent tone.|
diff --git a/articles/cognitive-services/Translator/document-translation/overview.md b/articles/cognitive-services/Translator/document-translation/overview.md
index c9846b633341f..18df3de8e1479 100644
--- a/articles/cognitive-services/Translator/document-translation/overview.md
+++ b/articles/cognitive-services/Translator/document-translation/overview.md
@@ -76,6 +76,16 @@ The following document file types are supported by Document Translation:
|Tab Separated Values/TAB|tsv/tab| A tab-delimited raw-data file used by spreadsheet programs.|
|Text|txt| An unformatted text document.|
+### Legacy file types
+
+Source file types are preserved during document translation, with the following exceptions:
+
+| Source file extension | Translated file extension|
+| --- | --- |
+| .doc, .odt, .rtf | .docx |
+| .xls, .ods | .xlsx |
+| .ppt, .odp | .pptx |
+
## Supported glossary formats
The following glossary file types are supported by Document Translation:
diff --git a/articles/cognitive-services/language-service/conversational-language-understanding/faq.md b/articles/cognitive-services/language-service/conversational-language-understanding/faq.md
index c4fdbe54b3185..9d35f9783ad79 100644
--- a/articles/cognitive-services/language-service/conversational-language-understanding/faq.md
+++ b/articles/cognitive-services/language-service/conversational-language-understanding/faq.md
@@ -8,7 +8,7 @@ manager: nitinme
ms.service: cognitive-services
ms.subservice: language-service
ms.topic: quickstart
-ms.date: 05/23/2022
+ms.date: 05/31/2022
ms.author: aahi
ms.custom: ignite-fall-2021, mode-other
---
@@ -74,6 +74,10 @@ Yes, you can use [orchestration workflow](../orchestration-workflow/overview.md)
Add any out of scope utterances to the [none intent](./concepts/none-intent.md).
+## How do I control the none intent?
+
+You can control the none intent threshold from the UI through the project settings, by changing the none intent threshold value. The value can be between 0.0 and 1.0. You can also change this threshold from the APIs by changing the *confidenceThreshold* value in the settings object. Learn more about the [none intent](./concepts/none-intent.md#none-score-threshold).
+
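+For example, a minimal settings payload for the authoring APIs might look like the following sketch (the surrounding request shape is omitted here; see the none intent documentation linked above):
+
+```json
+{
+  "settings": {
+    "confidenceThreshold": 0.7
+  }
+}
+```
+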
## Is there any SDK support?
Yes, only for predictions, and samples are available for [Python](https://aka.ms/sdk-samples-conversation-python) and [C#](https://aka.ms/sdk-sample-conversation-dot-net). There is currently no authoring support for the SDK.
diff --git a/articles/cognitive-services/language-service/question-answering/tutorials/adding-synonyms.md b/articles/cognitive-services/language-service/question-answering/tutorials/adding-synonyms.md
index d7188c61b84ab..54f858a38d211 100644
--- a/articles/cognitive-services/language-service/question-answering/tutorials/adding-synonyms.md
+++ b/articles/cognitive-services/language-service/question-answering/tutorials/adding-synonyms.md
@@ -79,6 +79,8 @@ As you can see, when `troubleshoot` was not added as a synonym, we got a low con
## Notes
* Synonyms can be added in any order. The ordering is not considered in any computational logic.
+* Synonyms can be added only to a project that has at least one question and answer pair.
* In case of overlapping synonym words between 2 sets of alterations, it may have unexpected results and it is not recommended to use overlapping sets.
* Special characters are not allowed for synonyms. For hyphenated words like "COVID-19", they are treated the same as "COVID 19", and "space" can be used as a term separator. Following is the list of special characters **not allowed**:
diff --git a/articles/cognitive-services/language-service/summarization/includes/quickstarts/rest-api.md b/articles/cognitive-services/language-service/summarization/includes/quickstarts/rest-api.md
index d7a50e9f4ca55..887fc1e3b775a 100644
--- a/articles/cognitive-services/language-service/summarization/includes/quickstarts/rest-api.md
+++ b/articles/cognitive-services/language-service/summarization/includes/quickstarts/rest-api.md
@@ -273,10 +273,10 @@ curl -X GET https://your-language-endpoint-here/language/analyze-conversation
```json
{
- "jobId": "28261846-59bc-435a-a73a-f47c2feb245e",
- "lastUpdatedDateTime": "2022-05-11T23:16:48Z",
- "createdDateTime": "2022-05-11T23:16:44Z",
- "expirationDateTime": "2022-05-12T23:16:44Z",
+ "jobId": "738120e1-7987-4d19-af0c-89d277762a2f",
+ "lastUpdatedDateTime": "2022-05-31T16:52:59Z",
+ "createdDateTime": "2022-05-31T16:52:51Z",
+ "expirationDateTime": "2022-06-01T16:52:51Z",
"status": "succeeded",
"errors": [],
"displayName": "Analyze conversations from 123",
@@ -289,7 +289,7 @@ curl -X GET https://your-language-endpoint-here/language/analyze-conversation
{
"kind": "conversationalSummarizationResults",
"taskName": "analyze 1",
- "lastUpdateDateTime": "2022-05-11T23:16:48.9553011Z",
+ "lastUpdateDateTime": "2022-05-31T16:52:59.85913Z",
"status": "succeeded",
"results": {
"conversations": [
@@ -298,11 +298,11 @@ curl -X GET https://your-language-endpoint-here/language/analyze-conversation
"summaries": [
{
"aspect": "issue",
- "text": "Customer wanted to set up wifi connection for Smart Brew 300 coffee machine, but it didn't work"
+ "text": "Customer tried to set up wifi connection for Smart Brew 300 machine, but it didn't work"
},
{
"aspect": "resolution",
- "text": "Asked customer if the power light is slowly blinking | Checked the Contoso coffee app. It had no prompt"
+ "text": "Asked customer to try the following steps | Asked customer for the power light | Checked if the app is prompting to connect to the machine | Transferred the call to a tech support"
}
],
"warnings": []
diff --git a/articles/cognitive-services/language-service/toc.yml b/articles/cognitive-services/language-service/toc.yml
index 0797e5c22c182..428487db851c2 100644
--- a/articles/cognitive-services/language-service/toc.yml
+++ b/articles/cognitive-services/language-service/toc.yml
@@ -87,14 +87,6 @@ items:
- name: Accepted data formats
href: custom-text-classification/concepts/data-formats.md
displayName: Data representation
- - name: Enterprise readiness
- items:
- - name: Virtual networks
- href: ../cognitive-services-virtual-networks.md?context=/azure/cognitive-services/language-service/context/context
- - name: Cognitive Services security
- href: ../cognitive-services-security.md?context=/azure/cognitive-services/language-service/context/context
- - name: Encryption of data at rest
- href: concepts/encryption-data-at-rest.md
- name: Tutorials
items:
- name: Enrich a Cognitive Search index
@@ -178,14 +170,6 @@ items:
href: custom-named-entity-recognition/concepts/evaluation-metrics.md
- name: Accepted data formats
href: custom-named-entity-recognition/concepts/data-formats.md
- - name: Enterprise readiness
- items:
- - name: Virtual networks
- href: ../cognitive-services-virtual-networks.md?context=/azure/cognitive-services/language-service/context/context
- - name: Cognitive Services security
- href: ../cognitive-services-security.md?context=/azure/cognitive-services/language-service/context/context
- - name: Encryption of data at rest
- href: concepts/encryption-data-at-rest.md
- name: Reference
items:
- name: Glossary
@@ -248,14 +232,6 @@ items:
href: conversational-language-understanding/concepts/data-formats.md
- name: None intent
href: conversational-language-understanding/concepts/none-intent.md
- - name: Enterprise readiness
- items:
- - name: Virtual networks
- href: ../cognitive-services-virtual-networks.md?context=/azure/cognitive-services/language-service/context/context
- - name: Cognitive Services security
- href: ../cognitive-services-security.md?context=/azure/cognitive-services/language-service/context/context
- - name: Encryption of data at rest
- href: concepts/encryption-data-at-rest.md
- name: Tutorials
items:
- name: Use Bot Framework
@@ -564,14 +540,6 @@ items:
href: orchestration-workflow/concepts/none-intent.md
- name: Data formats
href: orchestration-workflow/concepts/data-formats.md
- - name: Enterprise readiness
- items:
- - name: Virtual networks
- href: ../cognitive-services-virtual-networks.md?context=/azure/cognitive-services/language-service/context/context
- - name: Cognitive Services security
- href: ../cognitive-services-security.md?context=/azure/cognitive-services/language-service/context/context
- - name: Encryption of data at rest
- href: concepts/encryption-data-at-rest.md
- name: Tutorials
items:
- name: Connect conversational language understanding and custom question answering
@@ -952,18 +920,24 @@ items:
href: concepts/model-lifecycle.md
- name: Send requests asynchronously
href: concepts/use-asynchronously.md
+ - name: Enterprise readiness
+ items:
+ - name: Virtual networks
+ href: ../cognitive-services-virtual-networks.md?context=/azure/cognitive-services/language-service/context/context
+ - name: Cognitive Services security
+ href: ../cognitive-services-security.md?context=/azure/cognitive-services/language-service/context/context
+ - name: Encryption of data at rest
+ href: concepts/encryption-data-at-rest.md
+ - name: Region support
+ href: https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services
+ - name: Compliance and certification
+ href: https://azure.microsoft.com/support/legal/cognitive-services-compliance-and-privacy/
- name: Tutorials
items:
- name: Use Cognitive Services in canvas apps
href: /powerapps/maker/canvas-apps/cognitive-services-api?context=/azure/cognitive-services/language-service/context/context
- name: Use Azure Kubernetes Service
href: tutorials/use-kubernetes-service.md
-- name: Enterprise readiness
- items:
- - name: Region support
- href: https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services
- - name: Compliance and certification
- href: https://azure.microsoft.com/support/legal/cognitive-services-compliance-and-privacy/
- name: Resources
items:
- name: Support and help options
diff --git a/articles/confidential-computing/confidential-containers.md b/articles/confidential-computing/confidential-containers.md
index 2f8cc1933ee51..bac8d3c9d347b 100644
--- a/articles/confidential-computing/confidential-containers.md
+++ b/articles/confidential-computing/confidential-containers.md
@@ -37,7 +37,7 @@ You can enable confidential containers in Azure Partners and Open Source Softwar
### Fortanix
-[Fortanix](https://www.fortanix.com/) has portal and Command Line Interface (CLI) experiences to convert their containerized applications to SGX-capable confidential containers. You don't need to modify or recompile the application. Fortanix provides the flexibility to run and manage a broad set of applications. You can use existing applications, new enclave-native applications, and pre-packaged applications. Start with Fortanix's [Enclave Manager](https://em.fortanix.com/) UI or [REST APIs](https://www.fortanix.com/api/em/). Create confidential containers using the Fortanix's [quickstart guide for AKS](https://hubs.li/Q017JnNt0).
+[Fortanix](https://www.fortanix.com/) has portal and Command Line Interface (CLI) experiences to convert their containerized applications to SGX-capable confidential containers. You don't need to modify or recompile the application. Fortanix provides the flexibility to run and manage a broad set of applications. You can use existing applications, new enclave-native applications, and pre-packaged applications. Start with Fortanix's [Enclave Manager](https://em.fortanix.com/) UI or [REST APIs](https://www.fortanix.com/api/). Create confidential containers using the Fortanix's [quickstart guide for AKS](https://hubs.li/Q017JnNt0).
![Diagram of Fortanix deployment process, showing steps to move applications to confidential containers and deploy.](./media/confidential-containers/fortanix-confidential-containers-flow.png)
diff --git a/articles/connectors/connectors-create-api-informix.md b/articles/connectors/connectors-create-api-informix.md
index 6b4f6721176c2..e04a8dbc5c55b 100644
--- a/articles/connectors/connectors-create-api-informix.md
+++ b/articles/connectors/connectors-create-api-informix.md
@@ -3,8 +3,8 @@ title: Connect to IBM Informix database
description: Automate tasks and workflows that manage resources stored in IBM Informix by using Azure Logic Apps
services: logic-apps
ms.suite: integration
-author: ChristopherHouser
-ms.author: daberry
+author: mamccrea
+ms.author: mamccrea
ms.reviewer: estfan, azla
ms.topic: how-to
ms.date: 01/07/2020
diff --git a/articles/connectors/connectors-native-http.md b/articles/connectors/connectors-native-http.md
index 441739afafdf4..a65dfbda347a5 100644
--- a/articles/connectors/connectors-native-http.md
+++ b/articles/connectors/connectors-native-http.md
@@ -5,7 +5,7 @@ services: logic-apps
ms.suite: integration
ms.reviewer: estfan, azla
ms.topic: how-to
-ms.date: 09/13/2021
+ms.date: 05/31/2022
tags: connectors
---
@@ -97,7 +97,7 @@ This built-in action makes an HTTP call to the specified URL for an endpoint and
## Trigger and action outputs
-Here is more information about the outputs from an HTTP trigger or action, which returns this information:
+Here's more information about the outputs from an HTTP trigger or action:
| Property | Type | Description |
|----------|------|-------------|
@@ -123,7 +123,7 @@ Here is more information about the outputs from an HTTP trigger or action, which
If you have a **Logic App (Standard)** resource in single-tenant Azure Logic Apps, and you want to use an HTTP operation with any of the following authentication types, make sure to complete the extra setup steps for the corresponding authentication type. Otherwise, the call fails.
-* [TLS/SSL certificate](#tls-ssl-certificate-authentication): Add the app setting, `WEBSITE_LOAD_ROOT_CERTIFICATES`, and provide the thumbprint for your thumbprint for your TLS/SSL certificate.
+* [TLS/SSL certificate](#tls-ssl-certificate-authentication): Add the app setting, `WEBSITE_LOAD_ROOT_CERTIFICATES`, and set the value to the thumbprint for your TLS/SSL certificate.
* [Client certificate or Azure Active Directory Open Authentication (Azure AD OAuth) with the "Certificate" credential type](#client-certificate-authentication): Add the app setting, `WEBSITE_LOAD_USER_PROFILE`, and set the value to `1`.
@@ -215,7 +215,7 @@ For example, suppose you have a logic app that sends an HTTP POST request for an
![Multipart form data](./media/connectors-native-http/http-action-multipart.png)
-Here is the same example that shows the HTTP action's JSON definition in the underlying workflow definition:
+Here's the same example that shows the HTTP action's JSON definition in the underlying workflow definition:
```json
"HTTP_action": {
@@ -308,7 +308,7 @@ HTTP requests have a [timeout limit](../logic-apps/logic-apps-limits-and-config.
To specify the number of seconds between retry attempts, you can add the `Retry-After` header to the HTTP action response. For example, if the target endpoint returns the `429 - Too many requests` status code, you can specify a longer interval between retries. The `Retry-After` header also works with the `202 - Accepted` status code.
-Here is the same example that shows the HTTP action response that contains `Retry-After`:
+Here's the same example that shows the HTTP action response that contains `Retry-After`:
```json
{
@@ -319,6 +319,9 @@ Here is the same example that shows the HTTP action response that contains `Retr
}
```
+## Pagination support
+
+Sometimes, the target service responds by returning the results one page at a time. If the response specifies the next page with the **nextLink** or **@odata.nextLink** property, you can turn on the **Pagination** setting on the HTTP action. This setting causes the HTTP action to automatically follow these links and get the next page. However, if the response specifies the next page with any other property, you might have to add a loop to your workflow that follows that property and manually gets each page until the property is null.
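+
+If you build the loop yourself, a rough sketch of its shape in the underlying workflow definition might look like the following example. The `nextPage` property, URL, and action names are hypothetical placeholders, not connector-defined values:
+
+```json
+"Initialize_nextPage": {
+   "type": "InitializeVariable",
+   "inputs": {
+      "variables": [
+         {
+            "name": "nextPage",
+            "type": "string",
+            "value": "https://contoso.example/api/items?page=1"
+         }
+      ]
+   },
+   "runAfter": {}
+},
+"Until_last_page": {
+   "type": "Until",
+   "expression": "@empty(variables('nextPage'))",
+   "limit": {
+      "count": 60,
+      "timeout": "PT1H"
+   },
+   "actions": {
+      "Get_page": {
+         "type": "Http",
+         "inputs": {
+            "method": "GET",
+            "uri": "@variables('nextPage')"
+         },
+         "runAfter": {}
+      },
+      "Set_nextPage": {
+         "type": "SetVariable",
+         "inputs": {
+            "name": "nextPage",
+            "value": "@body('Get_page')?['nextPage']"
+         },
+         "runAfter": {
+            "Get_page": [ "Succeeded" ]
+         }
+      }
+   },
+   "runAfter": {
+      "Initialize_nextPage": [ "Succeeded" ]
+   }
+}
+```
+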
## Disable checking location headers
diff --git a/articles/container-apps/compare-options.md b/articles/container-apps/compare-options.md
index 46ef674cf5168..ae648de090c80 100644
--- a/articles/container-apps/compare-options.md
+++ b/articles/container-apps/compare-options.md
@@ -2,11 +2,11 @@
title: 'Comparing Container Apps with other Azure container options'
description: Understand when to use Azure Container Apps and how it compares to other container options including Azure Container Instances, Azure App Service, Azure Functions, and Azure Kubernetes Service.
services: container-apps
-author: jeffhollan
+author: craigshoemaker
ms.service: container-apps
ms.topic: quickstart
ms.date: 11/03/2021
-ms.author: jehollan
+ms.author: cshoe
ms.custom: ignite-fall-2021, mode-other, event-tier1-build-2022
---
diff --git a/articles/container-apps/revisions-manage.md b/articles/container-apps/revisions-manage.md
index 7a51bcbe97eac..58d7456d70cea 100644
--- a/articles/container-apps/revisions-manage.md
+++ b/articles/container-apps/revisions-manage.md
@@ -120,7 +120,7 @@ As you interact with this example, replace the placeholders surrounded by `<>` w
## Deactivate
-Deactivate revisions that are no longer in use with `az container app revision deactivate`. Deactivation stops all running replicas of a revision.
+Deactivate revisions that are no longer in use with `az containerapp revision deactivate`. Deactivation stops all running replicas of a revision.
# [Bash](#tab/bash)
diff --git a/articles/container-instances/TOC.yml b/articles/container-instances/TOC.yml
index 29656eba1639a..41cd904c87e54 100644
--- a/articles/container-instances/TOC.yml
+++ b/articles/container-instances/TOC.yml
@@ -113,6 +113,8 @@
href: container-instances-egress-ip-address.md
- name: Configure container group egress with NAT gateway
href: container-instances-nat-gateway.md
+ - name: Configure custom DNS settings for container group
+ href: container-instances-custom-dns.md
- name: Mount data volumes
items:
- name: Azure file share
diff --git a/articles/container-instances/container-instances-custom-dns.md b/articles/container-instances/container-instances-custom-dns.md
new file mode 100644
index 0000000000000..d58b52085b02a
--- /dev/null
+++ b/articles/container-instances/container-instances-custom-dns.md
@@ -0,0 +1,229 @@
+---
+title: Configure custom DNS settings for container group in Azure Container Instances
+description: Configure a public or private DNS configuration for a container group
+author: tomvcassidy
+ms.topic: how-to
+ms.service: container-instances
+services: container-instances
+ms.author: tomcassidy
+ms.date: 05/25/2022
+---
+
+# Deploy a container group with custom DNS settings
+
+You can deploy container groups into an [Azure virtual network](../virtual-network/virtual-networks-overview.md) using the `az container create` command in the Azure CLI. You can also provide advanced configuration settings to the `az container create` command using a YAML configuration file.
+
+This article demonstrates how to deploy a container group with custom DNS settings using a YAML configuration file.
+
+For more information on deploying container groups to a virtual network, see the [Deploy in a virtual network article](container-instances-vnet.md).
+
+> [!IMPORTANT]
+> Previously, the process of deploying container groups on virtual networks used [network profiles](/azure/container-instances/container-instances-virtual-network-concepts#network-profile) for configuration. However, network profiles have been retired as of the `2021-07-01` API version. We recommend you use the latest API version, which relies on [subnet IDs](/azure/virtual-network/subnet-delegation-overview) instead.
+
+## Prerequisites
+
+* An **active Azure subscription**. If you don't have an active Azure subscription, create a [free account](https://azure.microsoft.com/free) before you begin.
+
+* **Azure CLI**. The command-line examples in this article use the [Azure CLI](/cli/azure/) and are formatted for the Bash shell. You can [install the Azure CLI](/cli/azure/install-azure-cli) locally or use the [Azure Cloud Shell][cloud-shell-bash].
+
+* A **resource group** to manage all the resources you use in this how-to guide. We use the example resource group name **ACIResourceGroup** throughout this article.
+
+ ```azurecli-interactive
+ az group create --name ACIResourceGroup --location westus
+ ```
+
+## Limitations
+
+For networking scenarios and limitations, see [Virtual network scenarios and resources for Azure Container Instances](container-instances-virtual-network-concepts.md).
+
+> [!IMPORTANT]
+> Container group deployment to a virtual network is available for Linux containers in most regions where Azure Container Instances is available. For details, see [Regions and resource availability](container-instances-region-availability.md).
+
+Examples in this article are formatted for the Bash shell. For PowerShell or Command Prompt, adjust the line continuation characters accordingly.
+
+## Create your virtual network
+
+To deploy a container group with a custom DNS configuration, you'll need a virtual network. The virtual network requires a subnet that's delegated to Azure Container Instances, plus a linked private DNS zone for testing name resolution.
+
+This guide uses a virtual network named `aci-vnet`, a subnet named `aci-subnet`, and a private DNS zone named `private.contoso.com`. We use **Azure Private DNS Zones**, which you can learn about in the [Private DNS Overview](../dns/private-dns-overview.md).
+
+If you have an existing virtual network that meets these criteria, you can skip to [Deploy your container group](#deploy-your-container-group).
+
+> [!TIP]
+> You can modify the following commands with your own information as needed.
+
+1. Create the virtual network using the [az network vnet create][az-network-vnet-create] command. Enter address prefixes in Classless Inter-Domain Routing (CIDR) format (for example: `10.0.0.0/16`).
+
+ ```azurecli
+ az network vnet create \
+ --name aci-vnet \
+ --resource-group ACIResourceGroup \
+ --location westus \
+ --address-prefix 10.0.0.0/16
+ ```
+
+1. Create the subnet using the [az network vnet subnet create][az-network-vnet-subnet-create] command. The following command creates a subnet in your virtual network with a delegation that permits it to create container groups. For more information about working with subnets, see the [Add, change, or delete a virtual network subnet](../virtual-network/virtual-network-manage-subnet.md). For more information about subnet delegation, see the [Virtual Network Scenarios and Resources article section on delegated subnets](container-instances-virtual-network-concepts.md#subnet-delegated).
+
+ ```azurecli
+ az network vnet subnet create \
+ --name aci-subnet \
+ --resource-group ACIResourceGroup \
+ --vnet-name aci-vnet \
+ --address-prefixes 10.0.0.0/24 \
+ --delegations Microsoft.ContainerInstance/containerGroups
+ ```
+
+1. Record the subnet ID key-value pair from the output of this command. You'll use this in your YAML configuration file later. It will take the form `"id"`: `"/subscriptions//resourceGroups/ACIResourceGroup/providers/Microsoft.Network/virtualNetworks/aci-vnet/subnets/aci-subnet"`.
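+
+   As a convenience, you can also capture the subnet ID in a shell variable, as shown in the following sketch (which assumes the resource names used in this guide):
+
+   ```azurecli
+   # Store the subnet resource ID for use in the YAML configuration file
+   subnetId=$(
+     az network vnet subnet show \
+       --resource-group ACIResourceGroup \
+       --vnet-name aci-vnet \
+       --name aci-subnet \
+       --query id \
+       --output tsv
+   )
+
+   echo $subnetId
+   ```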
+
+1. Create the private DNS Zone using the [az network private-dns zone create][az-network-private-dns-zone-create] command.
+
+ ```azurecli
+ az network private-dns zone create -g ACIResourceGroup -n private.contoso.com
+ ```
+
+1. Link the DNS zone to your virtual network using the [az network private-dns link vnet create][az-network-private-dns-link-vnet-create] command. The link is only required so you can test name resolution against the zone. The `-e` flag controls automatic hostname registration, which isn't needed here, so set it to `false`.
+
+ ```azurecli
+ az network private-dns link vnet create \
+ -g ACIResourceGroup \
+ -n aciDNSLink \
+ -z private.contoso.com \
+ -v aci-vnet \
+ -e false
+ ```
+
+Once you've completed the steps above, you should see an output with a final key-value pair that reads `"virtualNetworkLinkState"`: `"Completed"`.
+
+## Deploy your container group
+
+> [!NOTE]
+> Custom DNS settings are not currently available in the Azure portal for container group deployments. They must be provided with a YAML file, a Resource Manager template, the [REST API](/rest/api/container-instances/containergroups/createorupdate), or an [Azure SDK](https://azure.microsoft.com/downloads/).
+
+Copy the following YAML into a new file named *custom-dns-deploy-aci.yaml*. Edit the following configurations with your values:
+
+* `dnsConfig`: DNS settings for your containers within your container group.
+ * `nameServers`: A list of name servers to be used for DNS lookups.
+ * `searchDomains`: DNS suffixes to be appended for DNS lookups.
+* `ipAddress`: The private IP address settings for the container group.
+ * `ports`: The ports to open, if any.
+ * `protocol`: The protocol (TCP or UDP) for the opened port.
+* `subnetIDs`: Network settings for the subnet(s) in the virtual network.
+ * `id`: The full Resource Manager resource ID of the subnet, which you obtained earlier.
+
+> [!NOTE]
+> The DNS config fields aren't automatically populated at this time, so you must fill them out explicitly.
+
+```yaml
+apiVersion: '2021-07-01'
+location: westus
+name: pwsh-vnet-dns
+properties:
+ containers:
+ - name: pwsh-vnet-dns
+ properties:
+ command:
+ - /bin/bash
+ - -c
+ - echo hello; sleep 10000
+ environmentVariables: []
+ image: mcr.microsoft.com/powershell:latest
+ ports:
+ - port: 80
+ resources:
+ requests:
+ cpu: 1.0
+ memoryInGB: 2.0
+ dnsConfig:
+ nameServers:
+ - 10.0.0.10 # DNS Server 1
+ - 10.0.0.11 # DNS Server 2
+ searchDomains: contoso.com # DNS search suffix
+ ipAddress:
+ type: Private
+ ports:
+ - port: 80
+ subnetIds:
+ - id: /subscriptions//resourceGroups/ACIResourceGroup/providers/Microsoft.Network/virtualNetworks/aci-vnet/subnets/aci-subnet
+ osType: Linux
+tags: null
+type: Microsoft.ContainerInstance/containerGroups
+```
+
+Deploy the container group with the [az container create][az-container-create] command, specifying the YAML file name with the `--file` parameter:
+
+```azurecli
+az container create --resource-group ACIResourceGroup \
+ --file custom-dns-deploy-aci.yaml
+```
+
+Once the deployment is complete, run the [az container show][az-container-show] command to display its status. Sample output:
+
+```azurecli
+az container show --resource-group ACIResourceGroup --name pwsh-vnet-dns -o table
+```
+
+```console
+Name ResourceGroup Status Image IP:ports Network CPU/Memory OsType Location
+---------------- --------------- -------- ------------------------------------------ ----------- --------- --------------- -------- ----------
+pwsh-vnet-dns ACIResourceGroup Running mcr.microsoft.com/powershell 10.0.0.5:80 Private 1.0 core/2.0 gb Linux westus
+```
+
+After the status shows `Running`, execute the [az container exec][az-container-exec] command to obtain bash access within the container.
+
+```azurecli
+az container exec --resource-group ACIResourceGroup --name pwsh-vnet-dns --exec-command "/bin/bash"
+```
+
+Validate that DNS is working as expected from within your container. For example, read the `/etc/resolv.conf` file to ensure it's configured with the DNS settings provided in the YAML file.
+
+```console
+root@wk-caas-81d609b206c541589e11058a6d260b38-90b0aff460a737f346b3b0:/# cat /etc/resolv.conf
+
+nameserver 10.0.0.10
+nameserver 10.0.0.11
+search contoso.com
+```
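+
+To test resolution against the linked private zone itself, the zone needs at least one record. The following sketch assumes a hypothetical A record named `database` was added to `private.contoso.com`; the resolved address is illustrative:
+
+```console
+# 'database.private.contoso.com' is a hypothetical record created for this test
+root@wk-caas-81d609b206c541589e11058a6d260b38-90b0aff460a737f346b3b0:/# getent hosts database.private.contoso.com
+10.0.0.4        database.private.contoso.com
+```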
+
+## Clean up resources
+
+### Delete container instances
+
+When you're finished with the container instance you created, delete it with the [az container delete][az-container-delete] command:
+
+```azurecli
+az container delete --resource-group ACIResourceGroup --name pwsh-vnet-dns -y
+```
+
+### Delete network resources
+
+If you don't plan to use this virtual network again, you can delete it with the [az network vnet delete][az-network-vnet-delete] command:
+
+```azurecli
+az network vnet delete --resource-group ACIResourceGroup --name aci-vnet
+```
+
+### Delete resource group
+
+If you don't plan to use this resource group outside of this guide, you can delete it with the [az group delete][az-group-delete] command:
+
+```azurecli
+az group delete --name ACIResourceGroup
+```
+
+When prompted, enter `y` to confirm the deletion.
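+
+To skip the confirmation prompt, `az group delete` also accepts the `--yes` flag:
+
+```azurecli
+az group delete --name ACIResourceGroup --yes
+```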
+
+## Next steps
+
+See the Azure quickstart template [Create an Azure container group with VNet](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aci-vnet), to deploy a container group within a virtual network.
+
+
+[az-network-vnet-create]: /cli/azure/network/vnet#az-network-vnet-create
+[az-network-vnet-subnet-create]: /cli/azure/network/vnet/subnet#az-network-vnet-subnet-create
+[az-network-private-dns-zone-create]: /cli/azure/network/private-dns/zone#az-network-private-dns-zone-create
+[az-network-private-dns-link-vnet-create]: /cli/azure/network/private-dns/link/vnet#az-network-private-dns-link-vnet-create
+[az-container-create]: /cli/azure/container#az-container-create
+[az-container-show]: /cli/azure/container#az-container-show
+[az-container-exec]: /cli/azure/container#az-container-exec
+[az-container-delete]: /cli/azure/container#az-container-delete
+[az-network-vnet-delete]: /cli/azure/network/vnet#az-network-vnet-delete
+[az-group-delete]: /cli/azure/group#az-group-delete
+[cloud-shell-bash]: ../cloud-shell/overview.md
diff --git a/articles/container-registry/TOC.yml b/articles/container-registry/TOC.yml
index 7bd6913227245..e53ac54a69a6b 100644
--- a/articles/container-registry/TOC.yml
+++ b/articles/container-registry/TOC.yml
@@ -185,7 +185,7 @@
items:
- name: Scan with Microsoft Defender for Cloud
href: scan-images-defender.md
- - name: Scan with GitHub actions
+ - name: Scan with GitHub Actions
href: github-action-scan.md
- name: Troubleshoot
expanded: false
diff --git a/articles/container-registry/container-registry-intro.md b/articles/container-registry/container-registry-intro.md
index 8ce63b07dedda..37d55f631e8f4 100644
--- a/articles/container-registry/container-registry-intro.md
+++ b/articles/container-registry/container-registry-intro.md
@@ -1,15 +1,15 @@
---
title: Managed container registries
-description: Introduction to the Azure Container Registry service, providing cloud-based, managed, private Docker registries.
+description: Introduction to the Azure Container Registry service, providing cloud-based, managed registries.
author: stevelas
ms.topic: overview
ms.date: 02/10/2020
ms.author: stevelas
ms.custom: "seodec18, mvc"
---
-# Introduction to private Docker container registries in Azure
+# Introduction to container registries in Azure
-Azure Container Registry is a managed, private Docker registry service based on the open-source Docker Registry 2.0. Create and maintain Azure container registries to store and manage your private Docker container images and related artifacts.
+Azure Container Registry is a managed registry service based on the open-source Docker Registry 2.0. Create and maintain Azure container registries to store and manage your container images and related artifacts.
Use Azure container registries with your existing container development and deployment pipelines, or use Azure Container Registry Tasks to build container images in Azure. Build on demand, or fully automate builds with triggers such as source code commits and base image updates.
diff --git a/articles/cosmos-db/TOC.yml b/articles/cosmos-db/TOC.yml
index a2cb55da1af28..8039d244c9d43 100644
--- a/articles/cosmos-db/TOC.yml
+++ b/articles/cosmos-db/TOC.yml
@@ -390,6 +390,9 @@
- name: DateTimeAdd
displayName: DateTimeAdd, date time add, timestamp, date and time functions, datetime
href: sql/sql-query-datetimeadd.md
+ - name: DateTimeBin
+ displayName: DateTimeBin, date time bin, date and time functions, datetime
+ href: sql/sql-query-datetimebin.md
- name: DateTimeDiff
displayName: DateTimeDiff, date time diff, date and time functions, datetime
href: sql/sql-query-datetimediff.md
diff --git a/articles/cosmos-db/burst-capacity-faq.yml b/articles/cosmos-db/burst-capacity-faq.yml
index 77fc67c4ce5ef..d6916e4cb1747 100644
--- a/articles/cosmos-db/burst-capacity-faq.yml
+++ b/articles/cosmos-db/burst-capacity-faq.yml
@@ -18,12 +18,20 @@ summary: |
sections:
- name: General
questions:
+ - question: |
+ How much does it cost to use burst capacity?
+ answer: |
+ There's no charge to use burst capacity.
- question: |
How does burst capacity work with autoscale?
answer: |
Autoscale and burst capacity are compatible. Autoscale gives you a guaranteed instant 10 times scale range. Burst capacity allows you to take advantage of unused, idle capacity to handle temporary spikes, potentially beyond your autoscale max RU/s. For example, suppose we have an autoscale container with one physical partition that scales between 100 - 1000 RU/s. Without burst capacity, any requests that consume beyond 1000 RU/s would be rate limited. With burst capacity however, the partition can accumulate a maximum of 1000 RU/s of idle capacity each second. Burst capacity allows the partition to burst at a maximum rate of 3000 RU/s for a limited amount of time.
- The autoscale max RU/s per physical partition must be less than 3000 RU/s for burst capacity to be applicable.
+ Accumulation of burst is based on the maximum autoscale RU/s.
+
+ The autoscale maximum RU/s per physical partition must be less than 3000 RU/s for burst capacity to be applicable.
+
+ When burst capacity is used with autoscale, autoscale will use up to the maximum RU/s before using burst capacity. You may see autoscale scale up to max RU/s during spikes of traffic.
- question: |
What resources can use burst capacity?
answer: |
@@ -31,7 +39,7 @@ sections:
- question: |
How can I monitor burst capacity?
answer: |
- [Azure Monitor metrics](monitor-cosmos-db.md#analyzing-metrics), built-in to Azure Cosmos DB, can filter by the dimension **CapacityType** on the **TotalRequests** and **TotalRequestUnits** metrics. Requests served with burst capacity will have **CapacityType** equal to **BurstCapacity**.
+ [Azure Monitor metrics](monitor-cosmos-db.md#analyzing-metrics), built-in to Azure Cosmos DB, can filter by the dimension **CapacityType** on the **TotalRequests** and **TotalRequestUnits (preview)** metrics. Requests served with burst capacity will have **CapacityType** equal to **BurstCapacity**.
- question: |
How can I see which resources have less than 3000 RU/s per physical partition?
answer: |
diff --git a/articles/cosmos-db/concepts-limits.md b/articles/cosmos-db/concepts-limits.md
index fdb7976846339..9fd8e1b204789 100644
--- a/articles/cosmos-db/concepts-limits.md
+++ b/articles/cosmos-db/concepts-limits.md
@@ -5,7 +5,7 @@ author: markjbrown
ms.author: mjbrown
ms.service: cosmos-db
ms.topic: conceptual
-ms.date: 04/27/2022
+ms.date: 05/30/2022
---
# Azure Cosmos DB service quotas
@@ -93,9 +93,10 @@ Depending on the current RU/s provisioned and resource settings, each resource c
| Maximum RU/s per container | 5,000 |
| Maximum storage across all items per (logical) partition | 20 GB |
| Maximum number of distinct (logical) partition keys | Unlimited |
-| Maximum storage per container (SQL API, Mongo API, Table API, Gremlin API)| 50 GB |
-| Maximum storage per container (Cassandra API)| 30 GB |
+| Maximum storage per container (SQL API, Mongo API, Table API, Gremlin API)| 1 TB <sup>1</sup> |
+| Maximum storage per container (Cassandra API)| 1 TB <sup>1</sup> |
+
+<sup>1</sup> Serverless containers up to 1 TB are currently in preview with Azure Cosmos DB. To try the new feature, register the *"Azure Cosmos DB Serverless 1 TB Container Preview"* [preview feature in your Azure subscription](../azure-resource-manager/management/preview-features.md).
## Control plane operations
@@ -104,7 +105,7 @@ You can [provision and manage your Azure Cosmos account](how-to-manage-database-
| Resource | Limit |
| --- | --- |
| Maximum number of accounts per subscription | 50 by default. 1 |
-| Maximum number of regional failovers | 1/hour by default. 12 |
+| Maximum number of regional failovers | 10/hour by default. 12 |
1 You can increase these limits by creating an [Azure Support request](create-support-request-quota-increase.md).
@@ -163,7 +164,7 @@ An Azure Cosmos item can represent either a document in a collection, a row in a
| Maximum level of nesting for embedded objects / arrays | 128 |
| Maximum TTL value |2147483647 |
-1 Large document sizes up to 16 Mb are currently in preview with Azure Cosmos DB API for MongoDB only. Sign-up for the feature “Azure Cosmos DB API For MongoDB 16MB Document Support” from [Preview Features the Azure portal](./access-previews.md), to try the new feature.
+1 Large document sizes up to 16 MB are currently in preview with Azure Cosmos DB API for MongoDB only. To try the new feature, sign up for the “Azure Cosmos DB API For MongoDB 16 MB Document Support” feature from [Preview Features in the Azure portal](./access-previews.md).
There are no restrictions on the item payloads (like number of properties and nesting depth), except for the length restrictions on partition key and ID values, and the overall size restriction of 2 MB. You may have to configure indexing policy for containers with large or complex item structures to reduce RU consumption. See [Modeling items in Cosmos DB](how-to-model-partition-example.md) for a real-world example, and patterns to manage large items.
@@ -215,7 +216,7 @@ See the [Autoscale](provision-throughput-autoscale.md#autoscale-limits) article
| Current RU/s the system is scaled to | `0.1*Tmax <= T <= Tmax`, based on usage|
| Minimum billable RU/s per hour| `0.1 * Tmax` Billing is done on a per-hour basis, where you're billed for the highest RU/s the system scaled to in the hour, or `0.1*Tmax`, whichever is higher. |
| Minimum autoscale max RU/s for a container | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 100)` rounded to nearest 1000 RU/s |
-| Minimum autoscale max RU/s for a database | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 100, 1000 + (MAX(Container count - 25, 0) * 1000))`, rounded to nearest 1000 RU/s. Note if your database has more than 25 containers, the system increments the minimum autoscale max RU/s by 1000 RU/s per additional container. For example, if you have 30 containers, the lowest autoscale maximum RU/s you can set is 6000 RU/s (scales between 600 - 6000 RU/s).
+| Minimum autoscale max RU/s for a database | `MAX(1000, highest max RU/s ever provisioned / 10, current storage in GB * 100, 1000 + (MAX(Container count - 25, 0) * 1000))`, rounded to nearest 1000 RU/s. Note if your database has more than 25 containers, the system increments the minimum autoscale max RU/s by 1000 RU/s per extra container. For example, if you have 30 containers, the lowest autoscale maximum RU/s you can set is 6000 RU/s (scales between 600 - 6000 RU/s).
## SQL query limits
@@ -291,7 +292,7 @@ Get started with Azure Cosmos DB with one of our quickstarts:
* [Get started with Azure Cosmos DB Gremlin API](create-graph-dotnet.md)
* [Get started with Azure Cosmos DB Table API](table/create-table-dotnet.md)
* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
- * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
+ * If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
> [!div class="nextstepaction"]
diff --git a/articles/cosmos-db/local-emulator-release-notes.md b/articles/cosmos-db/local-emulator-release-notes.md
index f81423d90c08a..28dd85e7d83ce 100644
--- a/articles/cosmos-db/local-emulator-release-notes.md
+++ b/articles/cosmos-db/local-emulator-release-notes.md
@@ -22,6 +22,12 @@ This article shows the Azure Cosmos DB Emulator released versions and it details
## Release notes
+### 2.14.7 (May 9, 2022)
+
+ - This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of Azure Cosmos DB. In addition to this update, a couple of issues were addressed in this release:
+   * Updated Data Explorer to the latest content and fixed a broken link in the quickstart sample documentation.
+   * Added an option to enable the Mongo API version for the Linux Cosmos DB emulator by setting the environment variable "AZURE_COSMOS_EMULATOR_ENABLE_MONGODB_ENDPOINT" in the Docker container settings. Valid settings are "3.2", "3.6", "4.0", and "4.2".
+
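+ For example, here's a minimal sketch of a Docker command that enables the Mongo endpoint; the port mappings and image tag are typical values, not taken from the release notes:
+
+ ```console
+ docker run -d \
+   -p 8081:8081 -p 10250-10255:10250-10255 \
+   -e AZURE_COSMOS_EMULATOR_ENABLE_MONGODB_ENDPOINT=4.0 \
+   mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator
+ ```
+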
### 2.14.6 (March 7, 2022)
- This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of the Azure Cosmos DB. In addition to this update there are couple issues that were addressed in this release:
diff --git a/articles/cosmos-db/managed-identity-based-authentication.md b/articles/cosmos-db/managed-identity-based-authentication.md
index d5b368b0b4201..c32beba403b45 100644
--- a/articles/cosmos-db/managed-identity-based-authentication.md
+++ b/articles/cosmos-db/managed-identity-based-authentication.md
@@ -1,220 +1,317 @@
---
-title: How to use a system-assigned managed identity to access Azure Cosmos DB data
+title: Use system-assigned managed identities to access Azure Cosmos DB data
description: Learn how to configure an Azure Active Directory (Azure AD) system-assigned managed identity (managed service identity) to access keys from Azure Cosmos DB.
-author: j-patrick
+author: seesharprun
ms.service: cosmos-db
ms.subservice: cosmosdb-sql
ms.topic: how-to
-ms.date: 07/02/2021
-ms.author: justipat
-ms.reviewer: sngun
+ms.date: 06/01/2022
+ms.author: sidandrews
+ms.reviewer: justipat
ms.custom: devx-track-csharp, devx-track-azurecli, subject-rbac-steps
-
---
# Use system-assigned managed identities to access Azure Cosmos DB data
-[!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)]
-
-> [!TIP]
-> [Data plane role-based access control (RBAC)](how-to-setup-rbac.md) is now available on Azure Cosmos DB, providing a seamless way to authorize your requests with Azure Active Directory.
-
-In this article, you'll set up a *robust, key rotation agnostic* solution to access Azure Cosmos DB keys by using [managed identities](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md). The example in this article uses Azure Functions, but you can use any service that supports managed identities.
-
-You'll learn how to create a function app that can access Azure Cosmos DB data without needing to copy any Azure Cosmos DB keys. The function app will wake up every minute and record the current temperature of an aquarium fish tank. To learn how to set up a timer-triggered function app, see the [Create a function in Azure that is triggered by a timer](../azure-functions/functions-create-scheduled-function.md) article.
-
-To simplify the scenario, a [Time To Live](./time-to-live.md) setting is already configured to clean up older temperature documents.
-
-> [!IMPORTANT]
-> Because this approach fetches your account's primary key through the Azure Cosmos DB control plane, it will not work if [a read-only lock has been applied](../azure-resource-manager/management/lock-resources.md) to your account. In this situation, consider using the Azure Cosmos DB [data plane RBAC](how-to-setup-rbac.md) instead.
-
-## Assign a system-assigned managed identity to a function app
-
-In this step, you'll assign a system-assigned managed identity to your function app.
+[!INCLUDE [appliesto-sql-api](includes/appliesto-sql-api.md)]
-1. In the [Azure portal](https://portal.azure.com/), open the **Azure Function** pane and go to your function app.
+In this article, you'll set up a *robust, key-rotation-agnostic* solution to access Azure Cosmos DB data by using [managed identities](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md) and [data plane role-based access control](how-to-setup-rbac.md). The example in this article uses Azure Functions, but you can use any service that supports managed identities.
-1. Open the **Platform features** > **Identity** tab:
+You'll learn how to create a function app that can access Azure Cosmos DB data without needing to copy any Azure Cosmos DB keys. The function app will trigger when an HTTP request is made and then list all of the existing databases.
- :::image type="content" source="./media/managed-identity-based-authentication/identity-tab-selection.png" alt-text="Screenshot showing Platform features and Identity options for the function app.":::
+## Prerequisites
-1. On the **Identity** tab, turn **On** the system identity **Status** and select **Save**. The **Identity** pane should look as follows:
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An existing Azure Cosmos DB SQL API account. [Create an Azure Cosmos DB SQL API account](sql/create-cosmosdb-resources-portal.md)
+- An existing Azure Functions function app. [Create your first function in the Azure portal](../azure-functions/functions-create-function-app-portal.md)
+  - A system-assigned managed identity for the function app. [Add a system-assigned identity](../app-service/overview-managed-identity.md?tabs=cli#add-a-system-assigned-identity)
+- [Azure Functions Core Tools](../azure-functions/functions-run-local.md)
+- To perform the steps in this article, install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in to Azure](/cli/azure/authenticate-azure-cli).
- :::image type="content" source="./media/managed-identity-based-authentication/identity-tab-system-managed-on.png" alt-text="Screenshot showing system identity Status set to On.":::
+## Prerequisite check
-## Grant access to your Azure Cosmos account
-
-In this step, you'll assign a role to the function app's system-assigned managed identity. Azure Cosmos DB has multiple built-in roles that you can assign to the managed identity. For this solution, you'll use the following two roles:
-
-|Built-in role |Description |
-|---------|---------|
-|[DocumentDB Account Contributor](../role-based-access-control/built-in-roles.md#documentdb-account-contributor)|Can manage Azure Cosmos DB accounts. Allows retrieval of read/write keys. |
-|[Cosmos DB Account Reader Role](../role-based-access-control/built-in-roles.md#cosmos-db-account-reader-role)|Can read Azure Cosmos DB account data. Allows retrieval of read keys. |
-
-> [!TIP]
-> When you assign roles, assign only the needed access. If your service requires only reading data, then assign the **Cosmos DB Account Reader** role to the managed identity. For more information about the importance of least privilege access, see the [Lower exposure of privileged accounts](../security/fundamentals/identity-management-best-practices.md#lower-exposure-of-privileged-accounts) article.
-
-In this scenario, the function app will read the temperature of the aquarium, then write back that data to a container in Azure Cosmos DB. Because the function app must write the data, you'll need to assign the **DocumentDB Account Contributor** role.
+1. In a terminal or command window, store the names of your Azure Functions function app, Azure Cosmos DB account, and resource group as shell variables named ``functionName``, ``cosmosName``, and ``resourceGroupName``.
-### Assign the role using Azure portal
+ ```azurecli-interactive
+ # Variable for function app name
+ functionName="msdocs-function-app"
+
+ # Variable for Cosmos DB account name
+ cosmosName="msdocs-cosmos-app"
-1. Sign in to the Azure portal and go to your Azure Cosmos DB account.
+ # Variable for resource group name
+ resourceGroupName="msdocs-cosmos-functions-dotnet-identity"
+ ```
-1. Select **Access control (IAM)**.
+ > [!NOTE]
+ > These variables will be re-used in later steps. This example assumes your Azure Cosmos DB account name is ``msdocs-cosmos-app``, your function app name is ``msdocs-function-app`` and your resource group name is ``msdocs-cosmos-functions-dotnet-identity``.
-1. Select **Add** > **Add role assignment**.
+1. View the function app's properties using the [``az functionapp show``](/cli/azure/functionapp#az-functionapp-show) command.
- :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open.":::
+ ```azurecli-interactive
+ az functionapp show \
+ --resource-group $resourceGroupName \
+ --name $functionName
+ ```
-1. On the **Role** tab, select **DocumentDB Account Contributor**.
+1. View the properties of the system-assigned managed identity for your function app using [``az webapp identity show``](/cli/azure/webapp/identity#az-webapp-identity-show).
-1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
+ ```azurecli-interactive
+ az webapp identity show \
+ --resource-group $resourceGroupName \
+ --name $functionName
+ ```
-1. Select your Azure subscription.
+1. View the Cosmos DB account's properties using [``az cosmosdb show``](/cli/azure/cosmosdb#az-cosmosdb-show).
-1. Under **System-assigned managed identity**, select **Function App**, and then select **FishTankTemperatureService**.
+ ```azurecli-interactive
+ az cosmosdb show \
+ --resource-group $resourceGroupName \
+ --name $cosmosName
+ ```
-1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+## Create Cosmos DB SQL API databases
-### Assign the role using Azure CLI
+In this step, you'll create two databases.
-To assign the role by using Azure CLI, open the Azure Cloud Shell and run the following commands:
+1. In a terminal or command window, create a new ``products`` database using [``az cosmosdb sql database create``](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create).
-```azurecli-interactive
+ ```azurecli-interactive
+ az cosmosdb sql database create \
+ --resource-group $resourceGroupName \
+ --name products \
+ --account-name $cosmosName
+ ```
-scope=$(az cosmosdb show --name '' --resource-group '' --query id)
+1. Create a new ``customers`` database.
-principalId=$(az webapp identity show -n '' -g '' --query principalId)
+ ```azurecli-interactive
+ az cosmosdb sql database create \
+ --resource-group $resourceGroupName \
+ --name customers \
+ --account-name $cosmosName
+ ```
-az role assignment create --assignee $principalId --role "DocumentDB Account Contributor" --scope $scope
-```
+## Get Cosmos DB SQL API endpoint
-## Programmatically access the Azure Cosmos DB keys
+In this step, you'll query the document endpoint for the SQL API account.
-Now we have a function app that has a system-assigned managed identity with the **DocumentDB Account Contributor** role in the Azure Cosmos DB permissions. The following function app code will get the Azure Cosmos DB keys, create a CosmosClient object, get the temperature of the aquarium, and then save this to Azure Cosmos DB.
+1. Use ``az cosmosdb show`` with the **query** parameter set to ``documentEndpoint``. Record the result. You'll use this value in a later step.
-This sample uses the [List Keys API](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/database-accounts/list-keys) to access your Azure Cosmos DB account keys.
+ ```azurecli-interactive
+ az cosmosdb show \
+ --resource-group $resourceGroupName \
+ --name $cosmosName \
+ --query documentEndpoint
-> [!IMPORTANT]
-> If you want to [assign the Cosmos DB Account Reader](#grant-access-to-your-azure-cosmos-account) role, you'll need to use the [List Read Only Keys API](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/database-accounts/list-read-only-keys). This will populate just the read-only keys.
+ cosmosEndpoint=$(
+ az cosmosdb show \
+ --resource-group $resourceGroupName \
+ --name $cosmosName \
+ --query documentEndpoint \
+ --output tsv
+ )
+
+ echo $cosmosEndpoint
+ ```
-The List Keys API returns the `DatabaseAccountListKeysResult` object. This type isn't defined in the C# libraries. The following code shows the implementation of this class:
+ > [!NOTE]
+ > This variable will be re-used in a later step.
-```csharp
-namespace Monitor
-{
- public class DatabaseAccountListKeysResult
- {
- public string primaryMasterKey { get; set; }
- public string primaryReadonlyMasterKey { get; set; }
- public string secondaryMasterKey { get; set; }
- public string secondaryReadonlyMasterKey { get; set; }
- }
-}
-```
-
-The example also uses a simple document called "TemperatureRecord," which is defined as follows:
+## Grant access to your Azure Cosmos account
-```csharp
-using System;
+In this step, you'll assign a role to the function app's system-assigned managed identity. Azure Cosmos DB has multiple [built-in data plane roles](how-to-setup-rbac.md#built-in-role-definitions) that you can assign to the managed identity. For this solution, you'll create and assign a custom role that can only read account metadata.
-namespace Monitor
-{
- public class TemperatureRecord
+> [!TIP]
+> When you assign roles, assign only the needed access. If your service requires only reading data, then assign the **Cosmos DB Built-in Data Reader** role to the managed identity. For more information about the importance of least privilege access, see the [Lower exposure of privileged accounts](../security/fundamentals/identity-management-best-practices.md#lower-exposure-of-privileged-accounts) article.
+
+1. Use ``az cosmosdb show`` with the **query** parameter set to ``id``. Store the result in a shell variable named ``scope``.
+
+ ```azurecli-interactive
+ scope=$(
+ az cosmosdb show \
+ --resource-group $resourceGroupName \
+ --name $cosmosName \
+ --query id \
+ --output tsv
+ )
+
+ echo $scope
+ ```
+
+ > [!NOTE]
+ > This variable will be re-used in a later step.
+
+1. Use ``az webapp identity show`` with the **query** parameter set to ``principalId``. Store the result in a shell variable named ``principal``.
+
+ ```azurecli-interactive
+ principal=$(
+ az webapp identity show \
+ --resource-group $resourceGroupName \
+ --name $functionName \
+ --query principalId \
+ --output tsv
+ )
+
+ echo $principal
+ ```
+
+1. Create a new JSON file named *role-definition.json* with the configuration of the new custom role.
+
+ ```json
{
- public string id { get; set; } = Guid.NewGuid().ToString();
- public DateTime RecordTime { get; set; }
- public int Temperature { get; set; }
+ "RoleName": "Read Cosmos Metadata",
+ "Type": "CustomRole",
+ "AssignableScopes": ["/"],
+ "Permissions": [{
+ "DataActions": [
+ "Microsoft.DocumentDB/databaseAccounts/readMetadata"
+ ]
+ }]
}
-}
-```
+ ```
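+
+1. Create the role definition on your account with [``az cosmosdb sql role definition create``](/cli/azure/cosmosdb/sql/role/definition#az-cosmosdb-sql-role-definition-create). This sketch assumes the JSON above was saved as *role-definition.json*:
+
+    ```azurecli-interactive
+    az cosmosdb sql role definition create \
+        --resource-group $resourceGroupName \
+        --account-name $cosmosName \
+        --body @role-definition.json
+    ```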
-You'll use the [Microsoft.Azure.Services.AppAuthentication](https://www.nuget.org/packages/Microsoft.Azure.Services.AppAuthentication) library to get the system-assigned managed identity token. To learn other ways to get the token and find out more information about the `Microsoft.Azure.Service.AppAuthentication` library, see the [Service-to-service authentication](/dotnet/api/overview/azure/service-to-service-authentication) article.
+1. Use [``az cosmosdb sql role assignment create``](/cli/azure/cosmosdb/sql/role/assignment#az-cosmosdb-sql-role-assignment-create) to assign the custom ``Read Cosmos Metadata`` role to the system-assigned managed identity.
+ ```azurecli-interactive
+ az cosmosdb sql role assignment create \
+ --resource-group $resourceGroupName \
+ --account-name $cosmosName \
+ --role-definition-name "Read Cosmos Metadata" \
+ --principal-id $principal \
+ --scope $scope
+ ```
-```csharp
-using System;
-using System.Net.Http;
-using System.Net.Http.Headers;
-using System.Threading.Tasks;
-using Microsoft.Azure.Cosmos;
-using Microsoft.Azure.Services.AppAuthentication;
-using Microsoft.Azure.WebJobs;
-using Microsoft.Extensions.Logging;
+## Programmatically access Azure Cosmos DB data
-namespace Monitor
-{
- public static class FishTankTemperatureService
+We now have a function app that has a system-assigned managed identity with the custom **Read Cosmos Metadata** role. The following function app will query the Azure Cosmos DB account for a list of databases.
+
+1. Create a local function project with the ``--dotnet`` parameter in a folder named ``csmsfunc``. Then, change your shell's directory to the new folder.
+
+ ```azurecli-interactive
+ func init csmsfunc --dotnet
+
+ cd csmsfunc
+ ```
+
+1. Create a new function with the **template** parameter set to ``httptrigger`` and the **name** set to ``readdatabases``.
+
+ ```azurecli-interactive
+ func new --template httptrigger --name readdatabases
+ ```
+
+1. Add the [``Azure.Identity``](https://www.nuget.org/packages/Azure.Identity/) and [``Microsoft.Azure.Cosmos``](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/) NuGet packages to the .NET project. Build the project using [``dotnet build``](/dotnet/core/tools/dotnet-build).
+
+ ```azurecli-interactive
+ dotnet add package Azure.Identity
+
+ dotnet add package Microsoft.Azure.Cosmos
+
+ dotnet build
+ ```
+
+1. Open the function code in an integrated developer environment (IDE).
+
+ > [!TIP]
+ > If you are using the Azure CLI locally or in the Azure Cloud Shell, you can open Visual Studio Code.
+ >
+ > ```azurecli
+ > code .
+ > ```
+ >
+
+1. Replace the code in the **readdatabases.cs** file with this sample function implementation. Save the updated file.
+
+ ```csharp
+ using System;
+ using System.Collections.Generic;
+ using System.Threading.Tasks;
+ using Azure.Identity;
+ using Microsoft.AspNetCore.Mvc;
+ using Microsoft.Azure.Cosmos;
+ using Microsoft.Azure.WebJobs;
+ using Microsoft.Azure.WebJobs.Extensions.Http;
+ using Microsoft.AspNetCore.Http;
+ using Microsoft.Extensions.Logging;
+
+ namespace csmsfunc
{
- private static string subscriptionId =
- "";
- private static string resourceGroupName =
- "";
- private static string accountName =
- "";
- private static string cosmosDbEndpoint =
- "";
- private static string databaseName =
- "";
- private static string containerName =
- "";
-
- // HttpClient is intended to be instantiated once, rather than per-use.
- static readonly HttpClient httpClient = new HttpClient();
-
- [FunctionName("FishTankTemperatureService")]
- public static async Task Run([TimerTrigger("0 * * * * *")]TimerInfo myTimer, ILogger log)
+ public static class readdatabases
{
- log.LogInformation($"Starting temperature monitoring: {DateTime.Now}");
-
- // AzureServiceTokenProvider will help us to get the Service Managed token.
- var azureServiceTokenProvider = new AzureServiceTokenProvider();
-
- // Authenticate to the Azure Resource Manager to get the Service Managed token.
- string accessToken = await azureServiceTokenProvider.GetAccessTokenAsync("https://management.azure.com/");
-
- // Setup the List Keys API to get the Azure Cosmos DB keys.
- string endpoint = $"https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DocumentDB/databaseAccounts/{accountName}/listKeys?api-version=2019-12-12";
+ [FunctionName("readdatabases")]
+        public static async Task<IActionResult> Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req,
+ ILogger log)
+ {
+ log.LogTrace("Start function");
+
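+            // DefaultAzureCredential uses the function app's system-assigned managed identity in Azure, or your local developer credentials when running locally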
+ CosmosClient client = new CosmosClient(
+ accountEndpoint: Environment.GetEnvironmentVariable("COSMOS_ENDPOINT", EnvironmentVariableTarget.Process),
+ new DefaultAzureCredential()
+ );
+
+            using FeedIterator<DatabaseProperties> iterator = client.GetDatabaseQueryIterator<DatabaseProperties>();
+
+ List<(string name, string uri)> databases = new();
+ while(iterator.HasMoreResults)
+ {
+ foreach(DatabaseProperties database in await iterator.ReadNextAsync())
+ {
+ log.LogTrace($"[Database Found]\t{database.Id}");
+ databases.Add((database.Id, database.SelfLink));
+ }
+ }
+
+ return new OkObjectResult(databases);
+ }
+ }
+ }
+ ```
- // Add the access token to request headers.
- httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
+## (Optional) Run the function locally
- // Post to the endpoint to get the keys result.
- var result = await httpClient.PostAsync(endpoint, new StringContent(""));
+In a local environment, the [``DefaultAzureCredential``](/dotnet/api/azure.identity.defaultazurecredential) class will use various local credentials to determine the current identity. While running locally isn't required for the how-to, you can develop locally using your own identity or a service principal.
- // Get the result back as a DatabaseAccountListKeysResult.
- DatabaseAccountListKeysResult keys = await result.Content.ReadFromJsonAsync();
+1. In the **local.settings.json** file, add a new setting named ``COSMOS_ENDPOINT`` in the **Values** object. The value of the setting should be the document endpoint you recorded earlier in this how-to guide.
- log.LogInformation("Starting to create the client");
+ ```json
+ ...
+ "Values": {
+ ...
+ "COSMOS_ENDPOINT": "https://msdocs-cosmos-app.documents.azure.com:443/",
+ ...
+ }
+ ...
+ ```
- CosmosClient client = new CosmosClient(cosmosDbEndpoint, keys.primaryMasterKey);
+ > [!NOTE]
+ > This JSON object has been shortened for brevity. This JSON object also includes a sample value that assumes your account name is ``msdocs-cosmos-app``.
- log.LogInformation("Client created");
+1. Run the function app:
- var database = client.GetDatabase(databaseName);
- var container = database.GetContainer(containerName);
+ ```azurecli
+ func start
+ ```
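+
+1. With the host running, send a test request from a second terminal. This sketch assumes the Functions host is listening on its default port, 7071:
+
+    ```azurecli
+    curl http://localhost:7071/api/readdatabases
+    ```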
- log.LogInformation("Get the temperature.");
+## Deploy to Azure
- var tempRecord = new TemperatureRecord() { RecordTime = DateTime.UtcNow, Temperature = GetTemperature() };
+Once published, the ``DefaultAzureCredential`` class will use credentials from the environment or a managed identity. For this guide, the system-assigned managed identity will be used as a credential for the [``CosmosClient``](/dotnet/api/microsoft.azure.cosmos.cosmosclient) constructor.
- log.LogInformation("Store temperature");
+1. Set the ``COSMOS_ENDPOINT`` setting on the function app already deployed in Azure.
- await container.CreateItemAsync(tempRecord);
+ ```azurecli-interactive
+ az functionapp config appsettings set \
+ --resource-group $resourceGroupName \
+ --name $functionName \
+ --settings "COSMOS_ENDPOINT=$cosmosEndpoint"
+ ```
- log.LogInformation($"Ending temperature monitor: {DateTime.Now}");
- }
+1. Deploy your function app to Azure by reusing the ``functionName`` shell variable:
- private static int GetTemperature()
- {
- // Fake the temperature sensor for this demo.
- Random r = new Random(DateTime.UtcNow.Second);
- return r.Next(0, 120);
- }
- }
-}
-```
+ ```azurecli-interactive
+ func azure functionapp publish $functionName
+ ```
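+
+1. (Optional) Send a test request to the deployed function. This sketch assumes the default ``azurewebsites.net`` host name and the anonymous authorization level used in the sample code:
+
+    ```azurecli
+    curl "https://$functionName.azurewebsites.net/api/readdatabases"
+    ```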
-You are now ready to [deploy your function app](../azure-functions/create-first-function-vs-code-csharp.md).
+1. [Test your function in the Azure portal](../azure-functions/functions-create-function-app-portal.md#test-the-function).
## Next steps
diff --git a/articles/cosmos-db/media/managed-identity-based-authentication/identity-tab-selection.png b/articles/cosmos-db/media/managed-identity-based-authentication/identity-tab-selection.png
deleted file mode 100644
index 02a3ff2125dd0..0000000000000
Binary files a/articles/cosmos-db/media/managed-identity-based-authentication/identity-tab-selection.png and /dev/null differ
diff --git a/articles/cosmos-db/media/managed-identity-based-authentication/identity-tab-system-managed-on.png b/articles/cosmos-db/media/managed-identity-based-authentication/identity-tab-system-managed-on.png
deleted file mode 100644
index ddfc588ba4144..0000000000000
Binary files a/articles/cosmos-db/media/managed-identity-based-authentication/identity-tab-system-managed-on.png and /dev/null differ
diff --git a/articles/cosmos-db/merge.md b/articles/cosmos-db/merge.md
index 98c91f7dc519e..0ba4b7be43895 100644
--- a/articles/cosmos-db/merge.md
+++ b/articles/cosmos-db/merge.md
@@ -143,6 +143,6 @@ Support for these connectors is planned for the future.
## Next steps
-* Learn more about [using Azure CLI with Azure Cosmos DB.](/cli/azure/azure-cli-reference-for-cosmos-db.md)
+* Learn more about [using Azure CLI with Azure Cosmos DB.](/cli/azure/azure-cli-reference-for-cosmos-db)
* Learn more about [using Azure PowerShell with Azure Cosmos DB.](/powershell/module/az.cosmosdb/)
* Learn more about [partitioning in Azure Cosmos DB.](partitioning-overview.md)
diff --git a/articles/cosmos-db/mongodb/how-to-setup-rbac.md b/articles/cosmos-db/mongodb/how-to-setup-rbac.md
index 109abc7f75288..76feafb8652dd 100644
--- a/articles/cosmos-db/mongodb/how-to-setup-rbac.md
+++ b/articles/cosmos-db/mongodb/how-to-setup-rbac.md
@@ -100,7 +100,7 @@ az cosmosdb create -n -g --kind MongoDB --
8. Create a database for users to connect to in the Azure portal.
9. Create an RBAC user with built-in read role.
```powershell
-az cosmosdb mongodb user definition create --account-name --resource-group --body {\"Id\":\"testdb.read\",\"UserName\":\"\",\"Password\":\"\",\"DatabaseName\":\"\",\"CustomData\":\"Some_Random_Info\",\"Mechanisms\":\"SCRAM-SHA-256\",\"Roles\":[{\"Role\":\"read\",\"Db\":\"\"}]}
+az cosmosdb mongodb user definition create --account-name <account-name> --resource-group <resource-group> --body {\"Id\":\"<db-name>.<user-name>\",\"UserName\":\"<user-name>\",\"Password\":\"<password>\",\"DatabaseName\":\"<db-name>\",\"CustomData\":\"Some_Random_Info\",\"Mechanisms\":\"SCRAM-SHA-256\",\"Roles\":[{\"Role\":\"read\",\"Db\":\"<db-name>\"}]}
```
diff --git a/articles/cosmos-db/sql/sql-query-date-time-functions.md b/articles/cosmos-db/sql/sql-query-date-time-functions.md
index 23c1507adb8fe..b2569047507f3 100644
--- a/articles/cosmos-db/sql/sql-query-date-time-functions.md
+++ b/articles/cosmos-db/sql/sql-query-date-time-functions.md
@@ -29,6 +29,7 @@ or numeric ticks whose value is the number of 100 nanosecond ticks which have el
The following functions allow you to easily manipulate DateTime, timestamp, and tick values:
* [DateTimeAdd](sql-query-datetimeadd.md)
+* [DateTimeBin](sql-query-datetimebin.md)
* [DateTimeDiff](sql-query-datetimediff.md)
* [DateTimeFromParts](sql-query-datetimefromparts.md)
* [DateTimePart](sql-query-datetimepart.md)
diff --git a/articles/cosmos-db/sql/sql-query-datetimebin.md b/articles/cosmos-db/sql/sql-query-datetimebin.md
new file mode 100644
index 0000000000000..3b68e9b9ee85b
--- /dev/null
+++ b/articles/cosmos-db/sql/sql-query-datetimebin.md
@@ -0,0 +1,121 @@
+---
+title: DateTimeBin in Azure Cosmos DB query language
+description: Learn about SQL system function DateTimeBin in Azure Cosmos DB.
+author: jcocchi
+ms.service: cosmos-db
+ms.subservice: cosmosdb-sql
+ms.topic: conceptual
+ms.date: 05/27/2022
+ms.author: jucocchi
+ms.custom: query-reference
+---
+
+# DateTimeBin (Azure Cosmos DB)
+ [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
+
+Returns the nearest multiple of *BinSize* below the specified DateTime given the unit of measurement *DateTimePart* and start value of *BinAtDateTime*.
+
+
+## Syntax
+
+```sql
+DateTimeBin ( , [,BinSize] [,BinAtDateTime])
+```
+
+
+## Arguments
+
+*DateTime*
+ The string value date and time to be binned. A UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` where:
+
+|Format|Description|
+|-|-|
+|YYYY|four-digit year|
+|MM|two-digit month (01 = January, etc.)|
+|DD|two-digit day of month (01 through 31)|
+|T|signifier for beginning of time elements|
+|hh|two-digit hour (00 through 23)|
+|mm|two-digit minutes (00 through 59)|
+|ss|two-digit seconds (00 through 59)|
+|.fffffff|seven-digit fractional seconds|
+|Z|UTC (Coordinated Universal Time) designator|
+
+For more information on the ISO 8601 format, see [ISO_8601](https://en.wikipedia.org/wiki/ISO_8601)
+
+*DateTimePart*
+ The date time part specifies the units for BinSize. DateTimeBin is Undefined for DayOfWeek, Year, and Month. The finest granularity for binning by Nanosecond is 100 nanosecond ticks; if Nanosecond is specified with a BinSize less than 100, the result is Undefined. This table lists all valid DateTimePart arguments for DateTimeBin:
+
+| DateTimePart | abbreviations |
+| ------------ | -------------------- |
+| Day | "day", "dd", "d" |
+| Hour | "hour", "hh" |
+| Minute | "minute", "mi", "n" |
+| Second | "second", "ss", "s" |
+| Millisecond | "millisecond", "ms" |
+| Microsecond | "microsecond", "mcs" |
+| Nanosecond | "nanosecond", "ns" |
+
+*BinSize* (optional)
+ Numeric value that specifies the size of bins. If not specified, the default value is one.
+
+
+*BinAtDateTime* (optional)
+ A UTC date and time ISO 8601 string value in the format `YYYY-MM-DDThh:mm:ss.fffffffZ` that specifies the start date to bin from. Default value is the Unix epoch, `1970-01-01T00:00:00.0000000Z`.
+
+
+## Return types
+
+Returns the result of binning the *DateTime* value.
+
+
+## Remarks
+
+DateTimeBin will return `Undefined` for the following reasons:
+- The DateTimePart value specified is invalid
+- The BinSize value is zero or negative
+- The DateTime or BinAtDateTime isn't a valid ISO 8601 DateTime or precedes the year 1601 (the Windows epoch)
+
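+For example, because DateTimeBin is Undefined for Month, a query like the following would return an empty object, since undefined values are omitted from query results:
+
+```sql
+SELECT DateTimeBin('2021-06-28T17:24:29.2991234Z', 'month') AS BinByMonth
+```
+
+```json
+[
+    {}
+]
+```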
+
+## Examples
+
+The following example bins `2021-06-28T17:24:29.2991234Z` by one hour:
+
+```sql
+SELECT DateTimeBin('2021-06-28T17:24:29.2991234Z', 'hh') AS BinByHour
+```
+
+```json
+[
+ {
+ "BinByHour": "2021-06-28T17:00:00.0000000Z"
+ }
+]
+```
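+
+The following example bins the same timestamp by 15-minute intervals, using the Minute *DateTimePart* with a *BinSize* of 15. Binning from the default Unix epoch start, the expected result aligns to quarter-hour boundaries:
+
+```sql
+SELECT DateTimeBin('2021-06-28T17:24:29.2991234Z', 'mi', 15) AS BinByFifteenMinutes
+```
+
+```json
+[
+    {
+        "BinByFifteenMinutes": "2021-06-28T17:15:00.0000000Z"
+    }
+]
+```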
+
+The following example bins `2021-06-28T17:24:29.2991234Z` given different *BinAtDateTime* values:
+
+```sql
+SELECT
+DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5) AS One_BinByFiveDaysUnixEpochImplicit,
+DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5, '1970-01-01T00:00:00.0000000Z') AS Two_BinByFiveDaysUnixEpochExplicit,
+DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5, '1601-01-01T00:00:00.0000000Z') AS Three_BinByFiveDaysFromWindowsEpoch,
+DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5, '2021-01-01T00:00:00.0000000Z') AS Four_BinByFiveDaysFromYearStart,
+DateTimeBin('2021-06-28T17:24:29.2991234Z', 'day', 5, '0001-01-01T00:00:00.0000000Z') AS Five_BinByFiveDaysFromUndefinedYear
+```
+
+```json
+[
+ {
+ "One_BinByFiveDaysUnixEpochImplicit": "2021-06-27T00:00:00.0000000Z",
+ "Two_BinByFiveDaysUnixEpochExplicit": "2021-06-27T00:00:00.0000000Z",
+ "Three_BinByFiveDaysFromWindowsEpoch": "2021-06-28T00:00:00.0000000Z",
+ "Four_BinByFiveDaysFromYearStart": "2021-06-25T00:00:00.0000000Z"
+ }
+]
+```
+
+`Five_BinByFiveDaysFromUndefinedYear` doesn't appear in the output because its *BinAtDateTime* of `0001-01-01T00:00:00.0000000Z` precedes the year 1601 (the Windows epoch). DateTimeBin returns `Undefined` in that case, and undefined values are omitted from query results.
+
+## Next steps
+
+- [Date and time functions Azure Cosmos DB](sql-query-date-time-functions.md)
+- [System functions Azure Cosmos DB](sql-query-system-functions.md)
+- [Introduction to Azure Cosmos DB](../introduction.md)
diff --git a/articles/cosmos-db/table/create-table-dotnet.md b/articles/cosmos-db/table/create-table-dotnet.md
index 5b08457b4d102..d6f2a7c5db985 100644
--- a/articles/cosmos-db/table/create-table-dotnet.md
+++ b/articles/cosmos-db/table/create-table-dotnet.md
@@ -1,13 +1,13 @@
---
title: 'Quickstart: Table API with .NET - Azure Cosmos DB'
description: This quickstart shows how to access the Azure Cosmos DB Table API from a .NET application using the Azure.Data.Tables SDK
-author: DavidCBerry13
+author: rothja
ms.service: cosmos-db
ms.subservice: cosmosdb-table
ms.devlang: csharp
ms.topic: quickstart
ms.date: 09/26/2021
-ms.author: daberry
+ms.author: jroth
ms.custom: devx-track-csharp, mode-api, devx-track-azurecli
---
diff --git a/articles/cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md b/articles/cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md
index 2eeb5f1f5ee1b..c5728cede095d 100644
--- a/articles/cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md
+++ b/articles/cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md
@@ -13,7 +13,9 @@ ms.reviewer: adwise
# Use cost alerts to monitor usage and spending
-This article helps you understand and use Cost Management alerts to monitor your Azure usage and spending. Cost alerts are automatically generated based when Azure resources are consumed. Alerts show all active cost management and billing alerts together in one place. When your consumption reaches a given threshold, alerts are generated by Cost Management. There are three types of cost alerts: budget alerts, credit alerts, and department spending quota alerts.
+This article helps you understand and use Cost Management alerts to monitor your Azure usage and spending. Cost alerts are automatically generated when Azure resources are consumed. Alerts show all active cost management and billing alerts together in one place. When your consumption reaches a given threshold, alerts are generated by Cost Management. There are three main types of cost alerts: budget alerts, credit alerts, and department spending quota alerts.
+
+You can also [create a cost anomaly alert](../understand/analyze-unexpected-charges.md#create-an-anomaly-alert) to automatically get notified when an anomaly is detected.
## Required permissions for alerts
diff --git a/articles/cost-management-billing/manage/direct-ea-azure-usage-charges-invoices.md b/articles/cost-management-billing/manage/direct-ea-azure-usage-charges-invoices.md
index da14a7063722b..622dfe0c0c5d0 100644
--- a/articles/cost-management-billing/manage/direct-ea-azure-usage-charges-invoices.md
+++ b/articles/cost-management-billing/manage/direct-ea-azure-usage-charges-invoices.md
@@ -65,9 +65,9 @@ Enterprise administrators can also view an overall summary of the charges for th
## Download or view your Azure billing invoice
-You can download your invoice from the [Azure portal](https://portal.azure.com) or have it sent in email. Invoices are sent to whoever is set up to receive invoices for the enrollment.
+An EA administrator can download the invoice from the [Azure portal](https://portal.azure.com) or have it sent in email. Invoices are sent to whoever is set up to receive invoices for the enrollment.
-Only an Enterprise Administrator has permission to view and get the billing invoice. To learn more about getting access to billing information, see [Manage access to Azure billing using roles](manage-billing-access.md).
+Only an Enterprise Administrator has permission to view and download the billing invoice. To learn more about getting access to billing information, see [Manage access to Azure billing using roles](manage-billing-access.md).
You receive an Azure invoice when any of the following events occur during your billing cycle:
diff --git a/articles/cost-management-billing/manage/ea-portal-agreements.md b/articles/cost-management-billing/manage/ea-portal-agreements.md
index 3d1854b50509b..707d98098e4dd 100644
--- a/articles/cost-management-billing/manage/ea-portal-agreements.md
+++ b/articles/cost-management-billing/manage/ea-portal-agreements.md
@@ -37,6 +37,8 @@ An enrollment has one of the following status values. Each value determines how
- Migrate to the Microsoft Online Subscription Program (MOSP)
- Confirm disablement of all services associated with the enrollment
+EA credit expires when the EA enrollment ends.
+
**Expired** - The EA enrollment expires when it reaches the enterprise agreement end date and is opted out of the extended term. Sign a new enrollment contract as soon as possible. Although your service won't be disabled immediately, there's a risk of it getting disabled.
As of August 1, 2019, new opt-out forms aren't accepted for Azure commercial customers. Instead, all enrollments go into indefinite extended term. If you want to stop using Azure services, close your subscription in the [Azure portal](https://portal.azure.com). Or, your partner can submit a termination request. There's no change for customers with government agreement types.
diff --git a/articles/cost-management-billing/manage/ea-portal-enrollment-invoices.md b/articles/cost-management-billing/manage/ea-portal-enrollment-invoices.md
index 3a87adb361663..8685cd558aed0 100644
--- a/articles/cost-management-billing/manage/ea-portal-enrollment-invoices.md
+++ b/articles/cost-management-billing/manage/ea-portal-enrollment-invoices.md
@@ -3,7 +3,7 @@ title: Azure Enterprise enrollment invoices
description: This article explains how to manage and act on your Azure Enterprise invoice.
author: bandersmsft
ms.author: banders
-ms.date: 12/03/2021
+ms.date: 05/31/2022
ms.topic: conceptual
ms.service: cost-management-billing
ms.subservice: enterprise
@@ -204,7 +204,7 @@ If an Amendment M503 is signed, you can move any agreement from any frequency to
### Request an invoice copy
-To request a copy of your invoice, contact your partner.
+If you're an indirect enterprise agreement customer, contact your partner to request a copy of your invoice.
## Credits and adjustments
diff --git a/articles/cost-management-billing/reservations/reservation-renew.md b/articles/cost-management-billing/reservations/reservation-renew.md
index 58896c48c15d4..01ef55972ff9c 100644
--- a/articles/cost-management-billing/reservations/reservation-renew.md
+++ b/articles/cost-management-billing/reservations/reservation-renew.md
@@ -43,7 +43,7 @@ The following conditions are required to renew a reservation:
## Default renewal settings
-By default, the renewal inherits all properties from the expiring reservation. A reservation renewal purchase has the same SKU, region, scope, billing subscription, term, and quantity.
+By default, the renewal inherits all properties except the automatic renewal setting from the expiring reservation. A reservation renewal purchase has the same SKU, region, scope, billing subscription, term, and quantity.
However, you can update the renewal reservation purchase quantity to optimize your savings.
diff --git a/articles/cost-management-billing/understand/analyze-unexpected-charges.md b/articles/cost-management-billing/understand/analyze-unexpected-charges.md
index 03d2ecfca4b10..318f58a77ea52 100644
--- a/articles/cost-management-billing/understand/analyze-unexpected-charges.md
+++ b/articles/cost-management-billing/understand/analyze-unexpected-charges.md
@@ -7,7 +7,7 @@ ms.reviewer: micflan
ms.service: cost-management-billing
ms.subservice: cost-management
ms.topic: conceptual
-ms.date: 04/02/2022
+ms.date: 05/31/2022
ms.author: banders
ms.custom: contperf-fy21q1
---
@@ -16,6 +16,8 @@ ms.custom: contperf-fy21q1
The article helps you identify anomalies and unexpected changes in your cloud costs using Cost Management and Billing. You'll start with anomaly detection for subscriptions in cost analysis to identify any atypical usage patterns based on your cost and usage trends. You'll then learn how to drill into cost information to find and investigate cost spikes and dips.
+You can also create an anomaly alert to automatically get notified when an anomaly is detected.
+
In general, there are three types of changes that you might want to investigate:
- New costs—For example, a resource that was started or added such as a virtual machine. New costs often appear as a cost starting from zero.
@@ -106,6 +108,22 @@ If you have an existing policy of [tagging resources](../costs/cost-mgt-best-pra
If you've used the preceding strategies and you still don't understand why you received a charge or if you need other help with billing issues, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+## Create an anomaly alert
+
+You can create an anomaly alert to automatically get notified when an anomaly is detected. All email recipients get notified when a subscription cost anomaly is detected.
+
+An anomaly alert email includes a summary of changes in resource group count and cost. It also includes the top resource group changes for the day compared to the previous 60 days. And, it has a direct link to the Azure portal so that you can review the cost and investigate further.
+
+1. Start on a subscription scope.
+1. In the left menu, select **Cost alerts**.
+1. On the Cost alerts page, select **+ Add** > **Add anomaly alert**.
+1. On the Subscribe to emails page, enter required information and then select **Save**.
+ :::image type="content" source="./media/analyze-unexpected-charges/subscribe-emails.png" alt-text="Screenshot showing the Subscribe to emails page where you enter notification information for an alert." lightbox="./media/analyze-unexpected-charges/subscribe-emails.png" :::
+
+Here's an example email generated for an anomaly alert.
+
+:::image type="content" source="./media/analyze-unexpected-charges/anomaly-alert-email.png" alt-text="Screenshot showing an example anomaly alert email." lightbox="./media/analyze-unexpected-charges/anomaly-alert-email.png" :::
+
## Next steps
- Learn about how to [Optimize your cloud investment with Cost Management](../costs/cost-mgt-best-practices.md).
\ No newline at end of file
diff --git a/articles/cost-management-billing/understand/download-azure-invoice.md b/articles/cost-management-billing/understand/download-azure-invoice.md
index 0e48aa7af0f95..20e685e6887cc 100644
--- a/articles/cost-management-billing/understand/download-azure-invoice.md
+++ b/articles/cost-management-billing/understand/download-azure-invoice.md
@@ -14,7 +14,7 @@ ms.author: banders
# View and download your Microsoft Azure invoice
-You can download your invoice in the [Azure portal](https://portal.azure.com/) or have it sent in email. If you're an Azure customer with an Enterprise Agreement (EA customer), you can't download your organization's invoice. Instead, invoices are sent to the person set to receive invoices for the enrollment.
+You can download your invoice in the [Azure portal](https://portal.azure.com/) or have it sent in email. If you're an Azure customer with an Enterprise Agreement (EA customer), only an EA administrator can download and view your organization's invoice. Invoices are sent to the person set to receive invoices for the enrollment.
## When invoices are generated
diff --git a/articles/cost-management-billing/understand/media/analyze-unexpected-charges/anomaly-alert-email.png b/articles/cost-management-billing/understand/media/analyze-unexpected-charges/anomaly-alert-email.png
new file mode 100644
index 0000000000000..ad42ebd8cb611
Binary files /dev/null and b/articles/cost-management-billing/understand/media/analyze-unexpected-charges/anomaly-alert-email.png differ
diff --git a/articles/cost-management-billing/understand/media/analyze-unexpected-charges/subscribe-emails.png b/articles/cost-management-billing/understand/media/analyze-unexpected-charges/subscribe-emails.png
new file mode 100644
index 0000000000000..0deaf913907f7
Binary files /dev/null and b/articles/cost-management-billing/understand/media/analyze-unexpected-charges/subscribe-emails.png differ
diff --git a/articles/data-factory/data-flow-source.md b/articles/data-factory/data-flow-source.md
index 8ba1b0b922188..7ba913e2fd695 100644
--- a/articles/data-factory/data-flow-source.md
+++ b/articles/data-factory/data-flow-source.md
@@ -8,7 +8,7 @@ ms.service: data-factory
ms.subservice: data-flows
ms.topic: conceptual
ms.custom: seo-lt-2019
-ms.date: 05/27/2022
+ms.date: 05/31/2022
---
# Source transformation in mapping data flow
@@ -123,7 +123,7 @@ If your text file has no defined schema, select **Detect data type** so that the
**Reset schema** resets the projection to what is defined in the referenced dataset.
-You can modify the column data types in a downstream derived-column transformation. Use a select transformation to modify the column names.
+**Overwrite schema** allows you to modify the projected data types here in the source transformation, overwriting the schema-defined data types. You can alternatively modify the column data types in a downstream derived-column transformation. Use a select transformation to modify the column names.
### Import schema
diff --git a/articles/defender-for-cloud/episode-eight.md b/articles/defender-for-cloud/episode-eight.md
index 65d5b10e5a78d..09ebc4fae657e 100644
--- a/articles/defender-for-cloud/episode-eight.md
+++ b/articles/defender-for-cloud/episode-eight.md
@@ -2,7 +2,7 @@
title: Microsoft Defender for IoT
description: Learn how Defender for IoT discovers devices to monitor and how it fits in the Microsoft Security portfolio.
ms.topic: reference
-ms.date: 05/25/2022
+ms.date: 06/01/2022
---
# Microsoft Defender for IoT
@@ -11,7 +11,7 @@ ms.date: 05/25/2022
-
+
- [1:20](/shows/mdc-in-the-field/defender-for-iot#time=01m20s) - Overview of the Defender for IoT solution
diff --git a/articles/defender-for-cloud/episode-eleven.md b/articles/defender-for-cloud/episode-eleven.md
index a95c45dc7198d..a8ca65d0fc5fc 100644
--- a/articles/defender-for-cloud/episode-eleven.md
+++ b/articles/defender-for-cloud/episode-eleven.md
@@ -2,7 +2,7 @@
title: Threat landscape for Defender for Containers
description: Learn about the new detections that are available for different attacks and how Defender for Containers can help to quickly identify malicious activities in containers.
ms.topic: reference
-ms.date: 05/25/2022
+ms.date: 06/01/2022
---
# Threat landscape for Defender for Containers
@@ -11,7 +11,7 @@ ms.date: 05/25/2022
-
+
- [01:15](/shows/mdc-in-the-field/threat-landscape-containers#time=01m15s) - The evolution of attacks against Kubernetes
diff --git a/articles/defender-for-cloud/episode-five.md b/articles/defender-for-cloud/episode-five.md
index 86f35a328c8f7..4dfcce09854b7 100644
--- a/articles/defender-for-cloud/episode-five.md
+++ b/articles/defender-for-cloud/episode-five.md
@@ -2,7 +2,7 @@
title: Microsoft Defender for Servers
description: Learn all about Microsoft Defender for Servers from the product manager.
ms.topic: reference
-ms.date: 05/25/2022
+ms.date: 06/01/2022
---
# Microsoft Defender for Servers
@@ -11,7 +11,7 @@ ms.date: 05/25/2022
-
+
- [1:22](/shows/mdc-in-the-field/defender-for-containers#time=01m22s) - Overview of the announcements for Microsoft Defender for Servers
diff --git a/articles/defender-for-cloud/episode-four.md b/articles/defender-for-cloud/episode-four.md
index f8850e1f96df8..a7b3ff43556f6 100644
--- a/articles/defender-for-cloud/episode-four.md
+++ b/articles/defender-for-cloud/episode-four.md
@@ -2,7 +2,7 @@
title: Security posture management improvements in Microsoft Defender for Cloud
description: Learn how to manage your security posture with Microsoft Defender for Cloud.
ms.topic: reference
-ms.date: 05/25/2022
+ms.date: 06/01/2022
---
# Security posture management improvements in Microsoft Defender for Cloud
@@ -11,7 +11,7 @@ ms.date: 05/25/2022
-
+
- [1:24](/shows/mdc-in-the-field/defender-for-containers#time=01m24s) - Security recommendation refresh time changes
diff --git a/articles/defender-for-cloud/episode-nine.md b/articles/defender-for-cloud/episode-nine.md
index edd39666bc8b5..8b9b9656ec976 100644
--- a/articles/defender-for-cloud/episode-nine.md
+++ b/articles/defender-for-cloud/episode-nine.md
@@ -2,7 +2,7 @@
title: Microsoft Defender for Containers in a multi-cloud environment
description: Learn about Microsoft Defender for Containers implementation in AWS and GCP.
ms.topic: reference
-ms.date: 05/25/2022
+ms.date: 06/01/2022
---
# Microsoft Defender for Containers in a Multi-Cloud Environment
@@ -13,7 +13,7 @@ Maya explains about the new workload protection capabilities related to Containe
-
+
- [01:12](/shows/mdc-in-the-field/containers-multi-cloud#time=01m12s) - Container protection in a multi-cloud environment
diff --git a/articles/defender-for-cloud/episode-one.md b/articles/defender-for-cloud/episode-one.md
index f0e3673e9d56b..5187b1bbb0fbc 100644
--- a/articles/defender-for-cloud/episode-one.md
+++ b/articles/defender-for-cloud/episode-one.md
@@ -2,7 +2,7 @@
title: New AWS connector in Microsoft Defender for Cloud
description: Learn all about the new AWS connector in Microsoft Defender for Cloud.
ms.topic: reference
-ms.date: 05/25/2022
+ms.date: 05/29/2022
---
# New AWS connector in Microsoft Defender for Cloud
@@ -11,7 +11,7 @@ ms.date: 05/25/2022
-
+
- [00:00](/shows/mdc-in-the-field/aws-connector) - Introduction
diff --git a/articles/defender-for-cloud/episode-seven.md b/articles/defender-for-cloud/episode-seven.md
index b0fda391035d2..444c534c72633 100644
--- a/articles/defender-for-cloud/episode-seven.md
+++ b/articles/defender-for-cloud/episode-seven.md
@@ -2,7 +2,7 @@
title: New GCP connector in Microsoft Defender for Cloud
description: Learn all about the new GCP connector in Microsoft Defender for Cloud.
ms.topic: reference
-ms.date: 05/25/2022
+ms.date: 05/29/2022
---
# New GCP connector in Microsoft Defender for Cloud
@@ -11,7 +11,7 @@ ms.date: 05/25/2022
-
+
- [1:23](/shows/mdc-in-the-field/gcp-connector#time=01m23s) - Overview of the new GCP connector
diff --git a/articles/defender-for-cloud/episode-six.md b/articles/defender-for-cloud/episode-six.md
index b624c486560c4..0807275e84a24 100644
--- a/articles/defender-for-cloud/episode-six.md
+++ b/articles/defender-for-cloud/episode-six.md
@@ -13,7 +13,7 @@ Carlos also covers how Microsoft Defender for Cloud is used to fill the gap betw
-
+
- [1:30](/shows/mdc-in-the-field/lessons-from-the-field#time=01m30s) - Why Microsoft Defender for Cloud is a unique solution when compared with other competitors?
diff --git a/articles/defender-for-cloud/episode-ten.md b/articles/defender-for-cloud/episode-ten.md
index c3d232b113290..fb0c18e157968 100644
--- a/articles/defender-for-cloud/episode-ten.md
+++ b/articles/defender-for-cloud/episode-ten.md
@@ -2,7 +2,7 @@
title: Protecting containers in GCP with Defender for Containers
description: Learn how to use Defender for Containers, to protect Containers that are located in Google Cloud Projects.
ms.topic: reference
-ms.date: 05/25/2022
+ms.date: 05/29/2022
---
# Protecting containers in GCP with Defender for Containers
@@ -13,7 +13,7 @@ Nadav gives insights about workload protection for GKE and how to obtain visibil
-
+
- [00:55](/shows/mdc-in-the-field/gcp-containers#time=00m55s) - Architecture solution for Defender for Containers and support for GKE
diff --git a/articles/defender-for-cloud/episode-three.md b/articles/defender-for-cloud/episode-three.md
index d13c1e568fa51..ce766f87ddc1c 100644
--- a/articles/defender-for-cloud/episode-three.md
+++ b/articles/defender-for-cloud/episode-three.md
@@ -2,7 +2,7 @@
title: Microsoft Defender for Containers
description: Learn about Microsoft Defender for Containers.
ms.topic: reference
-ms.date: 05/25/2022
+ms.date: 05/29/2022
---
# Microsoft Defender for Containers
@@ -11,7 +11,7 @@ ms.date: 05/25/2022
-
+
- [1:09](/shows/mdc-in-the-field/defender-for-containers#time=01m09s) - What's new in the Defender for Containers plan?
diff --git a/articles/defender-for-cloud/episode-twelve.md b/articles/defender-for-cloud/episode-twelve.md
index 22749ebcfba03..f61a67ffb9b24 100644
--- a/articles/defender-for-cloud/episode-twelve.md
+++ b/articles/defender-for-cloud/episode-twelve.md
@@ -2,7 +2,7 @@
title: Enhanced workload protection features in Defender for Servers
description: Learn about the enhanced capabilities available in Defender for Servers, for VMs that are located in GCP, AWS and on-premises.
ms.topic: reference
-ms.date: 05/25/2022
+ms.date: 05/29/2022
---
# Enhanced workload protection features in Defender for Servers
@@ -13,7 +13,7 @@ Netta explains how Defender for Servers applies Azure Arc as a bridge to onboard
-
+
- [00:55](/shows/mdc-in-the-field/enhanced-workload-protection#time=00m55s) - Arc Auto-provisioning in GCP
diff --git a/articles/defender-for-cloud/episode-two.md b/articles/defender-for-cloud/episode-two.md
index ed55c44dd9615..a190cd59a2201 100644
--- a/articles/defender-for-cloud/episode-two.md
+++ b/articles/defender-for-cloud/episode-two.md
@@ -2,7 +2,7 @@
title: Integrate Azure Purview with Microsoft Defender for Cloud
description: Learn how to integrate Azure Purview with Microsoft Defender for Cloud.
ms.topic: reference
-ms.date: 05/25/2022
+ms.date: 05/29/2022
---
# Integrate Azure Purview with Microsoft Defender for Cloud
@@ -13,7 +13,7 @@ David explains the use case scenarios for this integration and how the data clas
-
+
- [1:36](/shows/mdc-in-the-field/integrate-with-purview) - Overview of Azure Purview
diff --git a/articles/defender-for-cloud/review-security-recommendations.md b/articles/defender-for-cloud/review-security-recommendations.md
index 99acbe8208b0b..f5cf4fd9b9beb 100644
--- a/articles/defender-for-cloud/review-security-recommendations.md
+++ b/articles/defender-for-cloud/review-security-recommendations.md
@@ -130,7 +130,7 @@ The Insights column of the page gives you more details for each recommendation.
| Icon | Name | Description |
|--|--|--|
-| :::image type="icon" source="media/secure-score-security-controls/preview-icon.png" border="false"::: | *Preview recommendation** | This recommendation won't affect your secure score until it's GA. |
+| :::image type="icon" source="media/secure-score-security-controls/preview-icon.png" border="false"::: | Preview recommendation | This recommendation won't affect your secure score until it's GA. |
| :::image type="icon" source="media/secure-score-security-controls/fix-icon.png" border="false"::: | **Fix** | From within the recommendation details page, you can use 'Fix' to resolve this issue. |
| :::image type="icon" source="media/secure-score-security-controls/enforce-icon.png" border="false"::: | **Enforce** | From within the recommendation details page, you can automatically deploy a policy to fix this issue whenever someone creates a non-compliant resource. |
| :::image type="icon" source="media/secure-score-security-controls/deny-icon.png" border="false"::: | **Deny** | From within the recommendation details page, you can prevent new resources from being created with this issue. |
diff --git a/articles/defender-for-iot/device-builders/concept-micro-agent-configuration.md b/articles/defender-for-iot/device-builders/concept-micro-agent-configuration.md
index 84473704c26d2..09b147dad0b86 100644
--- a/articles/defender-for-iot/device-builders/concept-micro-agent-configuration.md
+++ b/articles/defender-for-iot/device-builders/concept-micro-agent-configuration.md
@@ -53,7 +53,7 @@ These configurations include process, and network activity collectors.
|--|--|--|--|
| **Interval** | `High` `Medium` `Low` | Determines the sending frequency. | `Medium` |
| **Aggregation mode** | `True` `False` | Determines whether to process event aggregation for an identical event. | `True` |
-| **Cache size** | cycle FIFO | Defines the number of events collected in between the the times that data is sent. | `256` |
+| **Cache size** | cycle FIFO | Defines the number of events collected in between the times that data is sent. | `256` |
| **Disable collector** | `True` `False` | Determines whether or not the collector is operational. | `False` |
| | | | |
@@ -67,7 +67,7 @@ These configurations include process, and network activity collectors.
| Setting Name | Setting options | Description | Default |
|--|--|--|--|
-| **Devices** | A list of the network devices separated by a comma. <br> For example `eth0,eth1` | Defines the list of network devices (interfaces) that the agent will use to monitor the traffic. <br> If a network device is not listed, the Network Raw events will not be recorded for the missing device.| `eth0` |
+| **Devices** | A list of the network devices separated by a comma. <br> For example `eth0,eth1` | Defines the list of network devices (interfaces) that the agent will use to monitor the traffic. <br> If a network device isn't listed, the Network Raw events will not be recorded for the missing device.| `eth0` |
| | | | |
## Process collector specific-settings
diff --git a/articles/defender-for-iot/device-builders/concept-recommendations.md b/articles/defender-for-iot/device-builders/concept-recommendations.md
index 2768baa1ef8c0..6ca7ce4bef515 100644
--- a/articles/defender-for-iot/device-builders/concept-recommendations.md
+++ b/articles/defender-for-iot/device-builders/concept-recommendations.md
@@ -1,6 +1,6 @@
---
title: Security recommendations for IoT Hub
-description: Learn about the concept of security recommendations and how they are used in the Defender for IoT Hub.
+description: Learn about the concept of security recommendations and how they're used in the Defender for IoT Hub.
ms.topic: conceptual
ms.date: 11/09/2021
---
diff --git a/articles/defender-for-iot/device-builders/concept-security-module.md b/articles/defender-for-iot/device-builders/concept-security-module.md
index 8610689e45fe6..f4c81649bd3eb 100644
--- a/articles/defender-for-iot/device-builders/concept-security-module.md
+++ b/articles/defender-for-iot/device-builders/concept-security-module.md
@@ -1,6 +1,6 @@
---
title: Defender-IoT-micro-agent and device twins
-description: Learn about the concept of Defender-IoT-micro-agent twins and how they are used in Defender for IoT.
+description: Learn about the concept of Defender-IoT-micro-agent twins and how they're used in Defender for IoT.
ms.topic: conceptual
ms.date: 03/28/2022
---
diff --git a/articles/defender-for-iot/device-builders/configure-pam-to-audit-sign-in-events.md b/articles/defender-for-iot/device-builders/configure-pam-to-audit-sign-in-events.md
index 70f7e499fd4cf..01a7c69f92926 100644
--- a/articles/defender-for-iot/device-builders/configure-pam-to-audit-sign-in-events.md
+++ b/articles/defender-for-iot/device-builders/configure-pam-to-audit-sign-in-events.md
@@ -1,6 +1,6 @@
---
title: Configure Pluggable Authentication Modules (PAM) to audit sign-in events (Preview)
-description: Learn how to configure Pluggable Authentication Modules (PAM) to audit sign-in events when syslog is not configured for your device.
+description: Learn how to configure Pluggable Authentication Modules (PAM) to audit sign-in events when syslog isn't configured for your device.
ms.date: 02/20/2022
ms.topic: how-to
---
diff --git a/articles/defender-for-iot/device-builders/how-to-agent-configuration.md b/articles/defender-for-iot/device-builders/how-to-agent-configuration.md
index 9de9cdac2a9cb..7b4044ea87403 100644
--- a/articles/defender-for-iot/device-builders/how-to-agent-configuration.md
+++ b/articles/defender-for-iot/device-builders/how-to-agent-configuration.md
@@ -41,9 +41,9 @@ If the agent configuration object does not exist in the **azureiotsecurity** mod
## Configuration schema and validation
-Make sure to validate your agent configuration against this [schema](https://aka.ms/iot-security-github-module-schema). An agent will not launch if the configuration object does not match the schema.
+Make sure to validate your agent configuration against this [schema](https://aka.ms/iot-security-github-module-schema). An agent will not launch if the configuration object doesn't match the schema.
-If, while the agent is running, the configuration object is changed to a non-valid configuration (the configuration does not match the schema), the agent will ignore the invalid configuration and will continue using the current configuration.
+If, while the agent is running, the configuration object is changed to a non-valid configuration (the configuration doesn't match the schema), the agent will ignore the invalid configuration and will continue using the current configuration.
### Configuration validation
diff --git a/articles/defender-for-iot/device-builders/how-to-azure-rtos-security-module.md b/articles/defender-for-iot/device-builders/how-to-azure-rtos-security-module.md
index c4d043c662922..b4f9d27d2c162 100644
--- a/articles/defender-for-iot/device-builders/how-to-azure-rtos-security-module.md
+++ b/articles/defender-for-iot/device-builders/how-to-azure-rtos-security-module.md
@@ -30,7 +30,7 @@ The default behavior of each configuration is provided in the following tables:
| ASC_SECURITY_MODULE_ID | String | defender-iot-micro-agent | The unique identifier of the device. |
| SECURITY_MODULE_VERSION_(MAJOR)(MINOR)(PATCH) | Number | 3.2.1 | The version. |
| ASC_SECURITY_MODULE_SEND_MESSAGE_RETRY_TIME | Number | 3 | The amount of time the Defender-IoT-micro-agent will take to send the security message after a fail. (in seconds) |
-| ASC_SECURITY_MODULE_PENDING_TIME | Number | 300 | The Defender-IoT-micro-agent pending time (in seconds). The state will change to suspend, if the time is exceeded.. |
+| ASC_SECURITY_MODULE_PENDING_TIME | Number | 300 | The Defender-IoT-micro-agent pending time (in seconds). The state will change to suspend, if the time is exceeded. |
## Collection
diff --git a/articles/defender-for-iot/device-builders/how-to-deploy-linux-c.md b/articles/defender-for-iot/device-builders/how-to-deploy-linux-c.md
index 18914895450a2..56dffc8b98c1b 100644
--- a/articles/defender-for-iot/device-builders/how-to-deploy-linux-c.md
+++ b/articles/defender-for-iot/device-builders/how-to-deploy-linux-c.md
@@ -48,7 +48,7 @@ This script performs the following function:
1. Installs prerequisites.
-1. Adds a service user (with interactive sign in disabled).
+1. Adds a service user (with interactive sign-in disabled).
1. Installs the agent as a **Daemon** - assumes the device uses **systemd** for service management.
diff --git a/articles/defender-for-iot/device-builders/how-to-deploy-linux-cs.md b/articles/defender-for-iot/device-builders/how-to-deploy-linux-cs.md
index b2c1d55994799..24c87fdd44bc9 100644
--- a/articles/defender-for-iot/device-builders/how-to-deploy-linux-cs.md
+++ b/articles/defender-for-iot/device-builders/how-to-deploy-linux-cs.md
@@ -46,7 +46,7 @@ This script performs the following actions:
- Installs prerequisites.
-- Adds a service user (with interactive sign in disabled).
+- Adds a service user (with interactive sign-in disabled).
- Installs the agent as a **Daemon** - assumes the device uses **systemd** for legacy deployment model.
diff --git a/articles/defender-for-iot/device-builders/how-to-install-micro-agent-for-edge.md b/articles/defender-for-iot/device-builders/how-to-install-micro-agent-for-edge.md
index fcb19f76aa539..4421b0bf6641d 100644
--- a/articles/defender-for-iot/device-builders/how-to-install-micro-agent-for-edge.md
+++ b/articles/defender-for-iot/device-builders/how-to-install-micro-agent-for-edge.md
@@ -67,7 +67,7 @@ This article explains how to install, and authenticate the Defender micro agent
systemctl status defender-iot-micro-agent.service
```
- 1. Ensure that the service is stable by making sure it is `active` and that the uptime of the process is appropriate
+ 1. Ensure that the service is stable by making sure it's `active` and that the uptime of the process is appropriate.
:::image type="content" source="media/quickstart-standalone-agent-binary-installation/active-running.png" alt-text="Check to make sure your service is stable and active.":::
diff --git a/articles/defender-for-iot/device-builders/how-to-manage-device-inventory-on-the-cloud.md b/articles/defender-for-iot/device-builders/how-to-manage-device-inventory-on-the-cloud.md
index 88269eae10058..54f082b6dac0f 100644
--- a/articles/defender-for-iot/device-builders/how-to-manage-device-inventory-on-the-cloud.md
+++ b/articles/defender-for-iot/device-builders/how-to-manage-device-inventory-on-the-cloud.md
@@ -13,7 +13,7 @@ The device inventory can be used to view device systems, and network information
Some of the benefits of the device inventory include:
-- Identify all IOT, and OT devices from different inputs. For example, allowing you to understand which devices in your environment are not communicating, and will require troubleshooting.
+- Identify all IoT and OT devices from different inputs. For example, you can understand which devices in your environment aren't communicating and will require troubleshooting.
- Group, and filter devices by site, type, or vendor.
@@ -99,7 +99,7 @@ For a list of filters that can be applied to the device inventory table, see the
1. Select the **Apply button**.
-Multiple filters can be applied at one time. The filters are not saved when you leave the Device inventory page.
+Multiple filters can be applied at one time. The filters aren't saved when you leave the Device inventory page.
## View device information
@@ -115,7 +115,7 @@ Select the :::image type="icon" source="media/how-to-manage-device-inventory-on-
## How to identify devices that have not recently communicated with the Azure cloud
-If you are under the impression that certain devices are not actively communicating, there is a way to check, and see which devices have not communicated in a specified time period.
+If you suspect that certain devices aren't actively communicating, you can check which devices haven't communicated in a specified time period.
**To identify all devices that have not communicated recently**:
diff --git a/articles/defender-for-iot/device-builders/how-to-region-move.md b/articles/defender-for-iot/device-builders/how-to-region-move.md
index 56a8257b5c671..0b5d66d9bad81 100644
--- a/articles/defender-for-iot/device-builders/how-to-region-move.md
+++ b/articles/defender-for-iot/device-builders/how-to-region-move.md
@@ -24,7 +24,7 @@ You can move a Microsoft Defender for IoT "iotsecuritysolutions" resource to a d
## Prepare
-In this section, you will prepare to move the resource for the move by finding the resource and confirming it is in a region you wish to move from.
+In this section, you'll prepare for the move by finding the resource and confirming it's in the region you want to move it from.
Before transitioning the resource to the new region, we recommended using [log analytics](../../azure-monitor/logs/quick-create-workspace.md) to store alerts, and raw events.
@@ -44,19 +44,19 @@ Before transitioning the resource to the new region, we recommended using [log a
1. Select your hub from the list.
-1. Ensure that you have selected the correct hub, and that it is in the region you want to move it from.
+1. Ensure that you've selected the correct hub, and that it is in the region you want to move it from.
:::image type="content" source="media/region-move/location.png" alt-text="Screenshot showing you the region your hub is located in.":::
## Move
-You are now ready to move your resource to your new location. Follow [these instructions](../../iot-hub/iot-hub-how-to-clone.md) to move your IoT Hub.
+You're now ready to move your resource to your new location. Follow [these instructions](../../iot-hub/iot-hub-how-to-clone.md) to move your IoT Hub.
After transferring, and enabling the resource, you can link to the same log analytics workspace that was configured earlier.
## Verify
-In this section, you will verify that the resource has been moved, that the connection to the IoT Hub has been enabled, and that everything is working correctly.
+In this section, you'll verify that the resource has been moved, that the connection to the IoT Hub has been enabled, and that everything is working correctly.
**To verify the resource is in the correct region**:
diff --git a/articles/defender-for-iot/device-builders/how-to-security-data-access.md b/articles/defender-for-iot/device-builders/how-to-security-data-access.md
index 051546c9a37f8..cc942345c2c3b 100644
--- a/articles/defender-for-iot/device-builders/how-to-security-data-access.md
+++ b/articles/defender-for-iot/device-builders/how-to-security-data-access.md
@@ -18,13 +18,13 @@ Defender for IoT stores security alerts, recommendations, and raw security data
To configure which Log Analytics workspace is used:
1. Open your IoT hub.
-1. Click the **Settings** blade under the **Security** section.
-1. Click **Data Collection**, and change your Log Analytics workspace configuration.
+1. Select the **Settings** blade under the **Security** section.
+1. Select **Data Collection**, and change your Log Analytics workspace configuration.
To access your alerts and recommendations in your Log Analytics workspace after configuration:
1. Choose an alert or recommendation in Defender for IoT.
-1. Click **further investigation**, then click **To see which devices have this alert click here and view the DeviceId column**.
+1. Select **further investigation**, then select **To see which devices have this alert click here and view the DeviceId column**.
For details on querying data from Log Analytics, see [Get started with log queries in Azure Monitor](../../azure-monitor/logs/get-started-queries.md).
diff --git a/articles/defender-for-iot/device-builders/quickstart-onboard-iot-hub.md b/articles/defender-for-iot/device-builders/quickstart-onboard-iot-hub.md
index 1016ee7a721bf..731f0a0f6983b 100644
--- a/articles/defender-for-iot/device-builders/quickstart-onboard-iot-hub.md
+++ b/articles/defender-for-iot/device-builders/quickstart-onboard-iot-hub.md
@@ -47,7 +47,7 @@ You can onboard Defender for IoT to an existing IoT Hub, where you can then moni
:::image type="content" source="media/quickstart-onboard-iot-hub/secure-your-iot-solution.png" alt-text="Select the secure your IoT solution button to secure your solution." lightbox="media/quickstart-onboard-iot-hub/secure-your-iot-solution-expanded.png":::
-The **Secure your IoT solution** button will only appear if the IoT Hub has not already been onboarded, or if you set the Defender for IoT toggle to **Off** while onboarding.
+The **Secure your IoT solution** button will only appear if the IoT Hub hasn't already been onboarded, or if you set the Defender for IoT toggle to **Off** while onboarding.
:::image type="content" source="media/quickstart-onboard-iot-hub/toggle-is-off.png" alt-text="If your toggle was set to off during onboarding.":::
diff --git a/articles/defender-for-iot/device-builders/references-defender-for-iot-glossary.md b/articles/defender-for-iot/device-builders/references-defender-for-iot-glossary.md
index dc5fbd10375ed..7c4b1877d2d12 100644
--- a/articles/defender-for-iot/device-builders/references-defender-for-iot-glossary.md
+++ b/articles/defender-for-iot/device-builders/references-defender-for-iot-glossary.md
@@ -23,7 +23,7 @@ This glossary provides a brief description of important terms and concepts for t
|--|--|--|
| **Device twins** | Device twins are JSON documents that store device state information including metadata, configurations, and conditions. | [Module Twin](#m) <br> [Defender-IoT-micro-agent twin](#s) |
| **Defender-IoT-micro-agent twin** `(DB)` | The Defender-IoT-micro-agent twin holds all of the information that is relevant to device security, for each specific device in your solution. | [Device twin](#d) <br> [Module Twin](#m) |
-| **Device inventory** | Defender for IoT identifies, and classifies devices as a single unique network device in the inventory for: <br> - Standalone IT, OT, and IoT devices with 1 or multiple NICs. <br> - Devices composed of multiple backplane components. This includes all racks, slots, and modules. <br> - Devices that act as network infrastructure. For example, switches, and routers with multiple NICs. <br> - Public internet IP addresses, multicast groups, and broadcast groups are not considered inventory devices. <br> Devices that have been inactive for more than 60 days are classified as inactive Inventory devices.|
+| **Device inventory** | Defender for IoT identifies, and classifies devices as a single unique network device in the inventory for: <br> - Standalone IT, OT, and IoT devices with 1 or multiple NICs. <br> - Devices composed of multiple backplane components. This includes all racks, slots, and modules. <br> - Devices that act as network infrastructure. For example, switches, and routers with multiple NICs. <br> - Public internet IP addresses, multicast groups, and broadcast groups aren't considered inventory devices. <br> Devices that have been inactive for more than 60 days are classified as inactive Inventory devices.|
## E
diff --git a/articles/defender-for-iot/device-builders/release-notes.md b/articles/defender-for-iot/device-builders/release-notes.md
index 18d41a7a6ed02..4dbeec1b47f47 100644
--- a/articles/defender-for-iot/device-builders/release-notes.md
+++ b/articles/defender-for-iot/device-builders/release-notes.md
@@ -33,7 +33,7 @@ Listed below are the support, breaking change policies for Defender for IoT, and
- **CIS benchmarks**: The micro agent now supports recommendations based on CIS Distribution Independent Linux Benchmarks, version 2.0.0, and the ability to disable specific CIS Benchmark checks or groups using twin configurations. For more information, see [Micro agent configurations (Preview)](concept-micro-agent-configuration.md).
-- **Micro agent supported devices list expands**: The micro agent now supports Debian 11 AMD64 and ARM32v7 devices, as well as Ubuntu Server 18.04 ARM32 Linux devices & Ubuntu Server 20.04 ARM32 & ARM64 Linux devices.
+- **Micro agent supported devices list expands**: The micro agent now supports Debian 11 AMD64 and ARM32v7 devices, Ubuntu Server 18.04 ARM32 Linux devices, and Ubuntu Server 20.04 ARM32 and ARM64 Linux devices.
For more information, see [Agent portfolio overview and OS support (Preview)](concept-agent-portfolio-overview-os-support.md).
@@ -50,7 +50,7 @@ Listed below are the support, breaking change policies for Defender for IoT, and
- DNS network activity on managed devices is now supported. Microsoft threat intelligence security graph can now detect suspicious activity based on DNS traffic.
-- [Leaf device proxying](../../iot-edge/how-to-connect-downstream-iot-edge-device.md#integrate-microsoft-defender-for-iot-with-iot-edge-gateway): There is now an enhanced integration with IoT Edge. This integration enhances the connectivity between the agent, and the cloud using leaf device proxying.
+- [Leaf device proxying](../../iot-edge/how-to-connect-downstream-iot-edge-device.md#integrate-microsoft-defender-for-iot-with-iot-edge-gateway): There's now an enhanced integration with IoT Edge. This integration enhances the connectivity between the agent and the cloud using leaf device proxying.
## October 2021
diff --git a/articles/defender-for-iot/device-builders/resources-agent-frequently-asked-questions.md b/articles/defender-for-iot/device-builders/resources-agent-frequently-asked-questions.md
index 24fd220f6f3ae..ee770f690d266 100644
--- a/articles/defender-for-iot/device-builders/resources-agent-frequently-asked-questions.md
+++ b/articles/defender-for-iot/device-builders/resources-agent-frequently-asked-questions.md
@@ -11,7 +11,7 @@ This article provides a list of frequently asked questions and answers about the
## Do I have to install an embedded security agent?
-Agent installation on your IoT devices isn't mandatory in order to enable Defender for IoT. You can choose between the following two options There are four different levels of security monitoring, and management capabilities which will provide different levels of protection:
+Agent installation on your IoT devices isn't mandatory in order to enable Defender for IoT. You can choose between the following options. There are four different levels of security monitoring and management capabilities, which provide different levels of protection:
- Install the Defender for IoT embedded security agent with or without modifications. This option provides the highest level of enhanced security insights into device behavior and access.
diff --git a/articles/defender-for-iot/device-builders/security-agent-architecture.md b/articles/defender-for-iot/device-builders/security-agent-architecture.md
index 39a91ced02a4a..cd7b5a88648df 100644
--- a/articles/defender-for-iot/device-builders/security-agent-architecture.md
+++ b/articles/defender-for-iot/device-builders/security-agent-architecture.md
@@ -42,7 +42,7 @@ Defender for IoT offers different installer agents for 32 bit and 64-bit Windows
## Next steps
-In this article, you got a high-level overview about Defender for IoT Defender-IoT-micro-agent architecture, and the available installers.To continue getting started with Defender for IoT deployment, review the security agent authentication methods that are available.
+In this article, you got a high-level overview about Defender for IoT Defender-IoT-micro-agent architecture, and the available installers. To continue getting started with Defender for IoT deployment, review the security agent authentication methods that are available.
> [!div class="nextstepaction"]
> [Security agent authentication methods](concept-security-agent-authentication-methods.md)
diff --git a/articles/defender-for-iot/device-builders/troubleshoot-agent.md b/articles/defender-for-iot/device-builders/troubleshoot-agent.md
index 65d0e0fcd58e9..44406003b2be4 100644
--- a/articles/defender-for-iot/device-builders/troubleshoot-agent.md
+++ b/articles/defender-for-iot/device-builders/troubleshoot-agent.md
@@ -9,7 +9,7 @@ ms.date: 03/28/2022
This article explains how to solve potential problems in the security agent start-up process.
-Microsoft Defender for IoT agent self-starts immediately after installation. The agent start up process includes reading local configuration, connecting to Azure IoT Hub, and retrieving the remote twin configuration. Failure in any one of these steps may cause the security agent to fail.
+Microsoft Defender for IoT agent self-starts immediately after installation. The agent start-up process includes reading local configuration, connecting to Azure IoT Hub, and retrieving the remote twin configuration. Failure in any one of these steps may cause the security agent to fail.
In this troubleshooting guide you'll learn how to:
@@ -19,7 +19,7 @@ In this troubleshooting guide you'll learn how to:
## Validate if the security agent is running
-1. To validate is the security agent is running, wait a few minutes after installing the agent and and run the following command.
+1. To validate that the security agent is running, wait a few minutes after installing the agent and run the following command.
**C agent**
@@ -78,10 +78,10 @@ Defender for IoT agent encountered an error! Error in: {Error Code}, reason: {Er
| Error Code | Error sub code | Error details | Remediate C | Remediate C# |
|--|--|--|--|--|
| Local Configuration | Missing configuration | A configuration is missing in the local configuration file. The error message should state which key is missing. | Add the missing key to the /var/LocalConfiguration.json file, see the [cs-localconfig-reference](azure-iot-security-local-configuration-c.md) for details. | Add the missing key to the General.config file, see the [c#-localconfig-reference](azure-iot-security-local-configuration-csharp.md) for details. |
-| Local Configuration | Cant Parse Configuration | A configuration value can't be parsed. The error message should state which key can't be parsed. A configuration value cannot be parsed either because the value is not in the expected type, or the value is out of range. | Fix the value of the key in /var/LocalConfiguration.json file so that it matches the LocalConfiguration schema, see the [c#-localconfig-reference](azure-iot-security-local-configuration-csharp.md) for details. | Fix the value of the key in General.config file so that it matches the schema, see the [cs-localconfig-reference](azure-iot-security-local-configuration-c.md) for details. |
+| Local Configuration | Cant Parse Configuration | A configuration value can't be parsed. The error message should state which key can't be parsed. A configuration value can't be parsed either because the value isn't of the expected type, or because the value is out of range. | Fix the value of the key in /var/LocalConfiguration.json file so that it matches the LocalConfiguration schema, see the [c#-localconfig-reference](azure-iot-security-local-configuration-csharp.md) for details. | Fix the value of the key in General.config file so that it matches the schema, see the [cs-localconfig-reference](azure-iot-security-local-configuration-c.md) for details. |
| Local Configuration | File Format | Failed to parse configuration file. | The configuration file is corrupted, download the agent and re-install. | - |
-| Remote Configuration | Timeout | The agent could not fetch the azureiotsecurity module twin within the timeout period. | Make sure authentication configuration is correct and try again. | The agent could not fetch the azureiotsecurity module twin within timeout period. Make sure authentication configuration is correct and try again. |
-| Authentication | File Not Exist | The file in the given path does not exist. | Make sure the file exists in the given path or go to the **LocalConfiguration.json** file and change the **FilePath** configuration. | Make sure the file exists in the given path or go to the **Authentication.config** file and change the **filePath** configuration. |
+| Remote Configuration | Timeout | The agent could not fetch the azureiotsecurity module twin within the timeout period. | Make sure the authentication configuration is correct and try again. | The agent couldn't fetch the azureiotsecurity module twin within the timeout period. Make sure the authentication configuration is correct and try again. |
+| Authentication | File Not Exist | The file in the given path doesn't exist. | Make sure the file exists in the given path or go to the **LocalConfiguration.json** file and change the **FilePath** configuration. | Make sure the file exists in the given path or go to the **Authentication.config** file and change the **filePath** configuration. |
| Authentication | File Permission | The agent does not have sufficient permissions to open the file. | Give the **asciotagent** user read permissions on the file in the given path. | Make sure the file is accessible. |
| Authentication | File Format | The given file is not in the correct format. | Make sure the file is in the correct format. The supported file types are .pfx and .pem. | Make sure the file is a valid certificate file. |
| Authentication | Unauthorized | The agent was not able to authenticate against IoT Hub with the given credentials. | Validate authentication configuration in LocalConfiguration file, go through the authentication configuration and make sure all the details are correct, validate that the secret in the file matches the authenticated identity. | Validate authentication configuration in Authentication.config, go through the authentication configuration and make sure all the details are correct, then validate that the secret in the file matches the authenticated identity. |
diff --git a/articles/defender-for-iot/device-builders/troubleshoot-defender-micro-agent.md b/articles/defender-for-iot/device-builders/troubleshoot-defender-micro-agent.md
index e1da0f613726b..9e10cbf0b05b9 100644
--- a/articles/defender-for-iot/device-builders/troubleshoot-defender-micro-agent.md
+++ b/articles/defender-for-iot/device-builders/troubleshoot-defender-micro-agent.md
@@ -19,9 +19,9 @@ To view the status of the service:
systemctl status defender-iot-micro-agent.service
```
-1. Check that the service is stable by making sure it is `active`, and that the uptime in the process is appropriate.
+1. Check that the service is stable by making sure it's `active`, and that the uptime in the process is appropriate.
- :::image type="content" source="media/troubleshooting/active-running.png" alt-text="Ensure your service is stable by checking to see that it is active and the uptime is appropriate.":::
+ :::image type="content" source="media/troubleshooting/active-running.png" alt-text="Ensure your service is stable by checking to see that it's active and the uptime is appropriate.":::
If the service is listed as `inactive`, use the following command to start the service:
diff --git a/articles/defender-for-iot/organizations/how-to-connect-sensor-by-proxy.md b/articles/defender-for-iot/organizations/how-to-connect-sensor-by-proxy.md
index 2af897064deb4..e806005770844 100644
--- a/articles/defender-for-iot/organizations/how-to-connect-sensor-by-proxy.md
+++ b/articles/defender-for-iot/organizations/how-to-connect-sensor-by-proxy.md
@@ -25,7 +25,7 @@ The following diagram shows data going from Microsoft Defender for IoT to the Io
## Set up your system
-For this scenario we will be installing, and configuring the latest version of [Squid](http://www.squid-cache.org/) on an Ubuntu 18 server.
+For this scenario, we'll install and configure the latest version of [Squid](http://www.squid-cache.org/) on an Ubuntu 18 server.
> [!Note]
> Microsoft Defender for IoT does not offer support for Squid or any other proxy service.
diff --git a/articles/defender-for-iot/organizations/how-to-create-and-manage-users.md b/articles/defender-for-iot/organizations/how-to-create-and-manage-users.md
index 40af51155ee33..f1cbedfd0563d 100644
--- a/articles/defender-for-iot/organizations/how-to-create-and-manage-users.md
+++ b/articles/defender-for-iot/organizations/how-to-create-and-manage-users.md
@@ -101,7 +101,7 @@ This section describes how to define users. Cyberx, support, and administrator u
If users aren't active at the keyboard or mouse for a specific time, they're signed out of their session and must sign in again.
-When users haven't worked with their console mouse or keyboard for 30 minutes, a session sign out is forced.
+When users haven't worked with their console mouse or keyboard for 30 minutes, a session sign-out is forced.
This feature is enabled by default and on upgrade, but can be disabled. In addition, session counting times can be updated. Session times are defined in seconds. Definitions are applied per sensor and on-premises management console.
@@ -203,9 +203,9 @@ You can recover the password for the on-premises management console or the senso
**To recover the password for the on-premises management console, or the sensor**:
-1. On the sign in screen of either the on-premises management console or the sensor, select **Password recovery**. The **Password recovery** screen opens.
+1. On the sign-in screen of either the on-premises management console or the sensor, select **Password recovery**. The **Password recovery** screen opens.
- :::image type="content" source="media/how-to-create-and-manage-users/password-recovery.png" alt-text="Screenshot of the Select Password recovery from the sign in screen of either the on-premises management console, or the sensor.":::
+ :::image type="content" source="media/how-to-create-and-manage-users/password-recovery.png" alt-text="Screenshot of selecting Password recovery from the sign-in screen of either the on-premises management console or the sensor.":::
1. Select either **CyberX** or **Support** from the drop-down menu, and copy the unique identifier code.
diff --git a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/and-condition.png b/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/and-condition.png
deleted file mode 100644
index 29c23b95f5016..0000000000000
Binary files a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/and-condition.png and /dev/null differ
diff --git a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/custom-alert-rules.png b/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/custom-alert-rules.png
deleted file mode 100644
index 1ad0f82996146..0000000000000
Binary files a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/custom-alert-rules.png and /dev/null differ
diff --git a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/define-conditions.png b/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/define-conditions.png
deleted file mode 100644
index 7ae30883e68ef..0000000000000
Binary files a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/define-conditions.png and /dev/null differ
diff --git a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/export-logs-details.png b/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/export-logs-details.png
deleted file mode 100644
index 31c4e2add663f..0000000000000
Binary files a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/export-logs-details.png and /dev/null differ
diff --git a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/horizon-from-the-menu.png b/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/horizon-from-the-menu.png
deleted file mode 100644
index 1a3513e4795f1..0000000000000
Binary files a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/horizon-from-the-menu.png and /dev/null differ
diff --git a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/horizon-overview.png b/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/horizon-overview.png
deleted file mode 100644
index eaa43e7321376..0000000000000
Binary files a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/horizon-overview.png and /dev/null differ
diff --git a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/horizon-plugin.png b/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/horizon-plugin.png
deleted file mode 100644
index c9f041e8f2328..0000000000000
Binary files a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/horizon-plugin.png and /dev/null differ
diff --git a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/infrastructure.png b/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/infrastructure.png
deleted file mode 100644
index 48d522cf51505..0000000000000
Binary files a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/infrastructure.png and /dev/null differ
diff --git a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/ip-address-value.png b/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/ip-address-value.png
deleted file mode 100644
index 53f598fb302c6..0000000000000
Binary files a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/ip-address-value.png and /dev/null differ
diff --git a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/monitor-icon.png b/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/monitor-icon.png
deleted file mode 100644
index 01cb411563189..0000000000000
Binary files a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/monitor-icon.png and /dev/null differ
diff --git a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/plugins-menu.png b/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/plugins-menu.png
deleted file mode 100644
index c1788dde6cdfa..0000000000000
Binary files a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/plugins-menu.png and /dev/null differ
diff --git a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/rule-window.png b/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/rule-window.png
deleted file mode 100644
index 5fbe45f0334dd..0000000000000
Binary files a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/rule-window.png and /dev/null differ
diff --git a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/sample-rule-window.png b/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/sample-rule-window.png
deleted file mode 100644
index f837e356787cb..0000000000000
Binary files a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/sample-rule-window.png and /dev/null differ
diff --git a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/snmp-monitor.png b/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/snmp-monitor.png
deleted file mode 100644
index 87b2dc71468af..0000000000000
Binary files a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/snmp-monitor.png and /dev/null differ
diff --git a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/toggle-icon.png b/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/toggle-icon.png
deleted file mode 100644
index 1090f75389d56..0000000000000
Binary files a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/toggle-icon.png and /dev/null differ
diff --git a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/upload-a-plugin.png b/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/upload-a-plugin.png
deleted file mode 100644
index 3eef419524372..0000000000000
Binary files a/articles/defender-for-iot/organizations/media/how-to-manage-proprietary-protocols/upload-a-plugin.png and /dev/null differ
diff --git a/articles/defender-for-iot/organizations/media/how-to-prepare-your-network/azure-defender-for-iot-sensor-download-software-screen.png b/articles/defender-for-iot/organizations/media/how-to-prepare-your-network/azure-defender-for-iot-sensor-download-software-screen.png
deleted file mode 100644
index baf152a15daf8..0000000000000
Binary files a/articles/defender-for-iot/organizations/media/how-to-prepare-your-network/azure-defender-for-iot-sensor-download-software-screen.png and /dev/null differ
diff --git a/articles/defender-for-iot/organizations/media/how-to-prepare-your-network/corporate-hpe-proliant-dl360-v2.png b/articles/defender-for-iot/organizations/media/how-to-prepare-your-network/corporate-hpe-proliant-dl360-v2.png
deleted file mode 100644
index 708c2ed7e47ca..0000000000000
Binary files a/articles/defender-for-iot/organizations/media/how-to-prepare-your-network/corporate-hpe-proliant-dl360-v2.png and /dev/null differ
diff --git a/articles/defender-for-iot/organizations/media/how-to-prepare-your-network/defender-for-iot-iso-download-screen.png b/articles/defender-for-iot/organizations/media/how-to-prepare-your-network/defender-for-iot-iso-download-screen.png
deleted file mode 100644
index 320bd3a5f75f3..0000000000000
Binary files a/articles/defender-for-iot/organizations/media/how-to-prepare-your-network/defender-for-iot-iso-download-screen.png and /dev/null differ
diff --git a/articles/defender-for-iot/organizations/media/how-to-prepare-your-network/deployment-type-enterprise-for-azure-defender-for-iot-v2.png b/articles/defender-for-iot/organizations/media/how-to-prepare-your-network/deployment-type-enterprise-for-azure-defender-for-iot-v2.png
deleted file mode 100644
index cb6ef5238e7b3..0000000000000
Binary files a/articles/defender-for-iot/organizations/media/how-to-prepare-your-network/deployment-type-enterprise-for-azure-defender-for-iot-v2.png and /dev/null differ
diff --git a/articles/defender-for-iot/organizations/media/how-to-prepare-your-network/enterprise-and-smb-hpe-proliant-dl20-v2.png b/articles/defender-for-iot/organizations/media/how-to-prepare-your-network/enterprise-and-smb-hpe-proliant-dl20-v2.png
deleted file mode 100644
index b7983e9612a77..0000000000000
Binary files a/articles/defender-for-iot/organizations/media/how-to-prepare-your-network/enterprise-and-smb-hpe-proliant-dl20-v2.png and /dev/null differ
diff --git a/articles/defender-for-iot/organizations/media/how-to-prepare-your-network/office-ruggedized.png b/articles/defender-for-iot/organizations/media/how-to-prepare-your-network/office-ruggedized.png
deleted file mode 100644
index 10b48ec4d90f9..0000000000000
Binary files a/articles/defender-for-iot/organizations/media/how-to-prepare-your-network/office-ruggedized.png and /dev/null differ
diff --git a/articles/defender-for-iot/organizations/media/tutorial-install-components/defender-for-iot-management-console-sign-in-screen.png b/articles/defender-for-iot/organizations/media/tutorial-install-components/defender-for-iot-management-console-sign-in-screen.png
deleted file mode 100644
index a53339ef345bf..0000000000000
Binary files a/articles/defender-for-iot/organizations/media/tutorial-install-components/defender-for-iot-management-console-sign-in-screen.png and /dev/null differ
diff --git a/articles/defender-for-iot/organizations/media/tutorial-install-components/sensor-version-select-screen-v2.png b/articles/defender-for-iot/organizations/media/tutorial-install-components/sensor-version-select-screen-v2.png
deleted file mode 100644
index 79abb149442f3..0000000000000
Binary files a/articles/defender-for-iot/organizations/media/tutorial-install-components/sensor-version-select-screen-v2.png and /dev/null differ
diff --git a/articles/defender-for-iot/organizations/references-defender-for-iot-glossary.md b/articles/defender-for-iot/organizations/references-defender-for-iot-glossary.md
index 82ad81860634c..b5ad7078bb80d 100644
--- a/articles/defender-for-iot/organizations/references-defender-for-iot-glossary.md
+++ b/articles/defender-for-iot/organizations/references-defender-for-iot-glossary.md
@@ -44,11 +44,7 @@ This glossary provides a brief description of important terms and concepts for t
|--|--|--|
| **Data mining** | Generate comprehensive and granular reports about your network devices:
- **SOC incident response**: Reports in real time to help deal with immediate incident response. For example, a report can list devices that might need patching.
- **Forensics**: Reports based on historical data for investigative reports.
- **IT network integrity**: Reports that help improve overall network security. For example, a report can list devices with weak authentication credentials.
- **visibility**: Reports that cover all query items to view all baseline parameters of your network.
Save data-mining reports for read-only users to view. | **[Baseline](#b)
[Reports](#r)** |
| **Defender for IoT platform** | The Defender for IoT solution installed on Defender for IoT sensors and the on-premises management console. | **[Sensor](#s)
[On-premises management console](#o)** |
-| **Inventory device** | Defender for IoT will identify and classify devices as a single unique network device in the inventory for:
-1. Standalone IT/OT/IoT devices (w/ 1 or multiple NICs)
-1. Devices composed of multiple backplane components (including all racks/slots/modules)
-1. Devices acting as network infrastructure such as Switch/Router (w/ multiple NICs).
-Public internet IP addresses, multicast groups, and broadcast groups are not considered inventory devices. Devices that have been inactive for more than 60 days are classified as inactive Inventory devices.|
+| **Inventory device** | Defender for IoT will identify and classify devices as a single unique network device in the inventory for:
- Standalone IT/OT/IoT devices (w/ 1 or multiple NICs) - Devices composed of multiple backplane components (including all racks/slots/modules) - Devices acting as network infrastructure such as Switch/Router (w/ multiple NICs).
Public internet IP addresses, multicast groups, and broadcast groups are not considered inventory devices. Devices that have been inactive for more than 60 days are classified as inactive Inventory devices.|
| **Device map** | A graphical representation of network devices that Defender for IoT detects. It shows the connections between devices and information about each device. Use the map to:
- Retrieve and control critical device information.
- Analyze network slices.
- Export device details and summaries. | **[Purdue layer group](#p)** |
| **Device inventory - sensor** | The device inventory displays an extensive range of device attributes detected by Defender for IoT. Options are available to:
- Filter displayed information.
- Export this information to a CSV file.
- Import Windows registry details. | **[Group](#g)**
**[Device inventory- on-premises management console](#d)** |
| **Device inventory - on-premises management console** | Device information from connected sensors can be viewed from the on-premises management console in the device inventory. This gives users of the on-premises management console a comprehensive view of all network information. | **[Device inventory - sensor](#d)
[Device inventory - data integrator](#d)** |
diff --git a/articles/devops-project/index.yml b/articles/devops-project/index.yml
index ee5699a21e90c..1afd13588e319 100644
--- a/articles/devops-project/index.yml
+++ b/articles/devops-project/index.yml
@@ -1,7 +1,7 @@
### YamlMime:Landing
title: DevOps Starter documentation to deploy to Azure
-summary: DevOps Starter presents a simplified experience where you bring your existing code and Git repository, or choose from one of the sample applications to create a continuous integration (CI) and continuous delivery (CD) pipeline or workflow to Azure. DevOps Starter now supports creating GitHub actions workflows.
+summary: DevOps Starter presents a simplified experience where you bring your existing code and Git repository, or choose from one of the sample applications to create a continuous integration (CI) and continuous delivery (CD) pipeline or workflow to Azure. DevOps Starter now supports creating GitHub Actions workflows.
metadata:
title: DevOps Starter documentation
@@ -32,7 +32,7 @@ landingContent:
linkLists:
- linkListType: quickstart
links:
- - text: Node.js using GitHub actions
+ - text: Node.js using GitHub Actions
url: ./devops-starter-gh-nodejs.md
- text: .NET
url: ./azure-devops-project-aspnet-core.md
@@ -54,7 +54,7 @@ landingContent:
linkLists:
- linkListType: tutorial
links:
- - text: Deploy your app to Azure Web App using GitHub actions
+ - text: Deploy your app to Azure Web App using GitHub Actions
url: devops-starter-gh-web-app.md
- text: Bring your own code with GitHub
url: azure-devops-project-vms.md
diff --git a/articles/event-hubs/event-hubs-capture-overview.md b/articles/event-hubs/event-hubs-capture-overview.md
index bad083519e798..fa2a856826824 100644
--- a/articles/event-hubs/event-hubs-capture-overview.md
+++ b/articles/event-hubs/event-hubs-capture-overview.md
@@ -2,7 +2,7 @@
title: Capture streaming events - Azure Event Hubs | Microsoft Docs
description: This article provides an overview of the Capture feature that allows you to capture events streaming through Azure Event Hubs.
ms.topic: article
-ms.date: 02/16/2021
+ms.date: 05/31/2022
---
# Capture events through Azure Event Hubs in Azure Blob Storage or Azure Data Lake Storage
@@ -27,6 +27,9 @@ Event Hubs Capture enables you to specify your own Azure Blob storage account an
Captured data is written in [Apache Avro][Apache Avro] format: a compact, fast, binary format that provides rich data structures with inline schema. This format is widely used in the Hadoop ecosystem, Stream Analytics, and Azure Data Factory. More information about working with Avro is available later in this article.
+> [!NOTE]
+> When you use the no-code editor in the Azure portal, you can capture streaming data from Event Hubs in an Azure Data Lake Storage Gen2 account in the **Parquet** format. For more information, see [How to: capture data from Event Hubs in Parquet format](../stream-analytics/capture-event-hub-data-parquet.md?toc=%2Fazure%2Fevent-hubs%2Ftoc.json) and [Tutorial: capture Event Hubs data in Parquet format and analyze with Azure Synapse Analytics](../stream-analytics/event-hubs-parquet-capture-tutorial.md?toc=%2Fazure%2Fevent-hubs%2Ftoc.json).
+
### Capture windowing
Event Hubs Capture enables you to set up a window to control capturing. This window is a minimum size and time configuration with a "first wins policy," meaning that the first trigger encountered causes a capture operation. If you have a fifteen-minute, 100 MB capture window and send 1 MB per second, the size window triggers before the time window (100 MB accumulates in 100 seconds, well within the fifteen minutes). Each partition captures independently and writes a completed block blob at the time of capture, named for the time at which the capture interval was encountered. The storage naming convention is as follows:
@@ -41,13 +44,13 @@ The date values are padded with zeroes; an example filename might be:
https://mystorageaccount.blob.core.windows.net/mycontainer/mynamespace/myeventhub/0/2017/12/08/03/03/17.avro
```
-In the event that your Azure storage blob is temporarily unavailable, Event Hubs Capture will retain your data for the data retention period configured on your event hub and back fill the data once your storage account is available again.
+If your Azure storage blob is temporarily unavailable, Event Hubs Capture will retain your data for the data retention period configured on your event hub and backfill the data once your storage account is available again.
### Scaling throughput units or processing units
In the standard tier of Event Hubs, the traffic is controlled by [throughput units](event-hubs-scalability.md#throughput-units) and in the premium tier Event Hubs, it's controlled by [processing units](event-hubs-scalability.md#processing-units). Event Hubs Capture copies data directly from the internal Event Hubs storage, bypassing throughput unit or processing unit egress quotas and saving your egress for other processing readers, such as Stream Analytics or Spark.
-Once configured, Event Hubs Capture runs automatically when you send your first event, and continues running. To make it easier for your downstream processing to know that the process is working, Event Hubs writes empty files when there is no data. This process provides a predictable cadence and marker that can feed your batch processors.
+Once configured, Event Hubs Capture runs automatically when you send your first event, and continues running. To make it easier for your downstream processing to know that the process is working, Event Hubs writes empty files when there's no data. This process provides a predictable cadence and marker that can feed your batch processors.
## Setting up Event Hubs Capture
@@ -124,7 +127,7 @@ Apache Avro has complete Getting Started guides for [Java][Java] and [Python][Py
Event Hubs Capture is metered similarly to [throughput units](event-hubs-scalability.md#throughput-units) (standard tier) or [processing units](event-hubs-scalability.md#processing-units) (in premium tier): as an hourly charge. The charge is directly proportional to the number of throughput units or processing units purchased for the namespace. As throughput units or processing units are increased and decreased, Event Hubs Capture meters increase and decrease to provide matching performance. The meters occur in tandem. For pricing details, see [Event Hubs pricing](https://azure.microsoft.com/pricing/details/event-hubs/).
-Capture does not consume egress quota as it is billed separately.
+Capture doesn't consume egress quota, as it's billed separately.
## Integration with Event Grid
diff --git a/articles/expressroute/expressroute-locations-providers.md b/articles/expressroute/expressroute-locations-providers.md
index 6715021659c1c..d1c89f7b7c383 100644
--- a/articles/expressroute/expressroute-locations-providers.md
+++ b/articles/expressroute/expressroute-locations-providers.md
@@ -144,7 +144,7 @@ Azure national clouds are isolated from each other and from global commercial Az
| **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | n/a | Supported | AT&T NetBond, British Telecom, Equinix, Level 3 Communications, Verizon |
| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | n/a | Supported | Equinix, Internet2, Megaport, Verizon |
| **New York** | [Equinix NY5](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny5/) | n/a | Supported | Equinix, CenturyLink Cloud Connect, Verizon |
-| **Phoenix** | [CyrusOne Chandler](https://cyrusone.com/data-center-locations/arizona/phoenix-data-center/) | US Gov Arizona | Supported | AT&T NetBond, CenturyLink Cloud Connect, Megaport |
+| **Phoenix** | [CyrusOne Chandler](https://cyrusone.com/locations/arizona/phoenix-arizona-chandler/) | US Gov Arizona | Supported | AT&T NetBond, CenturyLink Cloud Connect, Megaport |
| **San Antonio** | [CyrusOne SA2](https://cyrusone.com/locations/texas/san-antonio-texas-ii/) | US Gov Texas | Supported | CenturyLink Cloud Connect, Megaport |
| **Silicon Valley** | [Equinix SV4](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv4/) | n/a | Supported | AT&T, Equinix, Level 3 Communications, Verizon |
| **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | n/a | Supported | Equinix, Megaport |
diff --git a/articles/expressroute/expressroute-locations.md b/articles/expressroute/expressroute-locations.md
index 4d4dc0be816d4..7515d382e9e97 100644
--- a/articles/expressroute/expressroute-locations.md
+++ b/articles/expressroute/expressroute-locations.md
@@ -319,7 +319,7 @@ If you are remote and do not have fiber connectivity or you want to explore othe
| **[Stream Data Centers]( https://www.streamdatacenters.com/products-services/network-cloud/ )** | Megaport |
| **[RagingWire Data Centers](https://www.ragingwire.com/wholesale/wholesale-data-centers-worldwide-nexcenters)** | IX Reach, Megaport, PacketFabric |
| **[T5 Datacenters](https://t5datacenters.com/)** | IX Reach |
-| **[vXchnge](https://www.vxchnge.com)** | IX Reach, Megaport |
+| **vXchnge** | IX Reach, Megaport |
## Connectivity through National Research and Education Networks (NREN)
diff --git a/articles/frontdoor/TOC.yml b/articles/frontdoor/TOC.yml
index 7f12f7b98734c..5691a01170734 100644
--- a/articles/frontdoor/TOC.yml
+++ b/articles/frontdoor/TOC.yml
@@ -93,6 +93,8 @@
href: how-to-configure-origin.md
- name: Add a custom domain
href: standard-premium/how-to-add-custom-domain.md
+ - name: Add a root or apex domain
+ href: front-door-how-to-onboard-apex-domain.md?pivots=front-door-standard-premium
- name: Configure HTTPS on a custom domain
href: standard-premium/how-to-configure-https-custom-domain.md
- name: Rules Engine
@@ -122,7 +124,7 @@
- name: Configure HTTPS on a custom domain
href: front-door-custom-domain-https.md
- name: Add a root or apex domain
- href: front-door-how-to-onboard-apex-domain.md
+ href: front-door-how-to-onboard-apex-domain.md?pivots=front-door-classic
- name: Set up a Rules Engine
href: front-door-tutorial-rules-engine.md
- name: Configure HTTP to HTTPS redirect
diff --git a/articles/frontdoor/front-door-how-to-onboard-apex-domain.md b/articles/frontdoor/front-door-how-to-onboard-apex-domain.md
index fe5eb7040215a..5bcbc84b33024 100644
--- a/articles/frontdoor/front-door-how-to-onboard-apex-domain.md
+++ b/articles/frontdoor/front-door-how-to-onboard-apex-domain.md
@@ -5,15 +5,27 @@ services: front-door
author: duongau
ms.service: frontdoor
ms.topic: how-to
-ms.date: 11/13/2020
+ms.date: 05/31/2022
ms.author: duau
-
+zone_pivot_groups: front-door-tiers
---
+
# Onboard a root or apex domain on your Front Door
+
+::: zone pivot="front-door-standard-premium"
+
+Azure Front Door supports adding a custom domain to a Front Door profile. You do this by adding a DNS TXT record for domain ownership validation and creating a CNAME record in your DNS configuration to route DNS queries for the custom domain to the Azure Front Door endpoint. For an apex domain, the DNS TXT record continues to be used for domain validation. However, the DNS protocol prevents the assignment of CNAME records at the zone apex. For example, if your domain is `contoso.com`, you can create CNAME records for `somelabel.contoso.com`, but you can't create a CNAME record for `contoso.com` itself. Front Door doesn't expose the frontend IP address associated with your Front Door profile, so you can't map your apex domain to an IP address if your intent is to onboard it to Azure Front Door.
+
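+For example, with an Azure DNS zone, the validation TXT record can be created from the command line. This is a minimal sketch; the resource group, zone name, and token value are placeholders:
+
+```azurecli
+# Create the _dnsauth TXT record that Front Door uses for domain ownership validation
+# (replace the placeholder values with your own).
+az network dns record-set txt add-record \
+  --resource-group MyResourceGroup \
+  --zone-name contoso.com \
+  --record-set-name _dnsauth \
+  --value "<validation-token-from-front-door>"
+```
+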
+::: zone-end
+
+::: zone pivot="front-door-classic"
+
Azure Front Door uses CNAME records to validate domain ownership for onboarding of custom domains. Front Door doesn't expose the frontend IP address associated with your Front Door profile. So you can't map your apex domain to an IP address if your intent is to onboard it to Azure Front Door.
The DNS protocol prevents the assignment of CNAME records at the zone apex. For example, if your domain is `contoso.com`, you can create CNAME records for `somelabel.contoso.com`, but you can't create a CNAME record for `contoso.com` itself. This restriction presents a problem for application owners who have load-balanced applications behind Azure Front Door. Since using a Front Door profile requires creation of a CNAME record, it isn't possible to point at the Front Door profile from the zone apex.
+::: zone-end
+
This problem can be resolved by using alias records in Azure DNS. Unlike CNAME records, alias records are created at the zone apex. Application owners can use it to point their zone apex record to a Front Door profile that has public endpoints. Application owners point to the same Front Door profile that's used for any other domain within their DNS zone. For example, `contoso.com` and `www.contoso.com` can point to the same Front Door profile.
Mapping your apex or root domain to your Front Door profile requires CNAME flattening or DNS chasing, a mechanism where the DNS provider recursively resolves the CNAME entry until it hits an IP address. Azure DNS supports this functionality for Front Door endpoints.
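For example, with Azure DNS, an alias A record at the zone apex can point directly at the Front Door resource. This is a minimal sketch; the resource group, zone name, and resource ID are placeholders:

```azurecli
# Create an alias A record at the zone apex (@) that targets the Front Door profile
# (replace the placeholder values with your own).
az network dns record-set a create \
  --resource-group MyResourceGroup \
  --zone-name contoso.com \
  --name "@" \
  --target-resource "<front-door-resource-id>"
```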
@@ -23,6 +35,71 @@ Mapping your apex or root domain to your Front Door profile basically requires C
You can use the Azure portal to onboard an apex domain on your Front Door and enable HTTPS on it by associating it with a certificate for TLS termination. Apex domains are also referred to as root or naked domains.
+::: zone pivot="front-door-standard-premium"
+
+## Onboard the custom domain to your Front Door
+
+1. Select **Domains** from under *Settings* on the left side pane for your Front Door profile and then select **+ Add** to add a new custom domain.
+
+ :::image type="content" source="./media/front-door-apex-domain/add-domain.png" alt-text="Screenshot of adding a new domain to Front Door profile.":::
+
+1. On the **Add a domain** page, enter information about the custom domain. You can choose Azure-managed DNS (recommended), or you can use your own DNS provider.
+
+ - **Azure-managed DNS** - select an existing DNS zone and for *Custom domain*, select **Add new**. Select **APEX domain** from the pop-up and then select **OK** to save.
+
+ :::image type="content" source="./media/front-door-apex-domain/add-custom-domain.png" alt-text="Screenshot of adding a new custom domain to Front Door profile.":::
+
+ - **Another DNS provider** - make sure the DNS provider supports CNAME flattening and follow the steps for [adding a custom domain](standard-premium/how-to-add-custom-domain.md#add-a-new-custom-domain).
+
+1. Select the **Pending** validation state. A new page will appear with DNS TXT record information needed to validate the custom domain. The TXT record is in the form of `_dnsauth.`.
+
+ :::image type="content" source="./media/front-door-apex-domain/pending-validation.png" alt-text="Screenshot of custom domain pending validation.":::
+
+ - **Azure DNS-based zone** - select the **Add** button and a new TXT record with the displayed record value will be created in the Azure DNS zone.
+
+ :::image type="content" source="./media/front-door-apex-domain/validate-custom-domain.png" alt-text="Screenshot of validate a new custom domain.":::
+
+ - If you're using another DNS provider, manually create a new TXT record of name `_dnsauth.` with the record value as shown on the page.
+
+1. Close the *Validate the custom domain* page and return to the *Domains* page for the Front Door profile. You should see the *Validation state* change from **Pending** to **Approved**. If not, wait up to 10 minutes for changes to reflect. If your validation doesn't get approved, make sure your TXT record is correct and, if you're using Azure DNS, that your name servers are configured correctly. You can also verify the TXT record with a DNS lookup, as shown in the sketch after the note below.
+
+ :::image type="content" source="./media/front-door-apex-domain/validation-approved.png" alt-text="Screenshot of new custom domain passing validation.":::
+
+1. Select **Unassociated** from the *Endpoint association* column to add the new custom domain to an endpoint.
+
+ :::image type="content" source="./media/front-door-apex-domain/unassociated-endpoint.png" alt-text="Screenshot of unassociated custom domain to an endpoint.":::
+
+1. On the *Associate endpoint and route* page, select the **Endpoint** and **Route** you'd like to associate the domain with. Then select **Associate** to complete this step.
+
+ :::image type="content" source="./media/front-door-apex-domain/associate-endpoint.png" alt-text="Screenshot of associated endpoint and route page for a domain.":::
+
+1. Under the *DNS state* column, select **CNAME record is currently not detected** to add the alias record to your DNS provider.
+
+ - **Azure DNS** - select the **Add** button on the page.
+
+ :::image type="content" source="./media/front-door-apex-domain/cname-record.png" alt-text="Screenshot of add or update CNAME record page.":::
+
+ - **A DNS provider that supports CNAME flattening** - you must manually enter the alias record name.
+
+1. Once the alias record is created and the custom domain is associated with the Azure Front Door endpoint, traffic starts flowing.
+
+ :::image type="content" source="./media/front-door-apex-domain/cname-record-added.png" alt-text="Screenshot of completed APEX domain configuration.":::
+
+> [!NOTE]
+> The **DNS state** column is meant for the CNAME mapping check. Because an apex domain doesn't support a CNAME record, the DNS state shows 'CNAME record is currently not detected' even after you add the alias record to the DNS provider.
+
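+To confirm the validation TXT record has propagated (the lookup sketch referenced in the validation step above), query it directly; `contoso.com` is a placeholder:
+
+```bash
+# The returned value should match the record value shown on the validation page.
+dig +short TXT _dnsauth.contoso.com
+```
+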
+## Enable HTTPS on your custom domain
+
+Follow the guidance for [configuring HTTPS for your custom domain](standard-premium/how-to-configure-https-custom-domain.md) to enable HTTPS for your apex domain.
+
+## Managed certificate renewal for apex domain
+
+Front Door automatically rotates managed certificates only if the domain CNAME record points to the Front Door endpoint. If the apex domain doesn't have a CNAME record pointing to the Front Door endpoint, auto-rotation for the managed certificate fails until domain ownership is revalidated. The validation column becomes `Pending-revalidation` 45 days before the managed certificate expires. Select the **Pending-revalidation** link, and then select the **Regenerate** button to regenerate the TXT token. Then add the TXT token to your DNS provider's settings.
+
+::: zone-end
+
+::: zone pivot="front-door-classic"
+
## Create an alias record for zone apex
1. Open **Azure DNS** configuration for the domain to be onboarded.
@@ -73,6 +150,8 @@ You can use the Azure portal to onboard an apex domain on your Front Door and en
> [!WARNING]
> Ensure that you have created appropriate routing rules for your apex domain or added the domain to existing routing rules.
+::: zone-end
+
## Next steps
- Learn how to [create a Front Door](quickstart-create-front-door.md).
diff --git a/articles/frontdoor/media/front-door-apex-domain/add-custom-domain.png b/articles/frontdoor/media/front-door-apex-domain/add-custom-domain.png
new file mode 100644
index 0000000000000..db8021780dae7
Binary files /dev/null and b/articles/frontdoor/media/front-door-apex-domain/add-custom-domain.png differ
diff --git a/articles/frontdoor/media/front-door-apex-domain/add-domain.png b/articles/frontdoor/media/front-door-apex-domain/add-domain.png
new file mode 100644
index 0000000000000..2d84664437fb1
Binary files /dev/null and b/articles/frontdoor/media/front-door-apex-domain/add-domain.png differ
diff --git a/articles/frontdoor/media/front-door-apex-domain/associate-endpoint.png b/articles/frontdoor/media/front-door-apex-domain/associate-endpoint.png
new file mode 100644
index 0000000000000..55fb5f4809b82
Binary files /dev/null and b/articles/frontdoor/media/front-door-apex-domain/associate-endpoint.png differ
diff --git a/articles/frontdoor/media/front-door-apex-domain/cname-record-added.png b/articles/frontdoor/media/front-door-apex-domain/cname-record-added.png
new file mode 100644
index 0000000000000..fd4d27792f884
Binary files /dev/null and b/articles/frontdoor/media/front-door-apex-domain/cname-record-added.png differ
diff --git a/articles/frontdoor/media/front-door-apex-domain/cname-record.png b/articles/frontdoor/media/front-door-apex-domain/cname-record.png
new file mode 100644
index 0000000000000..a1683b83c594a
Binary files /dev/null and b/articles/frontdoor/media/front-door-apex-domain/cname-record.png differ
diff --git a/articles/frontdoor/media/front-door-apex-domain/pending-validation.png b/articles/frontdoor/media/front-door-apex-domain/pending-validation.png
new file mode 100644
index 0000000000000..b35c244c35f94
Binary files /dev/null and b/articles/frontdoor/media/front-door-apex-domain/pending-validation.png differ
diff --git a/articles/frontdoor/media/front-door-apex-domain/unassociated-endpoint.png b/articles/frontdoor/media/front-door-apex-domain/unassociated-endpoint.png
new file mode 100644
index 0000000000000..a3be0ee9c8640
Binary files /dev/null and b/articles/frontdoor/media/front-door-apex-domain/unassociated-endpoint.png differ
diff --git a/articles/frontdoor/media/front-door-apex-domain/validate-custom-domain.png b/articles/frontdoor/media/front-door-apex-domain/validate-custom-domain.png
new file mode 100644
index 0000000000000..b5b269fdd2d34
Binary files /dev/null and b/articles/frontdoor/media/front-door-apex-domain/validate-custom-domain.png differ
diff --git a/articles/frontdoor/media/front-door-apex-domain/validation-approved.png b/articles/frontdoor/media/front-door-apex-domain/validation-approved.png
new file mode 100644
index 0000000000000..191c2b7063e9a
Binary files /dev/null and b/articles/frontdoor/media/front-door-apex-domain/validation-approved.png differ
diff --git a/articles/governance/blueprints/overview.md b/articles/governance/blueprints/overview.md
index e8f8ffdc61afb..93bc210acceb6 100644
--- a/articles/governance/blueprints/overview.md
+++ b/articles/governance/blueprints/overview.md
@@ -16,7 +16,7 @@ Just as a blueprint allows an engineer or an architect to sketch a project's des
Azure Blueprints enables cloud architects and central information technology groups to define a
repeatable set of Azure resources that implements and adheres to an organization's standards,
patterns, and requirements. Azure Blueprints makes it possible for development teams to rapidly
-build and stand up new environments with trust they're building within organizational compliance
+build and start up new environments with trust that they're building within organizational compliance
with a set of built-in components, such as networking, to speed up development and delivery.
Blueprints are a declarative way to orchestrate the deployment of various resource templates and
diff --git a/articles/governance/policy/samples/guest-configuration-baseline-docker.md b/articles/governance/policy/samples/guest-configuration-baseline-docker.md
new file mode 100644
index 0000000000000..9ae80c6532ad0
--- /dev/null
+++ b/articles/governance/policy/samples/guest-configuration-baseline-docker.md
@@ -0,0 +1,110 @@
+---
+title: Reference - Azure Policy guest configuration baseline for Docker
+description: Details of the Docker baseline on Azure implemented through Azure Policy guest configuration.
+ms.date: 05/17/2022
+ms.topic: reference
+ms.custom: generated
+---
+# Docker security baseline
+
+This article details the configuration settings for Docker hosts as applicable in the following
+implementations:
+
+- **\[Preview\]: Linux machines should meet requirements for the Azure security baseline for Docker hosts**
+- **Vulnerabilities in security configuration on your machines should be remediated** in Azure
+ Security Center
+
+For more information, see [Understand the guest configuration feature of Azure Policy](../concepts/guest-configuration.md) and
+[Overview of the Azure Security Benchmark (V2)](../../../security/benchmarks/overview.md).
+
+## General security controls
+
+|Name (CCEID) |Details |Remediation check |
+|---|---|---|
+|Docker inventory Information (0.0) |Description: None |None |
+|Ensure a separate partition for containers has been created (1.01) |Description: Docker depends on /var/lib/docker as the default directory where all Docker related files, including the images, are stored. This directory might fill up fast and soon Docker and the host could become unusable. So, it's advisable to create a separate partition (logical volume) for storing Docker files. |For new installations, create a separate partition for /var/lib/docker mount point. For systems that were previously installed, use the Logical Volume Manager (LVM) to create partitions. |
|Ensure docker version is up-to-date (1.03) |Description: Using an up-to-date Docker version helps keep your host secure. |Follow the Docker documentation to upgrade your version. |
+|Ensure auditing is configured for the docker daemon (1.05) |Description: Apart from auditing your regular Linux file system and system calls, audit Docker daemon as well. Docker daemon runs with root privileges. It's thus necessary to audit its activities and usage. |Add the line `-w /usr/bin/docker -k docker` into the /etc/audit/audit.rules file. Then, restart the audit daemon by running the command: `service auditd restart` |
+|Ensure auditing is configured for Docker files and directories - /var/lib/docker (1.06) |Description: Apart from auditing your regular Linux file system and system calls, audit all Docker related files and directories. Docker daemon runs with root privileges. Its behavior depends on some key files and directories. /var/lib/docker is one such directory. It holds all the information about containers. It must be audited. |Add the line `-w /var/lib/docker -k docker` into the /etc/audit/audit.rules file. Then, restart the audit daemon by running the command: `service auditd restart` |
+|Ensure auditing is configured for Docker files and directories - /etc/docker (1.07) |Description: Apart from auditing your regular Linux file system and system calls, audit all Docker related files and directories. Docker daemon runs with root privileges. Its behavior depends on some key files and directories. /etc/docker is one such directory. It holds various certificates and keys used for TLS communication between Docker daemon and Docker client. It must be audited. |Add the line `-w /etc/docker -k docker` into the /etc/audit/audit.rules file. Then, restart the audit daemon by running the command: `service auditd restart` |
+|Ensure auditing is configured for Docker files and directories - docker.service (1.08) |Description: Apart from auditing your regular Linux file system and system calls, audit all Docker related files and directories. Docker daemon runs with root privileges. Its behavior depends on some key files and directories. Docker.service is one such file. The docker.service file might be present if the daemon parameters have been changed by an administrator. It holds various parameters for Docker daemon. It must be audited, if applicable. |Find out the 'docker.service' file location by running: `systemctl show -p FragmentPath docker.service` and add the line `-w {docker.service file location} -k docker` into the /etc/audit/audit.rules file where `{docker.service file location}` is the file path you have found earlier. Restart the audit daemon by running the command: `service auditd restart` |
+|Ensure auditing is configured for Docker files and directories - docker.socket (1.09) |Description: Apart from auditing your regular Linux file system and system calls, audit all Docker related files and directories. Docker daemon runs with root privileges. Its behavior depends on some key files and directories. Docker.socket is one such file. It holds various parameters for Docker daemon socket. It must be audited, if applicable. |Find out the 'docker.socket' file location by running: `systemctl show -p FragmentPath docker.socket` and add the line `-w {docker.socket file location} -k docker` into the /etc/audit/audit.rules file where `{docker.socket file location}` is the file path you have found earlier. Restart the audit daemon by running the command: `service auditd restart` |
+|Ensure auditing is configured for Docker files and directories - /etc/default/docker (1.10) |Description: Apart from auditing your regular Linux file system and system calls, audit all Docker related files and directories. Docker daemon runs with root privileges. Its behavior depends on some key files and directories. /etc/default/docker is one such file. It holds various parameters for Docker daemon. It must be audited, if applicable. |Add the line `-w /etc/default/docker -k docker` into the /etc/audit/audit.rules file. Then, restart the audit daemon by running the command: `service auditd restart` |
+|Ensure auditing is configured for Docker files and directories - /etc/docker/daemon.json (1.11) |Description: Apart from auditing your regular Linux file system and system calls, audit all Docker related files and directories. Docker daemon runs with root privileges. Its behavior depends on some key files and directories. /etc/docker/daemon.json is one such file. It holds various parameters for Docker daemon. It must be audited, if applicable. |Add the line `-w /etc/docker/daemon.json -k docker` into the /etc/audit/audit.rules file. Then, restart the audit daemon by running the command: `service auditd restart` |
+|Ensure auditing is configured for Docker files and directories - /usr/bin/docker-containerd (1.12) |Description: Apart from auditing your regular Linux file system and system calls, audit all Docker related files and directories. Docker daemon runs with root privileges. Its behavior depends on some key files and directories. /usr/bin/docker-containerd is one such file. Docker now relies on containerd and runC to spawn containers. It must be audited, if applicable. |Add the line `-w /usr/bin/docker-containerd -k docker` into the /etc/audit/audit.rules file. Then, restart the audit daemon by running the command: `service auditd restart` |
+|Ensure auditing is configured for Docker files and directories - /usr/bin/docker-runc (1.13) |Description: Apart from auditing your regular Linux file system and system calls, audit all Docker related files and directories. Docker daemon runs with root privileges. Its behavior depends on some key files and directories. /usr/bin/docker-runc is one such file. Docker now relies on containerd and runC to spawn containers. It must be audited, if applicable. |Add the line `-w /usr/bin/docker-runc -k docker` into the /etc/audit/audit.rules file. Then, restart the audit daemon by running the command: `service auditd restart` |
|Ensure network traffic is restricted between containers on the default bridge (2.01) |Description: Inter-container communication would be disabled on the default network bridge. If any communication between containers on the same host is desired, it needs to be explicitly defined using container linking, or custom networks have to be defined. |Run Docker in daemon mode and pass `--icc=false` as an argument, or set the 'icc' setting to false in the daemon.json file. Alternatively, you can follow the Docker documentation and create a custom network, and only join containers that need to communicate to that custom network. The `--icc` parameter only applies to the default Docker bridge; if custom networks are used, the approach of segmenting networks should be adopted instead. |
|Ensure the logging level is set to 'info'. (2.02) |Description: Setting an appropriate log level configures the Docker daemon to log events that you would want to review later. A base log level of `info` and above would capture all logs except debug logs. Unless required, you shouldn't run the Docker daemon at the `debug` log level. |Run the Docker daemon as below: ```dockerd --log-level info``` |
|Ensure Docker is allowed to make changes to iptables (2.03) |Description: If you choose to disallow it, Docker will never make changes to your system `iptables` rules. If allowed, the Docker server automatically makes the needed changes to iptables based on how you choose your networking options for the containers. It's recommended to let the Docker server make changes to `iptables` automatically, to avoid networking misconfiguration that might hamper the communication between containers and to the outside world. Additionally, it saves you the hassle of updating `iptables` every time you choose to run the containers or modify networking options. |Don't run the Docker daemon with the `--iptables=false` parameter. For example, don't start the Docker daemon as below: ```dockerd --iptables=false``` |
|Ensure insecure registries aren't used (2.04) |Description: You shouldn't use any insecure registries in the production environment. Insecure registries can be tampered with, leading to possible compromise of your production system. |Remove the `--insecure-registry` flag from the dockerd start command. |
|The 'aufs' storage driver shouldn't be used by the docker daemon (2.05) |Description: The 'aufs' storage driver is the oldest storage driver. It's based on a Linux kernel patch set that is unlikely to be merged into the main Linux kernel. The aufs driver is also known to cause some serious kernel crashes, and it only has legacy support from Docker. Most importantly, aufs isn't a supported driver in many Linux distributions using the latest Linux kernels. |Replace the 'aufs' storage driver with a different storage driver; we recommend using 'overlay2'. |
+|Ensure TLS authentication for Docker daemon is configured (2.06) |Description: By default, Docker daemon binds to a non-networked Unix socket and runs with `root` privileges. If you change the default docker daemon binding to a TCP port or any other Unix socket, anyone with access to that port or socket can have full access to Docker daemon and in turn to the host system. Hence, you shouldn't bind the Docker daemon to another IP/port or a Unix socket. If you must expose the Docker daemon via a network socket, configure TLS authentication for the daemon and Docker Swarm APIs (if using). This would restrict the connections to your Docker daemon over the network to a limited number of clients who could successfully authenticate over TLS. |Follow the steps mentioned in the Docker documentation or other references. |
|Ensure the default ulimit is configured appropriately (2.07) |Description: If the ulimits aren't set properly, the desired resource control might not be achieved and might even make the system unusable. |Run Docker in daemon mode and pass `--default-ulimit` as an argument with the respective ulimits as appropriate in your environment. Alternatively, you can set a specific resource limitation on each container separately by using the `--ulimit` argument with the respective ulimits as appropriate in your environment. |
+|Enable user namespace support (2.08) |Description: The Linux kernel user namespace support in Docker daemon provides additional security for the Docker host system. It allows a container to have a unique range of user and group IDs which are outside the traditional user and group range utilized by the host system. For example, the root user will have expected administrative privilege inside the container but can effectively be mapped to an unprivileged UID on the host system. |Please consult Docker documentation for various ways in which this can be configured depending upon your requirements. Your steps might also vary based on platform - For example, on Red Hat, sub-UIDs and sub-GIDs mapping creation does not work automatically. You might have to create your own mapping. However, the high-level steps are as below: **Step 1:** Ensure that the files `/etc/subuid` and `/etc/subgid` exist.```touch /etc/subuid /etc/subgid```**Step 2:** Start the docker daemon with `--userns-remap` flag ```dockerd --userns-remap=default``` |
|Ensure base device size isn't changed until needed (2.10) |Description: Increasing the base device size allows all future images and containers to be of the new base device size. This may cause a denial of service by causing the file system to become over-allocated or full. |Remove the `--storage-opt dm.basesize` flag from the dockerd start command until you need it. |
|Ensure that authorization for Docker client commands is enabled (2.11) |Description: Docker's out-of-the-box authorization model is all or nothing. Any user with permission to access the Docker daemon can run any Docker client command. The same is true for callers using Docker's remote API to contact the daemon. If you require greater access control, you can create authorization plugins and add them to your Docker daemon configuration. Using an authorization plugin, a Docker administrator can configure granular access policies for managing access to the Docker daemon. Third-party integrations of Docker may implement their own authorization models to require authorization with the Docker daemon outside of Docker's native authorization plugin (for example, Kubernetes, Cloud Foundry, OpenShift). |**Step 1**: Install/Create an authorization plugin. **Step 2**: Configure the authorization policy as desired. **Step 3**: Start the docker daemon as below: ```dockerd --authorization-plugin=``` |
|Ensure centralized and remote logging is configured (2.12) |Description: Centralized and remote logging ensures that all important log records are safe despite catastrophic events. Docker supports various such logging drivers. Use the one that suits your environment best. |**Step 1**: Set up the desired log driver by following its documentation. **Step 2**: Start the docker daemon with that logging driver. For example, ```dockerd --log-driver=syslog --log-opt syslog-address=tcp://192.xxx.xxx.xxx``` |
|Ensure live restore is Enabled (2.14) |Description: Availability is one of the components of the security triad. Setting the `--live-restore` flag in the docker daemon ensures that container execution isn't interrupted when the docker daemon isn't available. This also means that it's now easier to update and patch the docker daemon without execution downtime. |Run Docker in daemon mode and pass `--live-restore` as an argument. For example: ```dockerd --live-restore``` |
|Ensure Userland Proxy is Disabled (2.15) |Description: Docker engine provides two mechanisms for forwarding ports from the host to containers: hairpin NAT and a userland proxy. In most circumstances, the hairpin NAT mode is preferred, as it improves performance and makes use of native Linux iptables functionality instead of an additional component. Where hairpin NAT is available, the userland proxy should be disabled on startup to reduce the attack surface of the installation. |Run the Docker daemon as below: ```dockerd --userland-proxy=false``` |
|Ensure experimental features are avoided in production (2.17) |Description: Experimental is now a runtime docker daemon flag instead of a separate build. Passing `--experimental` as a runtime flag to the docker daemon activates experimental features. Experimental is now considered a stable release, but with a couple of features that might not have been tested and don't have guaranteed API stability. |Don't pass `--experimental` as a runtime parameter to the docker daemon. |
|Ensure containers are restricted from acquiring new privileges. (2.18) |Description: A process can set the `no_new_priv` bit in the kernel. It persists across fork, clone, and execve. The `no_new_priv` bit ensures that the process and its child processes don't gain any additional privileges via suid or sgid bits. This way numerous dangerous operations become a lot less dangerous, because there's no possibility of subverting privileged binaries. Setting this at the daemon level ensures that by default all new containers are restricted from acquiring new privileges. |Run the Docker daemon as below: ```dockerd --no-new-privileges``` |
+|Ensure that docker.service file ownership is set to root:root. (3.01) |Description: `docker.service` file contains sensitive parameters that may alter the behavior of Docker daemon. Hence, it should be owned and group-owned by `root` to maintain the integrity of the file. |**Step 1**: Find out the file location: ```systemctl show -p FragmentPath docker.service``` **Step 2**: If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the ownership and group ownership for the file to `root`. For example, ```chown root:root /usr/lib/systemd/system/docker.service``` |
|Ensure that docker.service file permissions are set to 644 or more restrictive (3.02) |Description: `docker.service` file contains sensitive parameters that may alter the behavior of the Docker daemon. Hence, it shouldn't be writable by any user other than `root`, to maintain the integrity of the file. |**Step 1**: Find out the file location: ```systemctl show -p FragmentPath docker.service``` **Step 2**: If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the file permissions to `644`. For example, ```chmod 644 /usr/lib/systemd/system/docker.service``` |
+|Ensure that docker.socket file ownership is set to root:root. (3.03) |Description: `docker.socket` file contains sensitive parameters that may alter the behavior of Docker remote API. Hence, it should be owned and group-owned by `root` to maintain the integrity of the file. |**Step 1**: Find out the file location: ```systemctl show -p FragmentPath docker.socket``` **Step 2**: If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the ownership and group ownership for the file to `root`. For example, ```chown root:root /usr/lib/systemd/system/docker.socket``` |
+|Ensure that docker.socket file permissions are set to `644` or more restrictive (3.04) |Description: `docker.socket` file contains sensitive parameters that may alter the behavior of Docker daemon. Hence, it shouldn't be writable by any user other than `root` to maintain the integrity of the file. |**Step 1**: Find out the file location: ```systemctl show -p FragmentPath docker.socket``` **Step 2**: If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the file permissions to `644`. For example, ```chmod 644 /usr/lib/systemd/system/docker.socket``` |
+|Ensure that /etc/docker directory ownership is set to `root:root`. (3.05) |Description: /etc/docker directory contains certificates and keys in addition to various sensitive files. Hence, it should be owned and group-owned by `root` to maintain the integrity of the directory. | ```chown root:root /etc/docker``` This would set the ownership and group-ownership for the directory to `root`. |
+|Ensure that /etc/docker directory permissions are set to `755` or more restrictive (3.06) |Description: /etc/docker directory contains certificates and keys in addition to various sensitive files. Hence, it should only be writable by `root` to maintain the integrity of the directory. | ```chmod 755 /etc/docker``` This would set the permissions for the directory to `755`. |
+|Ensure that registry certificate file ownership is set to root:root (3.07) |Description: /etc/docker/certs.d/ directory contains Docker registry certificates. These certificate files must be owned and group-owned by `root` to maintain the integrity of the certificates. | ```chown root:root /etc/docker/certs.d/<registry-name>/*``` This would set the ownership and group-ownership for the registry certificate files to `root`. |
+|Ensure that registry certificate file permissions are set to `444` or more restrictive (3.08) |Description: /etc/docker/certs.d/ directory contains Docker registry certificates. These certificate files must have permissions of `444` to maintain the integrity of the certificates. | ```chmod 444 /etc/docker/certs.d/<registry-name>/*``` This would set the permissions for registry certificate files to `444`. |
+|Ensure that TLS CA certificate file ownership is set to root:root (3.09) |Description: The TLS CA certificate file should be protected from any tampering. It's used to authenticate the Docker server based on the given CA certificate. Hence, it must be owned and group-owned by `root` to maintain the integrity of the CA certificate. |```chown root:root <path-to-TLS-CA-certificate-file>``` This would set the ownership and group-ownership for the TLS CA certificate file to `root`. |
+|Ensure that TLS CA certificate file permissions are set to `444` or more restrictive (3.10) |Description: The TLS CA certificate file should be protected from any tampering. It's used to authenticate the Docker server based on the given CA certificate. Hence, it must have permissions of `444` to maintain the integrity of the CA certificate. | ```chmod 444 <path-to-TLS-CA-certificate-file>``` This would set the file permissions of the TLS CA file to `444`. |
+|Ensure that Docker server certificate file ownership is set to root:root (3.11) |Description: The Docker server certificate file should be protected from any tampering. It's used to authenticate the Docker server based on the given server certificate. Hence, it must be owned and group-owned by `root` to maintain the integrity of the certificate. | ```chown root:root <path-to-Docker-server-certificate-file>``` This would set the ownership and group-ownership for the Docker server certificate file to `root`. |
+|Ensure that Docker server certificate file permissions are set to `444` or more restrictive (3.12) |Description: The Docker server certificate file should be protected from any tampering. It's used to authenticate the Docker server based on the given server certificate. Hence, it must have permissions of `444` to maintain the integrity of the certificate. | ```chmod 444 <path-to-Docker-server-certificate-file>``` This would set the file permissions of the Docker server file to `444`. |
+|Ensure that Docker server certificate key file ownership is set to root:root (3.13) |Description: The Docker server certificate key file should be protected from any tampering or unneeded reads. It holds the private key for the Docker server certificate. Hence, it must be owned and group-owned by `root` to maintain the integrity of the Docker server certificate. | ```chown root:root <path-to-Docker-server-certificate-key-file>``` This would set the ownership and group-ownership for the Docker server certificate key file to `root`. |
+|Ensure that Docker server certificate key file permissions are set to 400 (3.14) |Description: The Docker server certificate key file should be protected from any tampering or unneeded reads. It holds the private key for the Docker server certificate. Hence, it must have permissions of `400` to maintain the integrity of the Docker server certificate. | ```chmod 400 <path-to-Docker-server-certificate-key-file>``` This would set the Docker server certificate key file permissions to `400`. |
+|Ensure that Docker socket file ownership is set to root:docker (3.15) |Description: Docker daemon runs as `root`. The default Unix socket hence must be owned by `root`. If any other user or process owns this socket, then it might be possible for that non-privileged user or process to interact with Docker daemon. Also, such a non-privileged user or process might interact with containers. This is neither secure nor desired behavior. Additionally, the Docker installer creates a Unix group called `docker`. You can add users to this group, and then those users would be able to read and write to default Docker Unix socket. The membership to the `docker` group is tightly controlled by the system administrator. If any other group owns this socket, then it might be possible for members of that group to interact with Docker daemon. Also, such a group might not be as tightly controlled as the `docker` group. This is neither secure nor desired behavior. Hence, the default Docker Unix socket file must be owned by `root` and group-owned by `docker` to maintain the integrity of the socket file. | ```chown root:docker /var/run/docker.sock``` This would set the ownership to `root` and group-ownership to `docker` for default Docker socket file. |
+|Ensure that Docker socket file permissions are set to `660` or more restrictive (3.16) |Description: Only `root` and members of the `docker` group should be allowed to read and write to the default Docker Unix socket. Hence, the Docker socket file must have permissions of `660` or more restrictive. | ```chmod 660 /var/run/docker.sock``` This would set the file permissions of the Docker socket file to `660`. |
+|Ensure that daemon.json file ownership is set to root:root (3.17) |Description: `daemon.json` file contains sensitive parameters that may alter the behavior of docker daemon. Hence, it should be owned and group-owned by `root` to maintain the integrity of the file. | ```chown root:root /etc/docker/daemon.json``` This would set the ownership and group-ownership for the file to `root`. |
+|Ensure that daemon.json file permissions are set to 644 or more restrictive (3.18) |Description: `daemon.json` file contains sensitive parameters that may alter the behavior of docker daemon. Hence, it should be writable only by `root` to maintain the integrity of the file. | ```chmod 644 /etc/docker/daemon.json``` This would set the file permissions for this file to `644`. |
+|Ensure that /etc/default/docker file ownership is set to root:root (3.19) |Description: `/etc/default/docker` file contains sensitive parameters that may alter the behavior of docker daemon. Hence, it should be owned and group-owned by `root` to maintain the integrity of the file. | ```chown root:root /etc/default/docker``` This would set the ownership and group-ownership for the file to `root`. |
+|Ensure that /etc/default/docker file permissions are set to 644 or more restrictive (3.20) |Description: /etc/default/docker file contains sensitive parameters that may alter the behavior of docker daemon. Hence, it should be writable only by `root` to maintain the integrity of the file. | ```chmod 644 /etc/default/docker``` This would set the file permissions for this file to `644`. |
+|Ensure a user for the container has been created (4.01) |Description: It's a good practice to run the container as a non-root user, if possible. Though user namespace mapping is now available, if a user is already defined in the container image, the container is run as that user by default and specific user namespace remapping isn't required. |Ensure that the Dockerfile for the container image contains: `USER {username or ID}` where username or ID refers to the user that could be found in the container base image. If there's no specific user created in the container base image, then add a `useradd` command to add the specific user before the `USER` instruction. |
+|Ensure HEALTHCHECK instructions have been added to the container image (4.06) |Description: Availability is one of the pillars of the security triad. Adding a `HEALTHCHECK` instruction to your container image ensures that the docker engine periodically checks the running container instances against that instruction to ensure that the instances are still working. Based on the reported health status, the docker engine could then exit non-working containers and instantiate new ones. |Follow Docker documentation and rebuild your container image with the `HEALTHCHECK` instruction. |
+|Ensure either SELinux or AppArmor is enabled as appropriate (5.01-2) |Description: AppArmor protects the Linux OS and applications from various threats by enforcing a security policy, also known as an AppArmor profile. You can create your own AppArmor profile for containers or use Docker's default AppArmor profile. This would enforce security policies on the containers as defined in the profile. SELinux provides a Mandatory Access Control (MAC) system that greatly augments the default Discretionary Access Control (DAC) model. You can thus add an extra layer of safety by enabling SELinux on your Linux host, if applicable. |After enabling the relevant Mandatory Access Control plugin for your distro, start containers with the appropriate security option. For example, run ```docker run --interactive --tty --security-opt="apparmor:PROFILENAME" centos /bin/bash``` for AppArmor, or ```docker run --interactive --tty --security-opt label=level:TopSecret centos /bin/bash``` for SELinux. |
+|Ensure Linux Kernel Capabilities are restricted within containers (5.03) |Description: Docker supports the addition and removal of capabilities, allowing the use of a non-default profile. This may make Docker more secure through capability removal, or less secure through the addition of capabilities. It's thus recommended to remove all capabilities except those explicitly required for your container process. For example, capabilities such as the below are usually not needed for a container process: ```NET_ADMIN SYS_ADMIN SYS_MODULE``` |Execute the below command to add needed capabilities: ```$> docker run --cap-add={"Capability 1","Capability 2"}``` For example, ```docker run --interactive --tty --cap-add={"NET_ADMIN","SYS_ADMIN"} centos:latest /bin/bash``` Execute the below command to drop unneeded capabilities: ```$> docker run --cap-drop={"Capability 1","Capability 2"}``` For example, ```docker run --interactive --tty --cap-drop={"SETUID","SETGID"} centos:latest /bin/bash``` Alternatively, you may choose to drop all capabilities and add only the needed ones: ```$> docker run --cap-drop=all --cap-add={"Capability 1","Capability 2"}``` For example, ```docker run --interactive --tty --cap-drop=all --cap-add={"NET_ADMIN","SYS_ADMIN"} centos:latest /bin/bash``` |
+|Ensure privileged containers aren't used (5.04) |Description: The `--privileged` flag gives all capabilities to the container, and it also lifts all the limitations enforced by the device cgroup controller. In other words, the container can then do almost everything that the host can do. This flag exists to allow special use-cases, like running Docker within Docker. |Don't run containers with the `--privileged` flag. For example, don't start a container as below: ```docker run --interactive --tty --privileged centos /bin/bash``` |
+|Ensure sensitive host system directories aren't mounted on containers (5.05) |Description: If sensitive directories are mounted in read-write mode, it would be possible to make changes to files within those sensitive directories. The changes might have security implications or introduce unwarranted changes that could put the Docker host in a compromised state. |Don't mount sensitive host directories on containers, especially in read-write mode. |
+|Ensure the host's network namespace isn't shared (5.09) |Description: Sharing the host's network namespace with a container is potentially dangerous. It allows the container process to open low-numbered ports like any other `root` process. It also allows the container to access network services like D-bus on the Docker host. Thus, a container process can potentially do unexpected things such as shutting down the Docker host. You shouldn't use this option. |Don't pass the `--net=host` option when starting the container. |
+|Ensure memory usage for container is limited (5.10) |Description: By default, a container can use all of the memory on the host. You can use the memory limit mechanism to prevent a denial of service arising from one container consuming all of the host's resources such that other containers on the same host can't perform their intended functions. Having no limit on memory can lead to issues where one container can easily make the whole system unstable and as a result unusable. |Run the container with only as much memory as required. Always run the container using the `--memory` argument. For example, you could run a container as below: ```docker run --interactive --tty --memory 256m centos /bin/bash``` In the above example, the container is started with a memory limit of 256 MB. Note: The output of the below command returns values in scientific notation if memory limits are in place. ```docker inspect --format='{{.Config.Memory}}' 7c5a2d4c7fe0``` For example, if the memory limit is set to `256 MB` for the above container instance, the output of the above command would be `2.68435456e+08` and NOT 256m. You should convert this value using a scientific calculator or programmatic methods. |
+|Ensure the container's root filesystem is mounted as read only (5.12) |Description: Enabling this option forces containers at runtime to explicitly define their data writing strategy to persist or not persist their data. This also reduces security attack vectors since the container instance's filesystem can't be tampered with or written to unless it has explicit read-write permissions on its filesystem folder and directories. |Add a `--read-only` flag at a container's runtime to enforce the container's root filesystem being mounted as read only. ```docker run --read-only``` Enabling the `--read-only` option at a container's runtime should be used by administrators to force a container's executable processes to only write container data to explicit storage locations during the container's runtime. Examples of explicit storage locations during a container's runtime include, but aren't limited to: 1. Use the `--tmpfs` option to mount a temporary file system for non-persistent data writes. ```docker run --interactive --tty --read-only --tmpfs "/run" --tmpfs "/tmp" centos /bin/bash``` 2. Enabling Docker `rw` mounts at a container's runtime to persist container data directly on the Docker host filesystem. ```docker run --interactive --tty --read-only -v /opt/app/data:/run/app/data:rw centos /bin/bash``` 3. Utilizing Docker shared-storage volume plugins for Docker data volume to persist container data. ```docker volume create -d convoy --opt o=size=20GB my-named-volume``` ```docker run --interactive --tty --read-only -v my-named-volume:/run/app/data centos /bin/bash``` 4. Transmitting container data outside of the Docker host during the container's runtime so that the data persists externally. Examples include hosted databases, network file shares, and APIs. |
+|Ensure incoming container traffic is bound to a specific host interface (5.13) |Description: If you have multiple network interfaces on your host machine, the container can accept connections on the exposed ports on any network interface. This might not be desired and may not be secured. Many times a particular interface is exposed externally and services such as intrusion detection, intrusion prevention, firewall, load balancing, etc. are run on those interfaces to screen incoming public traffic. Hence, you shouldn't accept incoming connections on any interface. You should only allow incoming connections from a particular external interface. |Bind the container port to a specific host interface on the desired host port. For example, ```docker run --detach --publish 10.2.3.4:49153:80 nginx``` In the example above, the container port `80` is bound to host port `49153` and would accept incoming connections only on the `10.2.3.4` external interface. |
+|Ensure 'on-failure' container restart policy is set to '5' or lower (5.14) |Description: If you indefinitely keep trying to start the container, it could possibly lead to a denial of service on the host. It could be an easy way to do a distributed denial of service attack especially if you have many containers on the same host. Additionally, ignoring the exit status of the container and `always` attempting to restart the container means the root cause behind terminated containers goes uninvestigated. If a container gets terminated, you should investigate the reason behind it instead of just attempting to restart it indefinitely. Thus, it's recommended to use the `on-failure` restart policy and limit it to a maximum of `5` restart attempts. |If you want a container to be restarted on its own then, for example, you could start the container as below: ```docker run --detach --restart=on-failure:5 nginx``` |
+|Ensure the host's process namespace isn't shared (5.15) |Description: The PID namespace provides separation of processes. The PID namespace removes the view of the system processes, and allows process IDs to be reused, including PID `1`. If the host's PID namespace is shared with the container, it would basically allow processes within the container to see all of the processes on the host system. This breaks the benefit of process level isolation between the host and the containers. Someone having access to the container can eventually know all the processes running on the host system and can even kill the host system processes from within the container. This can be catastrophic. Hence, don't share the host's process namespace with the containers. |Don't start a container with the `--pid=host` argument. For example, don't start a container as below: ```docker run --interactive --tty --pid=host centos /bin/bash``` |
+|Ensure the host's IPC namespace isn't shared (5.16) |Description: The IPC namespace provides separation of IPC between the host and containers. If the host's IPC namespace is shared with the container, it would basically allow processes within the container to see all of the IPC on the host system. This breaks the benefit of IPC level isolation between the host and the containers. Someone having access to the container can eventually manipulate the host IPC. This can be catastrophic. Hence, don't share the host's IPC namespace with the containers. |Don't start a container with the `--ipc=host` argument. For example, don't start a container as below: ```docker run --interactive --tty --ipc=host centos /bin/bash``` |
+|Ensure host devices aren't directly exposed to containers (5.17) |Description: The `--device` option exposes the host devices to the containers and consequently, the containers can directly access such host devices. You would not require the container to run in `privileged` mode to access and manipulate the host devices. By default, the container will be able to read, write and mknod these devices. Additionally, it's possible for containers to remove block devices from the host. Hence, don't expose host devices to containers directly. If you must expose a host device to a container, use the sharing permissions appropriately: `r` (read only), `w` (writable), `m` (mknod allowed). |Don't directly expose the host devices to containers. If you must expose the host devices to containers, use the correct set of permissions. For example, don't start a container as below: ```docker run --interactive --tty --device=/dev/tty0:/dev/tty0:rwm --device=/dev/temp_sda:/dev/temp_sda:rwm centos bash``` Instead, share the host device with correct permissions: ```docker run --interactive --tty --device=/dev/tty0:/dev/tty0:rw --device=/dev/temp_sda:/dev/temp_sda:r centos bash``` |
+|Ensure mount propagation mode isn't set to shared (5.19) |Description: A shared mount is replicated at all mounts and the changes made at any mount point are propagated to all mounts. Mounting a volume in shared mode does not restrict any other container from mounting and making changes to that volume. This might be catastrophic if the mounted volume is sensitive to changes. Don't set mount propagation mode to shared unless needed. |Don't mount volumes in shared mode propagation. For example, don't start a container as below: ```docker run --volume=/hostPath:/containerPath:shared``` |
+|Ensure the host's UTS namespace isn't shared (5.20) |Description: Sharing the UTS namespace with the host provides full permission to the container to change the hostname of the host. This is insecure and shouldn't be allowed. |Don't start a container with `--uts=host` argument. For example, don't start a container as below: ```docker run --rm --interactive --tty --uts=host rhel7.2``` |
+|Ensure cgroup usage is confirmed (5.24) |Description: System administrators typically define cgroups under which containers are supposed to run. Even if cgroups aren't explicitly defined by the system administrators, containers run under `docker` cgroup by default. At run-time, it's possible to attach to a different cgroup other than the one that was expected to be used. This usage should be monitored and confirmed. By attaching to a different cgroup than the one that is expected, excess permissions and resources might be granted to the container and thus, can prove to be unsafe. |Don't use `--cgroup-parent` option in `docker run` command unless needed. |
+|Ensure the container is restricted from acquiring additional privileges (5.25) |Description: A process can set the `no_new_priv` bit in the kernel. It persists across fork, clone and execve. The `no_new_priv` bit ensures that the process or its child processes don't gain any additional privileges via suid or sgid bits. This way numerous dangerous operations become a lot less dangerous because there's no possibility of subverting privileged binaries. |For example, you should start your container as below: ```docker run --rm -it --security-opt=no-new-privileges ubuntu bash``` |
+|Ensure container health is checked at runtime (5.26) |Description: Availability is one of the pillars of the security triad. If the container image you're using does not have a pre-defined `HEALTHCHECK` instruction, use the `--health-cmd` parameter to check container health at runtime. Based on the reported health status, you could take necessary actions. |Run the container using `--health-cmd` and the other parameters. For example, ```docker run -d --health-cmd='stat /etc/passwd || exit 1' nginx``` |
+|Ensure the PIDs cgroup limit is used (5.28) |Description: Attackers could launch a fork bomb with a single command inside the container. This fork bomb can crash the entire system and would require a restart of the host to make the system functional again. The PIDs cgroup `--pids-limit` parameter prevents this kind of attack by restricting the number of forks that can happen inside a container at a given time. |Use the `--pids-limit` flag while launching the container with an appropriate value. For example, ```docker run -it --pids-limit 100 <image-name>``` In the above example, the number of processes allowed to run at any given time is set to 100. After a limit of 100 concurrently running processes is reached, docker would restrict any new process creation. |
+|Ensure Docker's default bridge docker0 isn't used (5.29) |Description: Docker connects virtual interfaces created in the bridge mode to a common bridge called `docker0`. This default networking model is vulnerable to ARP spoofing and MAC flooding attacks since there's no filtering applied. |Follow Docker documentation and set up a user-defined network. Run all the containers in the defined network. |
+|Ensure the host's user namespace isn't shared (5.30) |Description: User namespaces ensure that a root process inside the container is mapped to a non-root process outside the container. Sharing the user namespace of the host with the container thus doesn't isolate users on the host from users in the containers. |Don't share user namespaces between host and containers. For example, don't run a container as below: ```docker run --rm -it --userns=host ubuntu bash``` |
+|Ensure the Docker socket isn't mounted inside any containers (5.31) |Description: If the docker socket is mounted inside a container, it would allow processes running within the container to execute docker commands, which effectively allows for full control of the host. |Ensure that no containers mount `docker.sock` as a volume. |
+|Ensure swarm services are bound to a specific host interface (7.03) |Description: When a swarm is initialized, the default value for the `--listen-addr` flag is `0.0.0.0:2377`, which means that the swarm services will listen on all interfaces on the host. If a host has multiple network interfaces, this may be undesirable as it may expose the docker swarm services to networks that aren't involved in the operation of the swarm. By passing a specific IP address to `--listen-addr`, you can limit this exposure to a specific network interface. |Remediation requires re-initialization of the swarm, specifying a specific interface for the `--listen-addr` parameter. |
+|Ensure data exchanged between containers on different nodes on the overlay network is encrypted (7.04) |Description: By default, data exchanged between containers on different nodes on the overlay network isn't encrypted. This could potentially expose traffic between the container nodes. |Create the overlay network with the `--opt encrypted` flag. |
+|Ensure swarm manager is run in auto-lock mode (7.06) |Description: When Docker restarts, both the TLS key used to encrypt communication among swarm nodes, and the key used to encrypt and decrypt Raft logs on disk, are loaded into each manager node's memory. You should protect the mutual TLS encryption key and the key used to encrypt and decrypt Raft logs at rest. This protection could be enabled by initializing the swarm with the `--autolock` flag. With `--autolock` enabled, when Docker restarts, you must unlock the swarm first, using a key encryption key generated by Docker when the swarm was initialized. |If you're initializing a swarm, use the below command: ```docker swarm init --autolock``` If you want to set `--autolock` on an existing swarm manager node, use the below command: ```docker swarm update --autolock``` |
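+
+As an illustration only, several of the daemon-level recommendations above (2.15, 2.18, 3.17, and 3.18) can be applied together through the Docker daemon configuration file rather than individual command-line flags. The following sketch assumes a systemd-based host and the default `/etc/docker/daemon.json` location:
+
+```bash
+# Write the daemon-level hardening settings to the Docker daemon config file.
+sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
+{
+  "userland-proxy": false,
+  "no-new-privileges": true
+}
+EOF
+
+# Apply the ownership and permissions recommended by 3.17 and 3.18.
+sudo chown root:root /etc/docker/daemon.json
+sudo chmod 644 /etc/docker/daemon.json
+
+# Restart the daemon so the new settings take effect.
+sudo systemctl restart docker
+```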
+
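+Similarly, many of the container-level recommendations above can be combined in a single `docker run` invocation. The following is a sketch, not a definitive configuration; the `nginx` image, the published IP address, and the capability choice are illustrative placeholders to adapt to your workload:
+
+```bash
+# Start a container that applies several of the 5.x recommendations at once:
+# read-only root filesystem with tmpfs mounts for scratch data, memory and
+# PID limits, no new privileges, a minimal capability set, a bounded restart
+# policy, a runtime health check, and a specific host interface binding.
+docker run --detach \
+  --read-only --tmpfs /run --tmpfs /tmp \
+  --memory 256m --pids-limit 100 \
+  --security-opt no-new-privileges \
+  --cap-drop all --cap-add NET_BIND_SERVICE \
+  --restart on-failure:5 \
+  --health-cmd 'stat /etc/passwd || exit 1' \
+  --publish 10.2.3.4:49153:80 \
+  nginx
+```
+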
+> [!NOTE]
+> Availability of specific Azure Policy guest configuration settings may vary in Azure Government
+> and other national clouds.
+
+## Next steps
+
+Additional articles about Azure Policy and guest configuration:
+
+- [Understand the guest configuration feature of Azure Policy](../concepts/guest-configuration.md).
+- [Regulatory Compliance](../concepts/regulatory-compliance.md) overview.
+- Review other examples at [Azure Policy samples](./index.md).
+- Review [Understanding policy effects](../concepts/effects.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
diff --git a/articles/governance/policy/toc.yml b/articles/governance/policy/toc.yml
index 2df67a115ffef..a4d29eb9bb41d 100644
--- a/articles/governance/policy/toc.yml
+++ b/articles/governance/policy/toc.yml
@@ -165,6 +165,8 @@
href: ./samples/gov-nist-sp-800-171-r2.md
- name: Compute security baselines
items:
+ - name: Docker host security baseline
+ href: ./samples/guest-configuration-baseline-docker.md
- name: Linux security baseline
href: ./samples/guest-configuration-baseline-linux.md
- name: Windows security baseline
diff --git a/articles/healthcare-apis/fhir/get-started-with-fhir.md b/articles/healthcare-apis/fhir/get-started-with-fhir.md
index b0b59d7065ea5..90de02ff340ce 100644
--- a/articles/healthcare-apis/fhir/get-started-with-fhir.md
+++ b/articles/healthcare-apis/fhir/get-started-with-fhir.md
@@ -58,7 +58,7 @@ You can obtain an Azure AD access token using PowerShell, Azure CLI, REST CCI, o
#### Access using existing tools
- [Postman](../fhir/use-postman.md)
-- [Rest Client](../fhir/using-rest-client.md)
+- [REST Client](../fhir/using-rest-client.md)
- [cURL](../fhir/using-curl.md)
#### Load data
diff --git a/articles/iot-develop/troubleshoot-embedded-device-quickstarts.md b/articles/iot-develop/troubleshoot-embedded-device-quickstarts.md
index 5b7f17a0ed43f..8cdb3f04205c2 100644
--- a/articles/iot-develop/troubleshoot-embedded-device-quickstarts.md
+++ b/articles/iot-develop/troubleshoot-embedded-device-quickstarts.md
@@ -1,11 +1,12 @@
---
title: Troubleshooting the Azure RTOS embedded device quickstarts
description: Steps to help you troubleshoot common issues when using the Azure RTOS embedded device quickstarts
-author: JimacoMS4
-ms.author: v-jbrannian
+author: timlt
+ms.author: timlt
ms.service: iot-develop
ms.topic: troubleshooting
ms.date: 06/10/2021
+ms.custom: contperf-fy22q4
---
# Troubleshooting the Azure RTOS embedded device quickstarts
@@ -37,7 +38,7 @@ This issue can occur when you attempt to build the project. It's the result of t
### Description
-The issue can occur because the path to an object file exceeds the default maximum path length in Windows. Examine the build output for a message similar to the following:
+The issue can occur because the path to an object file exceeds the default maximum path length in Windows. Examine the build output for a message similar to the following example:
```output
-- Configuring done
@@ -62,13 +63,13 @@ CMake Warning in C:/embedded quickstarts/areallyreallyreallylongpath/getting-sta
You can try one of the following options to resolve this error:
* Clone the repository into a directory with a shorter path and try again.
-* Follow the instructions in [Maximum Path Length Limitation](/windows/win32/fileio/maximum-file-path-limitation) to enable long paths in Windows 10, version 1607 and later.
+* Follow the instructions in [Maximum Path Length Limitation](/windows/win32/fileio/maximum-file-path-limitation) to enable long paths in Windows 11 and Windows 10, version 1607 and later.
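+
+As a sketch of the second option above (verify it against the linked article before relying on it), long paths can be enabled from an elevated command prompt by setting the `LongPathsEnabled` registry value:
+
+```cmd
+:: Enable Win32 long paths (requires administrator privileges).
+reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v LongPathsEnabled /t REG_DWORD /d 1 /f
+```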
## Issue: Device can't connect to Iot hub
### Description
-The issue can occur after you've created Azure resources, and flashed your device. When you try to connect your newly flashed device to Azure IoT, you see a console message like the following:
+The issue can occur after you've created Azure resources, and flashed your device. When you try to connect your newly flashed device to Azure IoT, you see a console message like the following example:
```output
Unable to resolve DNS for MQTT Server
@@ -82,7 +83,7 @@ Unable to resolve DNS for MQTT Server
### Description
-After you flash a device that uses a Wi-Fi connection and try to connect to your Wi-Fi network, you get an error message that Wi-Fi is unable to connect.
+After you flash a device that uses a Wi-Fi connection, you get an error message that Wi-Fi is unable to connect.
### Resolution
@@ -93,7 +94,7 @@ After you flash a device that uses a Wi-Fi connection and try to connect to your
### Description
-You can't complete the process of flashing your device. You'll know this if you experience any of the following symptoms:
+You can't complete the process of flashing your device. The following symptoms indicate that flashing is incomplete:
* The **.bin* image file that you built doesn't copy to the device.
* The utility that you're using to flash the device gives a warning or error.
@@ -110,7 +111,7 @@ You can't complete the process of flashing your device. You'll know this if you
### Description
-After you flash your device and connect it to your computer, you get a message like the following in your terminal software:
+After you flash your device and connect it to your computer, you get output like the following message in your terminal software:
```output
Failed to initialize the port.
@@ -148,7 +149,7 @@ After you flash your device successfully and connect it to your computer, you se
### Description
-After you flash your device and connect it to your computer, you get a repeated message like the following in your terminal window:
+After you flash your device and connect it to your computer, you get output like the following message in your terminal window:
```output
Failed to publish temperature
@@ -162,7 +163,7 @@ Failed to publish temperature
### Description
-Because [Defender for IoT module](/defender-for-iot/device-builders/iot-security-azure-rtos) is enabled by default from the device end, you might observe extra messages that are caused by that.
+Because [Defender for IoT module](/azure/defender-for-iot/device-builders/iot-security-azure-rtos) is enabled by default from the device end, you might observe extra messages in the output.
### Resolution
@@ -174,4 +175,4 @@ If after reviewing the issues in this article, you still can't monitor your devi
* [STMicroelectronics B-L475E-IOT01](https://www.st.com/content/st_com/en/products/evaluation-tools/product-evaluation-tools/mcu-mpu-eval-tools/stm32-mcu-mpu-eval-tools/stm32-discovery-kits/b-l475e-iot01a.html)
* [NXP MIMXRT1060-EVK](https://www.nxp.com/design/development-boards/i-mx-evaluation-and-development-boards/mimxrt1060-evk-i-mx-rt1060-evaluation-kit:MIMXRT1060-EVK)
-* [Microchip ATSAME54-XPro](https://www.microchip.com/developmenttools/productdetails/atsame54-xpro)
\ No newline at end of file
+* [Microchip ATSAME54-XPro](https://www.microchip.com/developmenttools/productdetails/atsame54-xpro)
diff --git a/articles/iot-dps/media/quick-create-simulated-device-x509/add-individual-enrollment-with-cert.png b/articles/iot-dps/media/quick-create-simulated-device-x509/add-individual-enrollment-with-cert.png
new file mode 100644
index 0000000000000..c024cef9fbaa3
Binary files /dev/null and b/articles/iot-dps/media/quick-create-simulated-device-x509/add-individual-enrollment-with-cert.png differ
diff --git a/articles/iot-dps/media/quick-create-simulated-device-x509/cert-generator-java.png b/articles/iot-dps/media/quick-create-simulated-device-x509/cert-generator-java.png
deleted file mode 100644
index 33d5bcbd897d4..0000000000000
Binary files a/articles/iot-dps/media/quick-create-simulated-device-x509/cert-generator-java.png and /dev/null differ
diff --git a/articles/iot-dps/media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png b/articles/iot-dps/media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png
index f1274e551b4bc..d818bebc1f75a 100644
Binary files a/articles/iot-dps/media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png and b/articles/iot-dps/media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png differ
diff --git a/articles/iot-dps/media/quick-create-simulated-device-x509/copy-id-scope.png b/articles/iot-dps/media/quick-create-simulated-device-x509/copy-id-scope.png
index db380c52ff746..ea2623baa8c0b 100644
Binary files a/articles/iot-dps/media/quick-create-simulated-device-x509/copy-id-scope.png and b/articles/iot-dps/media/quick-create-simulated-device-x509/copy-id-scope.png differ
diff --git a/articles/iot-dps/media/quick-create-simulated-device-x509/device-enrollment.PNG b/articles/iot-dps/media/quick-create-simulated-device-x509/device-enrollment.PNG
deleted file mode 100644
index 4319870f59690..0000000000000
Binary files a/articles/iot-dps/media/quick-create-simulated-device-x509/device-enrollment.PNG and /dev/null differ
diff --git a/articles/iot-dps/media/quick-create-simulated-device-x509/hub-registration-csharp.png b/articles/iot-dps/media/quick-create-simulated-device-x509/hub-registration-csharp.png
deleted file mode 100644
index e0d09cb026bae..0000000000000
Binary files a/articles/iot-dps/media/quick-create-simulated-device-x509/hub-registration-csharp.png and /dev/null differ
diff --git a/articles/iot-dps/media/quick-create-simulated-device-x509/hub-registration-java.png b/articles/iot-dps/media/quick-create-simulated-device-x509/hub-registration-java.png
deleted file mode 100644
index 2240382ffa5dc..0000000000000
Binary files a/articles/iot-dps/media/quick-create-simulated-device-x509/hub-registration-java.png and /dev/null differ
diff --git a/articles/iot-dps/media/quick-create-simulated-device-x509/hub-registration-nodejs.png b/articles/iot-dps/media/quick-create-simulated-device-x509/hub-registration-nodejs.png
deleted file mode 100644
index 726f0f45bf086..0000000000000
Binary files a/articles/iot-dps/media/quick-create-simulated-device-x509/hub-registration-nodejs.png and /dev/null differ
diff --git a/articles/iot-dps/media/quick-create-simulated-device-x509/hub-registration-python.png b/articles/iot-dps/media/quick-create-simulated-device-x509/hub-registration-python.png
deleted file mode 100644
index f878050860871..0000000000000
Binary files a/articles/iot-dps/media/quick-create-simulated-device-x509/hub-registration-python.png and /dev/null differ
diff --git a/articles/iot-dps/media/quick-create-simulated-device-x509/hub-registration.png b/articles/iot-dps/media/quick-create-simulated-device-x509/hub-registration.png
deleted file mode 100644
index 3a843166e142a..0000000000000
Binary files a/articles/iot-dps/media/quick-create-simulated-device-x509/hub-registration.png and /dev/null differ
diff --git a/articles/iot-dps/media/quick-create-simulated-device-x509/individual-enrollment-after-registration.png b/articles/iot-dps/media/quick-create-simulated-device-x509/individual-enrollment-after-registration.png
new file mode 100644
index 0000000000000..93c65dc9d80cc
Binary files /dev/null and b/articles/iot-dps/media/quick-create-simulated-device-x509/individual-enrollment-after-registration.png differ
diff --git a/articles/iot-dps/media/quick-create-simulated-device-x509/iot-hub-registration.png b/articles/iot-dps/media/quick-create-simulated-device-x509/iot-hub-registration.png
new file mode 100644
index 0000000000000..2dd105965a183
Binary files /dev/null and b/articles/iot-dps/media/quick-create-simulated-device-x509/iot-hub-registration.png differ
diff --git a/articles/iot-dps/quick-create-simulated-device-x509.md b/articles/iot-dps/quick-create-simulated-device-x509.md
index d0c6cd4173c2a..0d74fa848199d 100644
--- a/articles/iot-dps/quick-create-simulated-device-x509.md
+++ b/articles/iot-dps/quick-create-simulated-device-x509.md
@@ -3,7 +3,7 @@ title: Quickstart - Provision an X.509 certificate simulated device to Microsoft
description: Learn how to provision a simulated device that authenticates with an X.509 certificate in the Azure IoT Hub Device Provisioning Service
author: kgremban
ms.author: kgremban
-ms.date: 09/07/2021
+ms.date: 05/31/2022
ms.topic: quickstart
ms.service: iot-dps
services: iot-dps
@@ -27,19 +27,26 @@ This quickstart demonstrates a solution for a Windows-based workstation. However
* Complete the steps in [Set up IoT Hub Device Provisioning Service with the Azure portal](./quick-setup-auto-provision.md).
+::: zone pivot="programming-language-ansi-c"
+
The following prerequisites are for a Windows development environment. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/devbox_setup.md) in the SDK documentation.
-::: zone pivot="programming-language-ansi-c"
+* Install [Visual Studio](https://visualstudio.microsoft.com/vs/) 2022 with the ['Desktop development with C++'](/cpp/ide/using-the-visual-studio-ide-for-cpp-desktop-development) workload enabled. Visual Studio 2015, Visual Studio 2017, and Visual Studio 2019 are also supported. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/devbox_setup.md) in the SDK documentation.
-* Install [Visual Studio](https://visualstudio.microsoft.com/vs/) 2019 with the ['Desktop development with C++'](/cpp/ide/using-the-visual-studio-ide-for-cpp-desktop-development) workload enabled. Visual Studio 2015 and Visual Studio 2017 are also supported. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/devbox_setup.md) in the SDK documentation.
+* Install the latest [CMake build system](https://cmake.org/download/). Make sure you check the option that adds the CMake executable to your path.
+
+ >[!IMPORTANT]
+ >Confirm that the Visual Studio prerequisites (Visual Studio and the 'Desktop development with C++' workload) are installed on your machine, **before** starting the `CMake` installation. Once the prerequisites are in place, and the download is verified, install the CMake build system. Also, be aware that older versions of the CMake build system fail to generate the solution file used in this article. Make sure to use the latest version of CMake.
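+
+   For example, you can confirm which CMake version is on your path before generating the solution (a quick sanity check, not a required step in this quickstart):
+
+   ```cmd
+   cmake --version
+   ```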
::: zone-end
::: zone pivot="programming-language-csharp"
+The following prerequisites are for a Windows development environment. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/doc/devbox_setup.md) in the SDK documentation.
+
* Install [.NET SDK 6.0](https://dotnet.microsoft.com/download) or later on your Windows-based machine. You can use the following command to check your version.
- ```bash
+ ```cmd
dotnet --info
```
@@ -47,22 +54,24 @@ The following prerequisites are for a Windows development environment. For Linux
::: zone pivot="programming-language-nodejs"
-* Install [Node.js v4.0 or above](https://nodejs.org) on your machine.
+The following prerequisites are for a Windows development environment. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-node/blob/main/doc/node-devbox-setup.md) in the SDK documentation.
-* Install [OpenSSL](https://www.openssl.org/) on your machine and is added to the environment variables accessible to the command window. This library can either be built and installed from source or downloaded and installed from a [third party](https://wiki.openssl.org/index.php/Binaries) such as [this](https://sourceforge.net/projects/openssl/).
+* Install [Node.js v4.0 or above](https://nodejs.org) on your machine.
::: zone-end
::: zone pivot="programming-language-python"
-* [Python 3.6 or later](https://www.python.org/downloads/) on your machine.
+The following prerequisites are for a Windows development environment.
-* Install [OpenSSL](https://www.openssl.org/) on your machine and is added to the environment variables accessible to the command window. This library can either be built and installed from source or downloaded and installed from a [third party](https://wiki.openssl.org/index.php/Binaries) such as [this](https://sourceforge.net/projects/openssl/).
+* [Python 3.6 or later](https://www.python.org/downloads/) on your machine.
::: zone-end
::: zone pivot="programming-language-java"
+The following prerequisites are for a Windows development environment. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-java/blob/main/doc/java-devbox-setup.md) in the SDK documentation.
+
* Install the [Java SE Development Kit 8](/azure/developer/java/fundamentals/java-support-on-azure) or later on your machine.
* Download and install [Maven](https://maven.apache.org/install.html).
@@ -71,26 +80,30 @@ The following prerequisites are for a Windows development environment. For Linux
* Install the latest version of [Git](https://git-scm.com/download/). Make sure that Git is added to the environment variables accessible to the command window. See [Software Freedom Conservancy's Git client tools](https://git-scm.com/download/) for the latest version of `git` tools to install, which includes *Git Bash*, the command-line app that you can use to interact with your local Git repository.
+* Make sure [OpenSSL](https://www.openssl.org/) is installed on your machine. On Windows, your installation of Git includes an installation of OpenSSL. You can access OpenSSL from the Git Bash prompt. To verify that OpenSSL is installed, open a Git Bash prompt and enter `openssl version`.
+
+ >[!NOTE]
+   > Unless you're familiar with OpenSSL and already have it installed on your Windows machine, we recommend using OpenSSL from the Git Bash prompt. Alternatively, you can choose to download the source code and build OpenSSL. To learn more, see the [OpenSSL Downloads](https://www.openssl.org/source/) page. Or, you can download OpenSSL pre-built from a third party. To learn more, see the [OpenSSL wiki](https://wiki.openssl.org/index.php/Binaries). Microsoft makes no guarantees about the validity of packages downloaded from third parties. If you do choose to build or download OpenSSL, make sure that the OpenSSL binary is accessible in your path and that the `OPENSSL_CNF` environment variable is set to the path of your *openssl.cnf* file.
+
+* Open both a Windows command prompt and a Git Bash prompt.
+
+ The steps in this quickstart assume that you're using a Windows machine and the OpenSSL installation that is installed as part of Git. You'll use the Git Bash prompt to issue OpenSSL commands and the Windows command prompt for everything else. If you're using Linux, you can issue all commands from a Bash shell.
+
## Prepare your development environment
::: zone pivot="programming-language-ansi-c"
In this section, you'll prepare a development environment that's used to build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). The sample code attempts to provision the device, during the device's boot sequence.
-1. Download the latest [CMake build system](https://cmake.org/download/).
-
- >[!IMPORTANT]
- >Confirm that the Visual Studio prerequisites (Visual Studio and the 'Desktop development with C++' workload) are installed on your machine, **before** starting the `CMake` installation. Once the prerequisites are in place, and the download is verified, install the CMake build system. Also, be aware that older versions of the CMake build system fail to generate the solution file used in this article. Make sure to use the latest version of CMake.
-
-2. Open a web browser, and go to the [Release page of the Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c/releases/latest).
+1. Open a web browser, and go to the [Release page of the Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c/releases/latest).
-3. Select the **Tags** tab at the top of the page.
+2. Select the **Tags** tab at the top of the page.
-4. Copy the tag name for the latest release of the Azure IoT C SDK.
+3. Copy the tag name for the latest release of the Azure IoT C SDK.
-5. Open a command prompt or Git Bash shell. Run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. (replace `` with the tag you copied in the previous step).
+4. In your Windows command prompt, run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. (Replace `<release-tag>` with the tag you copied in the previous step.)
- ```cmd/sh
+ ```cmd
    git clone -b <release-tag> https://github.com/Azure/azure-iot-sdk-c.git
cd azure-iot-sdk-c
git submodule update --init
@@ -98,44 +111,45 @@ In this section, you'll prepare a development environment that's used to build t
This operation could take several minutes to complete.
-6. When the operation is complete, run the following commands from the `azure-iot-sdk-c` directory:
+5. When the operation is complete, run the following commands from the `azure-iot-sdk-c` directory:
- ```cmd/sh
+ ```cmd
mkdir cmake
cd cmake
```
-7. The code sample uses an X.509 certificate to provide attestation via X.509 authentication. Run the following command to build a version of the SDK specific to your development platform that includes the device provisioning client. A Visual Studio solution for the simulated device is generated in the `cmake` directory.
+6. The code sample uses an X.509 certificate to provide attestation via X.509 authentication. Run the following command to build a version of the SDK specific to your development platform that includes the device provisioning client. A Visual Studio solution for the simulated device is generated in the `cmake` directory.
+
+ When specifying the path used with `-Dhsm_custom_lib` in the command below, make sure to use the absolute path to the library in the `cmake` directory you previously created. The path shown below assumes that you cloned the C SDK in the root directory of the C drive. If you used another directory, adjust the path accordingly.
```cmd
- cmake -Duse_prov_client:BOOL=ON ..
+ cmake -Duse_prov_client:BOOL=ON -Dhsm_custom_lib=c:/azure-iot-sdk-c/cmake/provisioning_client/samples/custom_hsm_example/Debug/custom_hsm_example.lib ..
```
>[!TIP]
>If `cmake` does not find your C++ compiler, you may get build errors while running the above command. If that happens, try running the command in the [Visual Studio command prompt](/dotnet/framework/tools/developer-command-prompt-for-vs).
-8. When the build succeeds, the last few output lines look similar to the following output:
-
- ```cmd/sh
- $ cmake -Duse_prov_client:BOOL=ON ..
- -- Building for: Visual Studio 16 2019
- -- The C compiler identification is MSVC 19.23.28107.0
- -- The CXX compiler identification is MSVC 19.23.28107.0
+7. When the build succeeds, the last few output lines look similar to the following output:
+ ```cmd
+ cmake -Duse_prov_client:BOOL=ON -Dhsm_custom_lib=c:/azure-iot-sdk-c/cmake/provisioning_client/samples/custom_hsm_example/Debug/custom_hsm_example.lib ..
+ -- Building for: Visual Studio 17 2022
+ -- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.22000.
+ -- The C compiler identification is MSVC 19.32.31329.0
+ -- The CXX compiler identification is MSVC 19.32.31329.0
+
...
-- Configuring done
-- Generating done
- -- Build files have been written to: C:/code/azure-iot-sdk-c/cmake
+ -- Build files have been written to: C:/azure-iot-sdk-c/cmake
```
::: zone-end
::: zone pivot="programming-language-csharp"
-1. Open a Git CMD or Git Bash command line environment.
-
-2. Clone the [Azure IoT Samples for C#](https://github.com/Azure-Samples/azure-iot-samples-csharp) GitHub repository using the following command:
+1. In your Windows command prompt, clone the [Azure IoT Samples for C#](https://github.com/Azure-Samples/azure-iot-samples-csharp) GitHub repository using the following command:
```cmd
git clone https://github.com/Azure-Samples/azure-iot-samples-csharp.git
@@ -145,9 +159,7 @@ In this section, you'll prepare a development environment that's used to build t
::: zone pivot="programming-language-nodejs"
-1. Open a Git CMD or Git Bash command line environment.
-
-2. Clone the [Azure IoT Samples for Node.js](https://github.com/Azure/azure-iot-sdk-node.git) GitHub repository using the following command:
+1. In your Windows command prompt, clone the [Azure IoT Samples for Node.js](https://github.com/Azure/azure-iot-sdk-node.git) GitHub repository using the following command:
```cmd
git clone https://github.com/Azure/azure-iot-sdk-node.git
@@ -157,9 +169,7 @@ In this section, you'll prepare a development environment that's used to build t
::: zone pivot="programming-language-python"
-1. Open a Git CMD or Git Bash command line environment.
-
-2. Clone the [Azure IoT Samples for Python](https://github.com/Azure/azure-iot-sdk-node.git) GitHub repository using the following command:
+1. In your Windows command prompt, clone the [Azure IoT Samples for Python](https://github.com/Azure/azure-iot-sdk-python.git) GitHub repository using the following command:
```cmd
git clone https://github.com/Azure/azure-iot-sdk-python.git --recursive
@@ -169,189 +179,230 @@ In this section, you'll prepare a development environment that's used to build t
::: zone pivot="programming-language-java"
-1. Open a Git CMD or Git Bash command line environment.
-
-2. Clone the [Azure IoT Samples for Java](https://github.com/Azure/azure-iot-sdk-java.git) GitHub repository using the following command:
+1. In your Windows command prompt, clone the [Azure IoT Samples for Java](https://github.com/Azure/azure-iot-sdk-java.git) GitHub repository using the following command:
```cmd
git clone https://github.com/Azure/azure-iot-sdk-java.git --recursive
```
-3. Go to the root `azure-iot-sdk-`java` directory and build the project to download all needed packages.
+2. Go to the root `azure-iot-sdk-java` directory and build the project to download all needed packages.
- ```cmd/sh
+ ```cmd
cd azure-iot-sdk-java
mvn install -DskipTests=true
```
-4. Go to the certificate generator project and build the project.
-
- ```cmd/sh
- cd azure-iot-sdk-java/provisioning/provisioning-tools/provisioning-x509-cert-generator
- mvn clean install
- ```
-
::: zone-end
## Create a self-signed X.509 device certificate
-In this section, you'll use sample code from the Azure IoT SDK to create a self-signed X.509 certificate. This certificate must be uploaded to your provisioning service, and verified by the service.
+In this section, you'll use OpenSSL to create a self-signed X.509 certificate and a private key. This certificate will be uploaded to your provisioning service instance and verified by the service.
> [!CAUTION]
-> Use certificates created with the SDK tooling for development testing only.
+> Use certificates created with OpenSSL in this quickstart for development testing only.
> Do not use these certificates in production.
-> The SDK generated certificates contain hard-coded passwords, such as *1234*, and expire after 30 days.
-> To learn about obtaining certificates suitable for production use, see [How to get an X.509 CA certificate](../iot-hub/iot-hub-x509ca-overview.md#how-to-get-an-x509-ca-certificate) in the Azure IoT Hub documentation.
+> These certificates expire after 30 days and may contain hard-coded passwords, such as *1234*.
+> To learn about obtaining certificates suitable for use in production, see [How to get an X.509 CA certificate](../iot-hub/iot-hub-x509ca-overview.md#how-to-get-an-x509-ca-certificate) in the Azure IoT Hub documentation.
>
-To create the X.509 certificate:
-
-::: zone pivot="programming-language-ansi-c"
+Perform the steps in this section in your Git Bash prompt.
-### Clone the Azure IoT C SDK
+1. In your Git Bash prompt, navigate to a directory where you'd like to create your certificates.
-The [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) contains test tooling that can help you create an X.509 certificate chain, upload a root or intermediate certificate from that chain, and do proof-of-possession with the service to verify the certificate.
+2. Run the following command:
-If you've already cloned the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository, skip to the [next section](#create-a-test-certificate).
+ # [Windows](#tab/windows)
-1. Open a web browser, and go to the [Release page of the Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c/releases/latest).
+ ```bash
+ winpty openssl req -outform PEM -x509 -sha256 -newkey rsa:4096 -keyout device-key.pem -out device-cert.pem -days 30 -extensions usr_cert -addext extendedKeyUsage=clientAuth -subj "//CN=my-x509-device"
+ ```
-2. Copy the tag name for the latest release of the Azure IoT C SDK.
+ > [!IMPORTANT]
+ > The extra forward slash given for the subject name (`//CN=my-x509-device`) is only required to escape the string with Git on Windows platforms.
-3. Open a command prompt or Git Bash shell. Run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. (replace `` with the tag you copied in the previous step).
+ # [Linux](#tab/linux)
- ```cmd/sh
- git clone -b https://github.com/Azure/azure-iot-sdk-c.git
- cd azure-iot-sdk-c
- git submodule update --init
+ ```bash
+ openssl req -outform PEM -x509 -sha256 -newkey rsa:4096 -keyout device-key.pem -out device-cert.pem -days 30 -extensions usr_cert -addext extendedKeyUsage=clientAuth -subj "/CN=my-x509-device"
```
- This operation may take several minutes to complete.
-
-4. The test tooling should now be located in the *azure-iot-sdk-c/tools/CACertificates* of the repository that you cloned.
+ ---
-### Create a test certificate
+3. When asked to **Enter PEM pass phrase:**, use the pass phrase `1234`.
-Follow the steps in [Managing test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md).
+4. When asked **Verifying - Enter PEM pass phrase:**, use the pass phrase `1234` again.
-In addition to the tooling in the C SDK, the [Group certificate verification sample](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/master/provisioning/Samples/service/GroupCertificateVerificationSample) in the *Microsoft Azure IoT SDK for .NET* shows how to do proof-of-possession in C# with an existing X.509 intermediate or root CA certificate.
+ A public key certificate file (*device-cert.pem*) and private key file (*device-key.pem*) should now be generated in the directory where you ran the `openssl` command.
-::: zone-end
+ The certificate file has its subject common name (CN) set to `my-x509-device`. For X.509-based enrollments, the [Registration ID](./concepts-service.md#registration-id) is set to the common name. The registration ID is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The common name must adhere to this format.
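+
+   If you choose a different common name, you can sanity-check it against these rules in your Git Bash prompt. The `grep` pattern below is simply the rules above encoded as a regular expression, offered as an informal aid rather than an official validator; it prints the name only when the name is a valid registration ID:
+
+   ```bash
+   # Prints the candidate name only if it satisfies the registration ID rules
+   echo "my-x509-device" | grep -E '^[a-zA-Z0-9._:-]{0,127}[a-zA-Z0-9-]$'
+   ```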
-::: zone pivot="programming-language-csharp"
+5. The certificate file is Base64 encoded. To view the subject common name (CN) and other properties of the certificate file, enter the following command:
-1. In a PowerShell prompt, change directories to the project directory for the X.509 device provisioning sample.
+ # [Windows](#tab/windows)
- ```powershell
- cd .\azure-iot-samples-csharp\provisioning\Samples\device\X509Sample
+ ```bash
+ winpty openssl x509 -in device-cert.pem -text -noout
```
-2. The sample code is set up to use X.509 certificates that are stored within a password-protected PKCS12 formatted file (`certificate.pfx`). Additionally, you'll need a public key certificate file (`certificate.cer`) to create an individual enrollment later in this quickstart. To generate the self-signed certificate and its associated `.cer` and `.pfx` files, run the following command:
+ # [Linux](#tab/linux)
- ```powershell
- PS D:\azure-iot-samples-csharp\provisioning\Samples\device\X509Sample> .\GenerateTestCertificate.ps1 iothubx509device1
+ ```bash
+ openssl x509 -in device-cert.pem -text -noout
```
- The certificate generated by this command has a subject common name (CN) of _iothubx509device1_. For X.509-based enrollments, the [Registration ID](./concepts-service.md#registration-id) is set to the common name. The registration ID is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The common name must adhere to this format.
-
-3. The script prompts you for a PFX password. Remember this password, as you will use it later when you run the sample. Optionally, you can run `certutil` to dump the certificate and verify the subject name.
+ ---
- ```powershell
- PS D:\azure-iot-samples-csharp\provisioning\Samples\device\X509Sample> certutil .\certificate.pfx
- Enter PFX password:
- ================ Certificate 0 ================
- ================ Begin Nesting Level 1 ================
- Element 0:
- Serial Number: 7b4a0e2af6f40eae4d91b3b7ff05a4ce
- Issuer: CN=iothubx509device1, O=TEST, C=US
- NotBefore: 2/1/2021 6:18 PM
- NotAfter: 2/1/2022 6:28 PM
- Subject: CN=iothubx509device1, O=TEST, C=US
- Signature matches Public Key
- Root Certificate: Subject matches Issuer
- Cert Hash(sha1): e3eb7b7cc1e2b601486bf8a733887a54cdab8ed6
- ---------------- End Nesting Level 1 ----------------
- Provider = Microsoft Strong Cryptographic Provider
- Signature test passed
- CertUtil: -dump command completed successfully.
+ ```output
+ Certificate:
+ Data:
+ Version: 3 (0x2)
+ Serial Number:
+ 77:3e:1d:e4:7e:c8:40:14:08:c6:09:75:50:9c:1a:35:6e:19:52:e2
+ Signature Algorithm: sha256WithRSAEncryption
+ Issuer: CN = my-x509-device
+ Validity
+ Not Before: May 5 21:41:42 2022 GMT
+ Not After : Jun 4 21:41:42 2022 GMT
+ Subject: CN = my-x509-device
+ Subject Public Key Info:
+ Public Key Algorithm: rsaEncryption
+ RSA Public-Key: (4096 bit)
+ Modulus:
+ 00:d2:94:37:d6:1b:f7:43:b4:21:c6:08:1a:d6:d7:
+ e6:40:44:4e:4d:24:41:6c:3e:8c:b2:2c:b0:23:29:
+ ...
+ 23:6e:58:76:45:18:03:dc:2e:9d:3f:ac:a3:5c:1f:
+ 9f:66:b0:05:d5:1c:fe:69:de:a9:09:13:28:c6:85:
+ 0e:cd:53
+ Exponent: 65537 (0x10001)
+ X509v3 extensions:
+ X509v3 Basic Constraints:
+ CA:FALSE
+ Netscape Comment:
+ OpenSSL Generated Certificate
+ X509v3 Subject Key Identifier:
+ 63:C0:B5:93:BF:29:F8:57:F8:F9:26:44:70:6F:9B:A4:C7:E3:75:18
+ X509v3 Authority Key Identifier:
+ keyid:63:C0:B5:93:BF:29:F8:57:F8:F9:26:44:70:6F:9B:A4:C7:E3:75:18
+
+ X509v3 Extended Key Usage:
+ TLS Web Client Authentication
+ Signature Algorithm: sha256WithRSAEncryption
+ 82:8a:98:f8:47:00:85:be:21:15:64:b9:22:b0:13:cc:9e:9a:
+ ed:f5:93:b9:4b:57:0f:79:85:9d:89:47:69:95:65:5e:b3:b1:
+ ...
+ cc:b2:20:9a:b7:f2:5e:6b:81:a1:04:93:e9:2b:92:62:e0:1c:
+ ac:d2:49:b9:36:d2:b0:21
```
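+
+   You can also confirm that the certificate and private key are a matching pair by comparing the digests of their RSA moduli; the two commands below should print identical values. These are standard OpenSSL checks, included here only as an optional troubleshooting aid. On Windows, prefix each command with `winpty` as in the earlier steps, and enter the pass phrase `1234` when prompted for the key.
+
+   ```bash
+   # Both digests should match if the certificate and key belong together
+   openssl x509 -noout -modulus -in device-cert.pem | openssl sha256
+   openssl rsa -noout -modulus -in device-key.pem | openssl sha256
+   ```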
-::: zone-end
+::: zone pivot="programming-language-ansi-c"
-::: zone pivot="programming-language-nodejs"
+6. The sample code requires a private key that isn't encrypted. Run the following command to create an unencrypted private key:
-1. Open a command prompt, and go to the certificate generator script and build the project:
+ # [Windows](#tab/windows)
- ```cmd/sh
- cd azure-iot-sdk-node/provisioning/tools
- npm install
+ ```bash
+ winpty openssl rsa -in device-key.pem -out unencrypted-device-key.pem
```
-2. Create a _leaf_ X.509 certificate by running the script using your own _certificate-name_. For X.509-based enrollments, the leaf certificate's common name becomes the [Registration ID](./concepts-service.md#registration-id). The registration ID is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The _certificate-name_ parameter must adhere to this format.
+ # [Linux](#tab/linux)
- ```cmd/sh
- node create_test_cert.js device {certificate-name}
+ ```bash
+ openssl rsa -in device-key.pem -out unencrypted-device-key.pem
```
+ ---
+
+7. When asked to **Enter pass phrase for device-key.pem:**, use the same pass phrase you did previously, `1234`.
+
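+Optionally, you can confirm that the new key file is valid and no longer pass phrase protected. The following command is a standard OpenSSL consistency check; it should print `RSA key ok` without prompting for a pass phrase:
+
+```bash
+# Verifies the unencrypted RSA key; no pass phrase prompt should appear
+openssl rsa -in unencrypted-device-key.pem -check -noout
+```
+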
+Keep the Git Bash prompt open. You'll need it later in this quickstart.
+
::: zone-end
-::: zone pivot="programming-language-python"
+::: zone pivot="programming-language-csharp"
-1. In the Git Bash prompt, run the following command:
+The C# sample code is set up to use X.509 certificates that are stored in a password-protected PKCS12 formatted file (`certificate.pfx`). You'll still need the PEM formatted public key certificate file (`device-cert.pem`) that you just created when you create an individual enrollment entry later in this quickstart.
+
+1. To generate the PKCS12 formatted file expected by the sample, enter the following command:
# [Windows](#tab/windows)
```bash
- winpty openssl req -outform PEM -x509 -sha256 -newkey rsa:4096 -keyout ./python-device.key.pem -out ./python-device.pem -days 365 -extensions usr_cert -subj "//CN=Python-device-01"
+ winpty openssl pkcs12 -inkey device-key.pem -in device-cert.pem -export -out certificate.pfx
```
- > [!IMPORTANT]
- > The extra forward slash given for the subject name (`//CN=Python-device-01`) is only required to escape the string with Git on Windows platforms.
-
# [Linux](#tab/linux)
```bash
- openssl req -outform PEM -x509 -sha256 -newkey rsa:4096 -keyout ./python-device.key.pem -out ./python-device.pem -days 365 -extensions usr_cert -subj "/CN=Python-device-01"
+ openssl pkcs12 -inkey device-key.pem -in device-cert.pem -export -out certificate.pfx
```
---
-2. When asked to **Enter PEM pass phrase:**, use the pass phrase `1234`.
+1. When asked to **Enter pass phrase for device-key.pem:**, use the same pass phrase you did previously, `1234`.
-3. When asked **Verifying - Enter PEM pass phrase:**, use the pass phrase `1234` again.
+1. When asked to **Enter Export Password:**, use the password `1234`.
-A test certificate file (*python-device.pem*) and private key file (*python-device.key.pem*) should now be generated in the directory where you ran the `openssl` command.
+1. When asked **Verifying - Enter Export Password:**, use the password `1234` again.
-The certificate file has its subject common name (CN) set to `Python-device-01`. For an X.509-based enrollments, the [Registration ID](./concepts-service.md#registration-id) is set to the common name. The registration ID is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The common name must adhere to this format.
+ A PKCS12 formatted certificate file (*certificate.pfx*) should now be generated in the directory where you ran the `openssl` command.
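+
+   Optionally, you can confirm that the *.pfx* file bundles both your certificate and its private key by listing its contents. This is a standard OpenSSL inspection command, included here only as a troubleshooting aid. You'll be prompted for the import password, `1234`; on Linux, omit `winpty`.
+
+   ```bash
+   # Lists the certificates inside the PKCS12 bundle without printing the key
+   winpty openssl pkcs12 -in certificate.pfx -info -nokeys
+   ```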
+
+1. Copy the PKCS12 formatted certificate file to the project directory for the X.509 device provisioning sample. The path given is relative to the location where you downloaded the sample repo.
+
+ ```bash
+ cp certificate.pfx ./azure-iot-samples-csharp/provisioning/Samples/device/X509Sample
+ ```
+
+You won't need the Git Bash prompt for the rest of this quickstart. However, you may want to keep it open to check your certificate if you have problems in later steps.
::: zone-end
-::: zone pivot="programming-language-java"
+::: zone pivot="programming-language-nodejs"
-1. Using the command prompt from previous steps, go to the `target` folder.
+6. Copy the device certificate and private key to the project directory for the X.509 device provisioning sample. The path given is relative to the location where you downloaded the SDK.
-2. Run the .jar file created in the previous section.
+ ```bash
+ cp device-cert.pem ./azure-iot-sdk-node/provisioning/device/samples
+ cp device-key.pem ./azure-iot-sdk-node/provisioning/device/samples
+ ```
- ```cmd/sh
- cd target
- java -jar ./provisioning-x509-cert-generator-{version}-with-deps.jar
+You won't need the Git Bash prompt for the rest of this quickstart. However, you may want to keep it open to check your certificate if you have problems in later steps.
+
+::: zone-end
+
+::: zone pivot="programming-language-python"
+
+6. Copy the device certificate and private key to the project directory for the X.509 device provisioning sample. The path given is relative to the location where you downloaded the SDK.
+
+ ```bash
+ cp device-cert.pem ./azure-iot-sdk-python/azure-iot-device/samples/async-hub-scenarios
+ cp device-key.pem ./azure-iot-sdk-python/azure-iot-device/samples/async-hub-scenarios
```
-3. Enter **N** for _Do you want to input common name_. This creates a certificate with a subject common name (CN) of _microsoftriotcore_.
+You won't need the Git Bash prompt for the rest of this quickstart. However, you may want to keep it open to check your certificate if you have problems in later steps.
- For an X.509-based enrollments, the [Registration ID](./concepts-service.md#registration-id) is set to the common name. The registration ID is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The common name must adhere to this format.
+::: zone-end
+::: zone pivot="programming-language-java"
-4. Copy the output of `Client Cert` to the clipboard, starting from *-----BEGIN CERTIFICATE-----* through *-----END CERTIFICATE-----*.
+6. The Java sample code requires a private key that isn't encrypted. Run the following command to create an unencrypted private key:
- ![Individual certificate generator](./media/quick-create-simulated-device-x509/cert-generator-java.png)
+ # [Windows](#tab/windows)
-5. Create a file named *_X509individual.pem_* on your Windows machine.
+ ```bash
+ winpty openssl pkey -in device-key.pem -out unencrypted-device-key.pem
+ ```
-6. Open *_X509individual.pem_* in an editor of your choice, and copy the clipboard contents to this file.
+ # [Linux](#tab/linux)
-7. Save the file and close your editor.
+ ```bash
+ openssl pkey -in device-key.pem -out unencrypted-device-key.pem
+ ```
+
+ ---
-8. In the command prompt, enter **N** for _Do you want to input Verification Code_ and keep the program output open for reference later in the quickstart. Copy the `Client Cert` and `Client Cert Private Key` values, for use in the next section.
+7. When asked to **Enter pass phrase for device-key.pem:**, use the same pass phrase you did previously, `1234`.
+
+Keep the Git Bash prompt open. You'll need it later in this quickstart.
::: zone-end
@@ -374,161 +425,180 @@ This article demonstrates an individual enrollment for a single device to be pro
5. At the top of the page, select **+ Add individual enrollment**.
-::: zone pivot="programming-language-ansi-c"
-
6. In the **Add Enrollment** page, enter the following information.
* **Mechanism:** Select **X.509** as the identity attestation *Mechanism*.
- * **Primary certificate .pem or .cer file:** Choose **Select a file** to select the certificate file, *X509testcert.pem* that you created in the previous section.
- * **IoT Hub Device ID:** Enter *test-docs-cert-device* to give the device an ID.
+ * **Primary certificate .pem or .cer file:** Choose **Select a file** and navigate to and select the certificate file, *device-cert.pem*, that you created in the previous section.
+ * Leave **IoT Hub Device ID:** blank. Your device will be provisioned with its device ID set to the common name (CN) in the X.509 certificate, *my-x509-device*. This common name will also be the name used for the registration ID for the individual enrollment entry.
+ * Optionally, you can provide the following information:
+ * Select an IoT hub linked with your provisioning service.
+ * Update the **Initial device twin state** with the desired initial configuration for the device.
- :::image type="content" source="./media/quick-create-simulated-device-x509/device-enrollment.png" alt-text="Add device as individual enrollment with X.509 attestation.":::
+ :::image type="content" source="./media/quick-create-simulated-device-x509/add-individual-enrollment-with-cert.png" alt-text="Screenshot that shows adding an individual enrollment with X.509 attestation to D P S in Azure portal.":::
-::: zone-end
+7. Select **Save**. You'll be returned to **Manage enrollments**.
-::: zone pivot="programming-language-csharp"
+8. Select **Individual Enrollments**. Your X.509 enrollment entry, *my-x509-device*, should appear in the list.
-6. In the **Add Enrollment** page, enter the following information.
+## Prepare and run the device provisioning code
- * **Mechanism:** Select **X.509** as the identity attestation *Mechanism*.
- * **Primary certificate .pem or .cer file:** Choose **Select a file** to select the certificate file, *certificate.cer* that you created in the previous section.
- * Leave **IoT Hub Device ID:** blank. Your device will be provisioned with its device ID set to the common name (CN) in the X.509 certificate, *iothubx509device1*. This common name will also be the name used for the registration ID for the individual enrollment entry.
- * Optionally, you can provide the following information:
- * Select an IoT hub linked with your provisioning service.
- * Update the **Initial device twin state** with the desired initial configuration for the device.
+In this section, you'll update the sample code to send the device's boot sequence to your Device Provisioning Service instance. This boot sequence will cause the device to be recognized and assigned to an IoT hub linked to the DPS instance.
- :::image type="content" source="./media/quick-create-simulated-device-x509/device-enrollment.png" alt-text="Add device as individual enrollment with X.509 attestation.":::
+::: zone pivot="programming-language-ansi-c"
-::: zone-end
+In this section, you'll use your Git Bash prompt and the Visual Studio IDE.
-::: zone pivot="programming-language-nodejs"
+### Configure the provisioning device code
-6. In the **Add Enrollment** page, enter the following information.
+In this section, you update the sample code with your Device Provisioning Service instance information.
- * **Mechanism:** Select **X.509** as the identity attestation *Mechanism*.
- * **Primary certificate .pem or .cer file:** Choose **Select a file** to select the certificate file, *{certificate-name}_cert.pem* that you created in the previous section.
- * Optionally, you can provide the following information:
- * Select an IoT hub linked with your provisioning service.
- * Enter a unique device ID. Make sure to avoid sensitive data while naming your device.
- * Update the **Initial device twin state** with the desired initial configuration for the device.
- :::image type="content" source="./media/quick-create-simulated-device-x509/device-enrollment.png" alt-text="Add device as individual enrollment with X.509 attestation.":::
+1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
-::: zone-end
+1. Copy the **ID Scope** value.
-::: zone pivot="programming-language-python"
+ :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope.png" alt-text="Screenshot of the I D scope on Azure portal.":::
-6. In the **Add Enrollment** page, enter the following information.
+1. Launch Visual Studio and open the new solution file that was created in the `cmake` directory you created in the root of the azure-iot-sdk-c git repository. The solution file is named `azure_iot_sdks.sln`.
- * **Mechanism:** Select **X.509** as the identity attestation *Mechanism*.
- * **Primary certificate .pem or .cer file:** Choose **Select a file** to select the certificate file, *python-device.pem* if you are using the test certificate created earlier.
- * Optionally, you can provide the following information:
- * Select an IoT hub linked with your provisioning service.
- * Update the **Initial device twin state** with the desired initial configuration for the device.
+1. In Solution Explorer for Visual Studio, navigate to **Provision_Samples > prov_dev_client_sample > Source Files** and open *prov_dev_client_sample.c*.
- :::image type="content" source="./media/quick-create-simulated-device-x509/device-enrollment.png" alt-text="Add device as individual enrollment with X.509 attestation.":::
+1. Find the `id_scope` constant, and replace the value with your **ID Scope** value that you copied in step 2.
-::: zone-end
+ ```c
+ static const char* id_scope = "0ne00000A0A";
+ ```
-::: zone pivot="programming-language-java"
+1. Find the definition for the `main()` function in the same file. Make sure the `hsm_type` variable is set to `SECURE_DEVICE_TYPE_X509` as shown below.
-6. In the **Add Enrollment** panel, enter the following information:
- * Select **X.509** as the identity attestation *Mechanism*.
- * Under the *Primary certificate .pem or .cer file*, choose *Select a file* to select the certificate file *X509individual.pem* created in the previous steps.
- * Optionally, you may provide the following information:
- * Select an IoT hub linked with your provisioning service.
- * Enter a unique device ID. Make sure to avoid sensitive data while naming your device.
- * Update the **Initial device twin state** with the desired initial configuration for the device.
+ ```c
+ SECURE_DEVICE_TYPE hsm_type;
+ //hsm_type = SECURE_DEVICE_TYPE_TPM;
+ hsm_type = SECURE_DEVICE_TYPE_X509;
+ //hsm_type = SECURE_DEVICE_TYPE_SYMMETRIC_KEY;
+ ```
- :::image type="content" source="./media/quick-create-simulated-device-x509/device-enrollment.png" alt-text="Add device as individual enrollment with X.509 attestation.":::
+1. Save your changes.
-::: zone-end
+1. Right-click the **prov_dev_client_sample** project and select **Set as Startup Project**.
-7. Select **Save**. You'll be returned to **Manage enrollments**.
+### Configure the custom HSM stub code
-8. Select **Individual Enrollments**. Your X.509 enrollment entry should appear in the registration table.
+The specifics of interacting with actual secure hardware-based storage vary depending on the hardware. As a result, the certificate and private key used by the simulated device in this quickstart will be hardcoded in the custom Hardware Security Module (HSM) stub code.
-## Prepare and run the device provisioning code
+To update the custom HSM stub code to simulate the identity of the device with ID `my-x509-device`:
-::: zone pivot="programming-language-ansi-c"
+1. In Solution Explorer for Visual Studio, navigate to **Provision_Samples > custom_hsm_example > Source Files** and open *custom_hsm_example.c*.
-In this section, we'll update the sample code to send the device's boot sequence to your Device Provisioning Service instance. This boot sequence will cause the device to be recognized and assigned to an IoT hub linked to the Device Provisioning Service instance.
+1. Update the value of the `COMMON_NAME` string constant with the common name you used when generating the device certificate, `my-x509-device`.
-1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
+ ```c
+ static const char* const COMMON_NAME = "my-x509-device";
+ ```
-2. Copy the **_ID Scope_** value.
+1. Update the value of the `CERTIFICATE` string constant with the device certificate, *device-cert.pem*, that you generated previously.
- :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope.png" alt-text="Copy ID Scope from the portal.":::
+   The syntax of the certificate text in the sample must follow the pattern below, with no extra spaces and no reformatting applied by Visual Studio.
-3. In Visual Studio's *Solution Explorer* window, navigate to the **Provision\_Samples** folder. Expand the sample project named **prov\_dev\_client\_sample**. Expand **Source Files**, and open **prov\_dev\_client\_sample.c**.
+ ```c
+ static const char* const CERTIFICATE = "-----BEGIN CERTIFICATE-----\n"
+ "MIIFOjCCAyKgAwIBAgIJAPzMa6s7mj7+MA0GCSqGSIb3DQEBCwUAMCoxKDAmBgNV\n"
+ ...
+ "MDMwWhcNMjAxMTIyMjEzMDMwWjAqMSgwJgYDVQQDDB9BenVyZSBJb1QgSHViIENB\n"
+ "-----END CERTIFICATE-----";
+ ```
-4. Find the `id_scope` constant, and replace the value with your **ID Scope** value that you copied earlier.
+ Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `CERTIFICATE` string constant value and write it to the output.
- ```c
- static const char* id_scope = "0ne00002193";
+ ```Bash
+ sed -e 's/^/"/;$ !s/$/""\\n"/;$ s/$/"/' device-cert.pem
```
-5. Find the definition for the `main()` function in the same file. Make sure the `hsm_type` variable is set to `SECURE_DEVICE_TYPE_X509` instead of `SECURE_DEVICE_TYPE_TPM` as shown below.
+ Copy and paste the output certificate text for the constant value.
+
+1. Update the string value of the `PRIVATE_KEY` constant with the unencrypted private key for your device certificate, *unencrypted-device-key.pem*.
+
+   The syntax of the private key text must follow the pattern below, with no extra spaces and no reformatting applied by Visual Studio.
```c
- SECURE_DEVICE_TYPE hsm_type;
- //hsm_type = SECURE_DEVICE_TYPE_TPM;
- hsm_type = SECURE_DEVICE_TYPE_X509;
+ static const char* const PRIVATE_KEY = "-----BEGIN RSA PRIVATE KEY-----\n"
+ "MIIJJwIBAAKCAgEAtjvKQjIhp0EE1PoADL1rfF/W6v4vlAzOSifKSQsaPeebqg8U\n"
+ ...
+ "X7fi9OZ26QpnkS5QjjPTYI/wwn0J9YAwNfKSlNeXTJDfJ+KpjXBcvaLxeBQbQhij\n"
+ "-----END RSA PRIVATE KEY-----";
```
-6. Right-click the **prov\_dev\_client\_sample** project and select **Set as Startup Project**.
+ Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `PRIVATE_KEY` string constant value and write it to the output.
-7. On the Visual Studio menu, select **Debug** > **Start without debugging** to run the solution. In the prompt to rebuild the project, select **Yes** to rebuild the project before running.
+ ```Bash
+ sed -e 's/^/"/;$ !s/$/""\\n"/;$ s/$/"/' unencrypted-device-key.pem
+ ```
- The following output is an example of the provisioning device client sample successfully booting up, and connecting to the provisioning Service instance to get IoT hub information and registering:
+ Copy and paste the output private key text for the constant value.
- ```cmd
- Provisioning API Version: 1.2.7
+1. Save your changes.
+
+1. Right-click the **custom_hsm_example** project and select **Build**.
+
+ > [!IMPORTANT]
+ > You must build the **custom_hsm_example** project before you build the rest of the solution in the next section.
- Registering... Press enter key to interrupt.
+### Run the sample
+1. On the Visual Studio menu, select **Debug** > **Start without debugging** to run the solution. If you're prompted to rebuild the project, select **Yes** to rebuild the project before running.
+
+ The following output is an example of the simulated device `my-x509-device` successfully booting up, and connecting to the provisioning service. The device is assigned to an IoT hub and registered:
+
+ ```output
+ Provisioning API Version: 1.8.0
+
+ Registering Device
+
Provisioning Status: PROV_DEVICE_REG_STATUS_CONNECTED
Provisioning Status: PROV_DEVICE_REG_STATUS_ASSIGNING
- Provisioning Status: PROV_DEVICE_REG_STATUS_ASSIGNING
-
- Registration Information received from service:
- test-docs-hub.azure-devices.net, deviceId: test-docs-cert-device
+
+ Registration Information received from service: contoso-iot-hub-2.azure-devices.net, deviceId: my-x509-device
+ Press enter key to exit:
```
::: zone-end
::: zone pivot="programming-language-csharp"
+In this section, you'll use your Windows command prompt.
+
1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
-2. Copy the **_ID Scope_** value.
+2. Copy the **ID Scope** value.
- :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope.png" alt-text="Copy ID Scope from the portal.":::
+ :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope.png" alt-text="Screenshot of the I D scope on Azure portal.":::
-3. Open a command prompt window.
+3. In your Windows command prompt, change to the *X509Sample* directory. It's located in the *.\azure-iot-samples-csharp\provisioning\Samples\device\X509Sample* directory, relative to the location where you cloned the samples on your computer.
-4. Type the following command to build and run the X.509 device provisioning sample (replace the `` value with the ID Scope that you copied in the previous section.). The certificate file will default to *./certificate.pfx* and prompt for the .pfx password. Type in your password.
+4. Enter the following command to build and run the X.509 device provisioning sample (replace `<IdScope>` with the ID Scope that you copied in the previous section). The certificate file defaults to *./certificate.pfx*, and you're prompted for the .pfx password.
- ```powershell
+ ```cmd
dotnet run -- -s <IdScope>
```
- If you want to pass everything as a parameter, you can use the following example format.
+   If you want to pass the certificate and password as parameters, you can use the following format.
- ```powershell
+ ```cmd
dotnet run -- -s 0ne00000A0A -c certificate.pfx -p 1234
```
-5. The device will now connect to DPS and be assigned to an IoT Hub. Then, the device will send a telemetry message to the hub.
+5. The device will connect to DPS and be assigned to an IoT hub. Then, the device will send a telemetry message to the IoT hub.
```output
Loading the certificate...
- Found certificate: 10952E59D13A3E388F88E534444484F52CD3D9E4 CN=iothubx509device1, O=TEST, C=US; PrivateKey: True
- Using certificate 10952E59D13A3E388F88E534444484F52CD3D9E4 CN=iothubx509device1, O=TEST, C=US
+ Enter the PFX password for certificate.pfx:
+ ****
+ Found certificate: A33DB11B8883DEE5B1690ACFEAAB69E8E928080B CN=my-x509-device; PrivateKey: True
+ Using certificate A33DB11B8883DEE5B1690ACFEAAB69E8E928080B CN=my-x509-device
Initializing the device provisioning client...
- Initialized for registration Id iothubx509device1.
+ Initialized for registration Id my-x509-device.
Registering with the device provisioning service...
Registration status: Assigned.
- Device iothubx509device2 registered to sample-iot-hub1.azure-devices.net.
+ Device my-x509-device registered to MyExampleHub.azure-devices.net.
Creating X509 authentication for IoT Hub...
Testing the provisioned device with IoT Hub...
Sending a telemetry message...
@@ -539,38 +609,33 @@ In this section, we'll update the sample code to send the device's boot sequence
::: zone pivot="programming-language-nodejs"
-1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
+In this section, you'll use your Windows command prompt.
-2. Copy the **_ID Scope_** and **Global device endpoint** values.
+1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
- :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png" alt-text="Copy ID Scope from the portal.":::
+1. Copy the **ID Scope** and **Global device endpoint** values.
-3. Copy your _certificate_ and _key_ to the sample folder.
+ :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png" alt-text="Screenshot of the I D scope and global device endpoint on Azure portal.":::
- ```cmd/sh
- copy .\{certificate-name}_cert.pem ..\device\samples\{certificate-name}_cert.pem
- copy .\{certificate-name}_key.pem ..\device\samples\{certificate-name}_key.pem
- ```
+1. In your Windows command prompt, go to the sample directory, and install the packages needed by the sample. The path shown is relative to the location where you cloned the SDK.
-4. Navigate to the device test script and build the project.
-
- ```cmd/sh
- cd ..\device\samples
+ ```cmd
+ cd ./azure-iot-sdk-node/provisioning/device/samples
npm install
```
-5. Edit the **register\_x509.js** file with the following changes:
+1. Edit the **register_x509.js** file and make the following changes:
- * Replace `provisioning host` with the **_Global Device Endpoint_** noted in **Step 1** above.
- * Replace `id scope` with the **_ID Scope_** noted in **Step 1** above.
- * Replace `registration id` with the **_Registration ID_** noted in the previous section.
- * Replace `cert filename` and `key filename` with the files you copied in **Step 2** above.
+ * Replace `provisioning host` with the **Global Device Endpoint** noted in **Step 1** above.
+ * Replace `id scope` with the **ID Scope** noted in **Step 1** above.
+ * Replace `registration id` with the **Registration ID** noted in the previous section.
+ * Replace `cert filename` and `key filename` with the files you generated previously, *device-cert.pem* and *device-key.pem*.
-6. Save the file.
+1. Save the file.
-7. Execute the script and verify that the device was provisioned successfully.
+1. Run the sample and verify that the device was provisioned successfully.
- ```cmd/sh
+ ```cmd
node register_x509.js
```
@@ -581,82 +646,65 @@ In this section, we'll update the sample code to send the device's boot sequence
::: zone pivot="programming-language-python"
-The Python provisioning sample, [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/azure-iot-device/samples/async-hub-scenarios/provision_x509.py) is located in the `azure-iot-sdk-python/azure-iot-device/samples/async-hub-scenarios` directory. This sample uses six environment variables to authenticate and provision an IoT device using DPS. These environment variables are:
-
-| Variable name | Description |
-| :------------------------- | :---------------------------------------------- |
-| `PROVISIONING_HOST` | This value is the global endpoint used for connecting to your DPS resource |
-| `PROVISIONING_IDSCOPE` | This value is the ID Scope for your DPS resource |
-| `DPS_X509_REGISTRATION_ID` | This value is the ID for your device. It must also match the subject name on the device certificate |
-| `X509_CERT_FILE` | Your device certificate filename |
-| `X509_KEY_FILE` | The private key filename for your device certificate |
-| `PASS_PHRASE` | The pass phrase you used to encrypt the certificate and private key file (`1234`). |
+In this section, you'll use your Windows command prompt.
1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
-2. Copy the **_ID Scope_** and **Global device endpoint** values.
+1. Copy the **ID Scope** and **Global device endpoint** values.
- :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png" alt-text="Copy ID Scope from the portal.":::
+ :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png" alt-text="Screenshot of the I D scope and global device endpoint on Azure portal.":::
-3. In your Git Bash prompt, use the following commands to add the environment variables for the global device endpoint and ID Scope.
+1. In your Windows command prompt, go to the directory of the [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/azure-iot-device/samples/async-hub-scenarios/provision_x509.py) sample. The path shown is relative to the location where you cloned the SDK.
- ```bash
- $export PROVISIONING_HOST=global.azure-devices-provisioning.net
- $export PROVISIONING_IDSCOPE=
+ ```cmd
+ cd ./azure-iot-sdk-python/azure-iot-device/samples/async-hub-scenarios
```
-4. The registration ID for the IoT device must match subject name on its device certificate. If you generated a self-signed test certificate, `Python-device-01` is both the subject name and the registration ID for the device.
+ This sample uses six environment variables to authenticate and provision an IoT device using DPS. These environment variables are:
- If you already have a device certificate, you can use `certutil` to verify the subject common name used for your device, as shown below:
+ | Variable name | Description |
+ | :------------------------- | :---------------------------------------------- |
+ | `PROVISIONING_HOST` | This value is the global endpoint used for connecting to your DPS resource |
+ | `PROVISIONING_IDSCOPE` | This value is the ID Scope for your DPS resource |
+ | `DPS_X509_REGISTRATION_ID` | This value is the ID for your device. It must also match the subject name on the device certificate |
+ | `X509_CERT_FILE` | Your device certificate filename |
+ | `X509_KEY_FILE` | The private key filename for your device certificate |
+   | `PASS_PHRASE`              | The pass phrase you used to encrypt the private key file (`1234`). |
- ```bash
- $ certutil python-device.pem
- X509 Certificate:
- Version: 3
- Serial Number: fa33152fe1140dc8
- Signature Algorithm:
- Algorithm ObjectId: 1.2.840.113549.1.1.11 sha256RSA
- Algorithm Parameters:
- 05 00
- Issuer:
- CN=Python-device-01
- Name Hash(sha1): 1dd88de40e9501fb64892b698afe12d027011000
- Name Hash(md5): a62c784820daa931b9d3977739b30d12
-
- NotBefore: 1/29/2021 7:05 PM
- NotAfter: 1/29/2022 7:05 PM
-
- Subject:
- ===> CN=Python-device-01 <===
- Name Hash(sha1): 1dd88de40e9501fb64892b698afe12d027011000
- Name Hash(md5): a62c784820daa931b9d3977739b30d12
- ```
-
-5. In the Git Bash prompt, set the environment variable for the registration ID as follows:
+1. Add the environment variables for the global device endpoint and ID Scope.
- ```bash
- $export DPS_X509_REGISTRATION_ID=Python-device-01
+ ```cmd
+ set PROVISIONING_HOST=global.azure-devices-provisioning.net
+   set PROVISIONING_IDSCOPE=<id-scope>
```
-6. In the Git Bash prompt, set the environment variables for the certificate file, private key file, and pass phrase.
+1. The registration ID for the IoT device must match the subject name on its device certificate. If you generated a self-signed test certificate, `my-x509-device` is both the subject name and the registration ID for the device.
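+
+   If you want to double-check the subject name on your certificate, you can print just its subject line in your Git Bash prompt (the same information appears in the `-text` output you viewed earlier):
+
+   ```bash
+   # Prints the certificate subject, for example: subject=CN = my-x509-device
+   openssl x509 -in device-cert.pem -noout -subject
+   ```
+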
- ```bash
- $export X509_CERT_FILE=./python-device.pem
- $export X509_KEY_FILE=./python-device.key.pem
- $export PASS_PHRASE=1234
+1. Set the environment variable for the registration ID as follows:
+
+ ```cmd
+ set DPS_X509_REGISTRATION_ID=my-x509-device
```
-7. Review the code for [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/azure-iot-device/samples/async-hub-scenarios/provision_x509.py). If you're not using **Python version 3.7** or later, make the [code change mentioned here](https://github.com/Azure/azure-iot-sdk-python/tree/main/azure-iot-device/samples/async-hub-scenarios#advanced-iot-hub-scenario-samples-for-the-azure-iot-hub-device-sdk) to replace `asyncio.run(main())`.
+1. Set the environment variables for the certificate file, private key file, and pass phrase.
+
+ ```cmd
+ set X509_CERT_FILE=./device-cert.pem
+ set X509_KEY_FILE=./device-key.pem
+ set PASS_PHRASE=1234
+ ```
-8. Save your changes.
+1. Review the code for [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/azure-iot-device/samples/async-hub-scenarios/provision_x509.py). If you're not using **Python version 3.7** or later, make the [code change mentioned here](https://github.com/Azure/azure-iot-sdk-python/tree/main/azure-iot-device/samples/async-hub-scenarios#advanced-iot-hub-scenario-samples-for-the-azure-iot-hub-device-sdk) to replace `asyncio.run(main())`.
-9. Run the sample. The sample will connect, provision the device to a hub, and send some test messages to the hub.
+1. Save your changes.
- ```bash
- $ winpty python azure-iot-sdk-python/azure-iot-device/samples/async-hub-scenarios/provision_x509.py
+1. Run the sample. The sample will connect to DPS, which will provision the device to an IoT hub. After the device is provisioned, the sample will send some test messages to the IoT hub.
+
+ ```cmd
+   python provision_x509.py
RegistrationStage(RequestAndResponseOperation): Op will transition into polling after interval 2. Setting timer.
The complete registration result is
- Python-device-01
+ my-x509-device
TestHub12345.azure-devices.net
initialAssignment
null
@@ -687,98 +735,153 @@ The Python provisioning sample, [provision_x509.py](https://github.com/Azure/azu
::: zone pivot="programming-language-java"
+In this section, you'll use both your Windows command prompt and your Git Bash prompt.
+
1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
-2. Copy the **_ID Scope_** and **Global device endpoint** values.
+1. Copy the **ID Scope** and **Global device endpoint** values.
- :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png" alt-text="Copy ID Scope from the portal.":::
+ :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png" alt-text="Screenshot of the I D scope and global device endpoint on Azure portal.":::
-3. Open a command prompt. Navigate to the sample project folder of the Java SDK repository.
+1. In your Windows command prompt, navigate to the sample project folder. The path shown is relative to the location where you cloned the SDK.
- ```cmd/sh
- cd azure-iot-sdk-java/provisioning/provisioning-samples/provisioning-X509-sample
+ ```cmd
+ cd .\azure-iot-sdk-java\provisioning\provisioning-samples\provisioning-X509-sample
```
-4. Enter the provisioning service and X.509 identity information in your code. This is used during provisioning, for attestation of the simulated device, prior to device registration:
+1. Enter the provisioning service and X.509 identity information in the sample code. This information is used during provisioning to attest the simulated device before device registration.
- * Edit the file `/src/main/java/samples/com/microsoft/azure/sdk/iot/ProvisioningX509Sample.java`, to include your _ID Scope_ and _Provisioning Service Global Endpoint_ as noted previously. Also include _Client Cert_ and _Client Cert Private Key_ as noted in the previous section.
+   1. Open the file `.\src\main\java\samples\com\microsoft\azure\sdk\iot\ProvisioningX509Sample.java` in your favorite editor.
- ```java
- private static final String idScope = "[Your ID scope here]";
- private static final String globalEndpoint = "[Your Provisioning Service Global Endpoint here]";
- private static final ProvisioningDeviceClientTransportProtocol PROVISIONING_DEVICE_CLIENT_TRANSPORT_PROTOCOL = ProvisioningDeviceClientTransportProtocol.HTTPS;
- private static final String leafPublicPem = "";
- private static final String leafPrivateKey = "";
- ```
+ 1. Update the following values with the **ID Scope** and **Provisioning Service Global Endpoint** that you copied previously.
- * Use the following format when copying/pasting your certificate and private key:
+ ```java
+ private static final String idScope = "[Your ID scope here]";
+ private static final String globalEndpoint = "[Your Provisioning Service Global Endpoint here]";
+ private static final ProvisioningDeviceClientTransportProtocol PROVISIONING_DEVICE_CLIENT_TRANSPORT_PROTOCOL = ProvisioningDeviceClientTransportProtocol.HTTPS;
- ```java
- private static final String leafPublicPem = "-----BEGIN CERTIFICATE-----\n" +
- "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\n" +
- "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\n" +
- "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\n" +
- "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\n" +
- "+XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\n" +
- "-----END CERTIFICATE-----\n";
- private static final String leafPrivateKey = "-----BEGIN PRIVATE KEY-----\n" +
- "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\n" +
- "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\n" +
- "XXXXXXXXXX\n" +
- "-----END PRIVATE KEY-----\n";
- ```
+   1. Update the value of the `leafPublicPem` string constant with the value of your certificate, *device-cert.pem*.
-5. Build the sample, and then go to the `target` folder and execute the created .jar file.
+ The syntax of certificate text must follow the pattern below with no extra spaces or characters.
- ```cmd/sh
- mvn clean install
- cd target
- java -jar ./provisioning-x509-sample-{version}-with-deps.jar
- ```
+ ```java
+ private static final String leafPublicPem = "-----BEGIN CERTIFICATE-----\n" +
+ "MIIFOjCCAyKgAwIBAgIJAPzMa6s7mj7+MA0GCSqGSIb3DQEBCwUAMCoxKDAmBgNV\n" +
+ ...
+ "MDMwWhcNMjAxMTIyMjEzMDMwWjAqMSgwJgYDVQQDDB9BenVyZSBJb1QgSHViIENB\n" +
+ "-----END CERTIFICATE-----";
+ ```
-::: zone-end
+ Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `leafPublicPem` string constant value and write it to the output.
-## Confirm your device provisioning registration
+ ```Bash
+ sed 's/^/"/;$ !s/$/\\n" +/;$ s/$/"/' device-cert.pem
+ ```
-1. Go to the [Azure portal](https://portal.azure.com).
+ Copy and paste the output certificate text for the constant value.
-2. On the left-hand menu or on the portal page, select **All resources**.
+ 1. Update the string value of the `leafPrivateKey` constant with the unencrypted private key for your device certificate, *unencrypted-device-key.pem*.
+
+ The syntax of the private key text must follow the pattern below with no extra spaces or characters.
-3. Select the IoT hub to which your device was assigned.
+ ```java
+ private static final String leafPrivateKey = "-----BEGIN PRIVATE KEY-----\n" +
+ "MIIJJwIBAAKCAgEAtjvKQjIhp0EE1PoADL1rfF/W6v4vlAzOSifKSQsaPeebqg8U\n" +
+ ...
+ "X7fi9OZ26QpnkS5QjjPTYI/wwn0J9YAwNfKSlNeXTJDfJ+KpjXBcvaLxeBQbQhij\n" +
+ "-----END PRIVATE KEY-----";
+ ```
-4. In the **Explorers** menu, select **IoT Devices**.
+ Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `leafPrivateKey` string constant value and write it to the output.
-5. If your device was provisioned successfully, the device ID should appear in the list, with **Status** set as *enabled*. If you don't see your device, select **Refresh** at the top of the page.
+ ```Bash
+ sed 's/^/"/;$ !s/$/\\n" +/;$ s/$/"/' unencrypted-device-key.pem
+ ```
- :::zone pivot="programming-language-ansi-c"
+ Copy and paste the output private key text for the constant value.
- :::image type="content" source="./media/quick-create-simulated-device-x509/hub-registration.png" alt-text="Device is registered with the IoT hub":::
+ 1. Save your changes.
- ::: zone-end
- :::zone pivot="programming-language-csharp"
+1. Build the sample, and then go to the `target` folder.
- :::image type="content" source="./media/quick-create-simulated-device-x509/hub-registration-csharp.png" alt-text="CSharp device is registered with the IoT hub":::
+ ```cmd
+ mvn clean install
+ cd target
+ ```
+
+1. The build outputs a .jar file in the `target` folder with the following file name format: `provisioning-x509-sample-{version}-with-deps.jar`; for example: `provisioning-x509-sample-1.8.1-with-deps.jar`. Execute the .jar file. You may need to replace the version in the command below.
+
+ ```cmd
+ java -jar ./provisioning-x509-sample-1.8.1-with-deps.jar
+ ```
+
+ The sample will connect to DPS, which will provision the device to an IoT hub. After the device is provisioned, the sample will send some test messages to the IoT hub.
+
+ ```output
+ Starting...
+ Beginning setup.
+ WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
+ 2022-05-11 09:42:05,025 DEBUG (main) [com.microsoft.azure.sdk.iot.provisioning.device.ProvisioningDeviceClient] - Initialized a ProvisioningDeviceClient instance using SDK version 2.0.0
+ 2022-05-11 09:42:05,027 DEBUG (main) [com.microsoft.azure.sdk.iot.provisioning.device.ProvisioningDeviceClient] - Starting provisioning thread...
+ Waiting for Provisioning Service to register
+ 2022-05-11 09:42:05,030 INFO (global.azure-devices-provisioning.net-6255a8ba-CxnPendingConnectionId-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Opening the connection to device provisioning service...
+ 2022-05-11 09:42:05,252 INFO (global.azure-devices-provisioning.net-6255a8ba-Cxn6255a8ba-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Connection to device provisioning service opened successfully, sending initial device registration message
+ 2022-05-11 09:42:05,286 INFO (global.azure-devices-provisioning.net-6255a8ba-Cxn6255a8ba-azure-iot-sdk-RegisterTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.RegisterTask] - Authenticating with device provisioning service using x509 certificates
+ 2022-05-11 09:42:06,083 INFO (global.azure-devices-provisioning.net-6255a8ba-Cxn6255a8ba-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Waiting for device provisioning service to provision this device...
+ 2022-05-11 09:42:06,083 INFO (global.azure-devices-provisioning.net-6255a8ba-Cxn6255a8ba-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Current provisioning status: ASSIGNING
+ Waiting for Provisioning Service to register
+ 2022-05-11 09:42:15,685 INFO (global.azure-devices-provisioning.net-6255a8ba-Cxn6255a8ba-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Device provisioning service assigned the device successfully
+ IotHUb Uri : MyExampleHub.azure-devices.net
+ Device ID : java-device-01
+ 2022-05-11 09:42:25,057 INFO (main) [com.microsoft.azure.sdk.iot.device.transport.ExponentialBackoffWithJitter] - NOTE: A new instance of ExponentialBackoffWithJitter has been created with the following properties. Retry Count: 2147483647, Min Backoff Interval: 100, Max Backoff Interval: 10000, Max Time Between Retries: 100, Fast Retry Enabled: true
+ 2022-05-11 09:42:25,080 INFO (main) [com.microsoft.azure.sdk.iot.device.transport.ExponentialBackoffWithJitter] - NOTE: A new instance of ExponentialBackoffWithJitter has been created with the following properties. Retry Count: 2147483647, Min Backoff Interval: 100, Max Backoff Interval: 10000, Max Time Between Retries: 100, Fast Retry Enabled: true
+ 2022-05-11 09:42:25,087 DEBUG (main) [com.microsoft.azure.sdk.iot.device.DeviceClient] - Initialized a DeviceClient instance using SDK version 2.0.3
+ 2022-05-11 09:42:25,129 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.MqttIotHubConnection] - Opening MQTT connection...
+ 2022-05-11 09:42:25,150 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.Mqtt] - Sending MQTT CONNECT packet...
+ 2022-05-11 09:42:25,982 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.Mqtt] - Sent MQTT CONNECT packet was acknowledged
+ 2022-05-11 09:42:25,983 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.Mqtt] - Sending MQTT SUBSCRIBE packet for topic devices/java-device-01/messages/devicebound/#
+ 2022-05-11 09:42:26,068 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.Mqtt] - Sent MQTT SUBSCRIBE packet for topic devices/java-device-01/messages/devicebound/# was acknowledged
+ 2022-05-11 09:42:26,068 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.MqttIotHubConnection] - MQTT connection opened successfully
+ 2022-05-11 09:42:26,070 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - The connection to the IoT Hub has been established
+ 2022-05-11 09:42:26,071 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Updating transport status to new status CONNECTED with reason CONNECTION_OK
+ 2022-05-11 09:42:26,071 DEBUG (main) [com.microsoft.azure.sdk.iot.device.DeviceIO] - Starting worker threads
+ 2022-05-11 09:42:26,073 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Invoking connection status callbacks with new status details
+ 2022-05-11 09:42:26,074 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Client connection opened successfully
+ 2022-05-11 09:42:26,075 INFO (main) [com.microsoft.azure.sdk.iot.device.DeviceClient] - Device client opened successfully
+ Sending message from device to IoT Hub...
+ 2022-05-11 09:42:26,077 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Message was queued to be sent later ( Message details: Correlation Id [54d9c6b5-3da9-49fe-9343-caa6864f9a02] Message Id [28069a3d-f6be-4274-a48b-1ee539524eeb] )
+ Press any key to exit...
+ 2022-05-11 09:42:26,079 DEBUG (MyExampleHub.azure-devices.net-java-device-01-ee6c362d-Cxn7a1fb819-e46d-4658-9b03-ca50c88c0440-azure-iot-sdk-IotHubSendTask) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Sending message ( Message details: Correlation Id [54d9c6b5-3da9-49fe-9343-caa6864f9a02] Message Id [28069a3d-f6be-4274-a48b-1ee539524eeb] )
+ 2022-05-11 09:42:26,422 DEBUG (MQTT Call: java-device-01) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - IotHub message was acknowledged. Checking if there is record of sending this message ( Message details: Correlation Id [54d9c6b5-3da9-49fe-9343-caa6864f9a02] Message Id [28069a3d-f6be-4274-a48b-1ee539524eeb] )
+ 2022-05-11 09:42:26,425 DEBUG (MyExampleHub.azure-devices.net-java-device-01-ee6c362d-Cxn7a1fb819-e46d-4658-9b03-ca50c88c0440-azure-iot-sdk-IotHubSendTask) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Invoking the callback function for sent message, IoT Hub responded to message ( Message details: Correlation Id [54d9c6b5-3da9-49fe-9343-caa6864f9a02] Message Id [28069a3d-f6be-4274-a48b-1ee539524eeb] ) with status OK
+ Message sent!
+ ```
+
+::: zone-end
+
+## Confirm your device provisioning registration
- ::: zone-end
+To see which IoT hub your device was provisioned to, examine the registration details of the individual enrollment you created previously:
- :::zone pivot="programming-language-nodejs"
+1. In Azure portal, go to your Device Provisioning Service.
- :::image type="content" source="./media/quick-create-simulated-device-x509/hub-registration-nodejs.png" alt-text="Node.js device is registered with the IoT hub":::
+1. In the **Settings** menu, select **Manage enrollments**.
- ::: zone-end
+1. Select **Individual Enrollments**. The X.509 enrollment entry that you created previously, *my-x509-device*, should appear in the list.
- :::zone pivot="programming-language-python"
+1. Select the enrollment entry. The IoT hub that your device was assigned to and its device ID appears under **Registration Status**.
- :::image type="content" source="./media/quick-create-simulated-device-x509/hub-registration-python.png" alt-text="Python device is registered with the IoT hub":::
+ :::image type="content" source="./media/quick-create-simulated-device-x509/individual-enrollment-after-registration.png" alt-text="Screenshot that shows the individual enrollment registration status tab for the device on Azure portal.":::
- ::: zone-end
+To verify the device on your IoT hub:
- ::: zone pivot="programming-language-java"
+1. In Azure portal, go to the IoT hub that your device was assigned to.
- :::image type="content" source="./media/quick-create-simulated-device-x509/hub-registration-java.png" alt-text="Java device is registered with the IoT hub":::
+1. In the **Device management** menu, select **Devices**.
- ::: zone-end
+1. If your device was provisioned successfully, its device ID, *my-x509-device*, should appear in the list, with **Status** set as *enabled*. If you don't see your device, select **Refresh**.
+ :::image type="content" source="./media/quick-create-simulated-device-x509/iot-hub-registration.png" alt-text="Screenshot that shows the device is registered with the I o T hub in Azure portal.":::
::: zone pivot="programming-language-csharp,programming-language-nodejs,programming-language-python,programming-language-java"
diff --git a/articles/iot-hub/iot-hub-devguide-device-twins.md b/articles/iot-hub/iot-hub-devguide-device-twins.md
index fb25bb18d2a7d..c27179d909bfc 100644
--- a/articles/iot-hub/iot-hub-devguide-device-twins.md
+++ b/articles/iot-hub/iot-hub-devguide-device-twins.md
@@ -364,8 +364,6 @@ This information is kept at every level (not just the leaves of the JSON structu
Tags, desired properties, and reported properties all support optimistic concurrency. If you need to guarantee order of twin property updates, consider implementing synchronization at the application level by waiting for reported properties callback before sending the next update.
-Tags have an ETag, as per [RFC7232](https://tools.ietf.org/html/rfc7232), that represents the tag's JSON representation. You can use ETags in conditional update operations from the solution back end to ensure consistency.
-
Device twins have an ETag (`etag` property), as per [RFC7232](https://tools.ietf.org/html/rfc7232), that represents the twin's JSON representation. You can use the `etag` property in conditional update operations from the solution back end to ensure consistency. This is the only option for ensuring consistency in operations that involve the `tags` container.
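+
+As an illustration, the following sketch uses the IoT Hub REST API to perform a conditional update: the `If-Match` header carries the ETag, and the service rejects the update with status 412 if the twin has changed since the ETag was read. The hub name, device ID, ETag, and SAS token are placeholders, and the `api-version` your hub accepts may differ.
+
+```bash
+# Hypothetical sketch: update desired properties only if the twin is unchanged.
+# A mismatched ETag causes IoT Hub to return 412 (Precondition Failed).
+curl -X PATCH \
+  "https://{your-hub}.azure-devices.net/twins/{device-id}?api-version=2021-04-12" \
+  -H "Authorization: {SAS token}" \
+  -H "If-Match: \"{etag}\"" \
+  -H "Content-Type: application/json" \
+  -d '{"properties":{"desired":{"telemetryInterval":30}}}'
+```
+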
Device twin desired and reported properties also have a `$version` value that is guaranteed to be incremental. Similarly to an ETag, the version can be used by the updating party to enforce consistency of updates. For example, a device app for a reported property or the solution back end for a desired property.
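+
+As a hedged sketch of a conditional update from the solution back end, here's how the `etag` can be passed with the Python service SDK (`azure-iot-hub`); the device ID, connection string, and desired property are illustrative:
+
+```python
+# pip install azure-iot-hub
+from azure.iot.hub import IoTHubRegistryManager
+from azure.iot.hub.models import Twin, TwinProperties
+
+registry_manager = IoTHubRegistryManager("<service-connection-string>")  # placeholder
+
+twin = registry_manager.get_twin("my-device")
+patch = Twin(properties=TwinProperties(desired={"telemetryInterval": 30}))
+
+# Passing the current etag makes the update conditional: it fails if the twin
+# changed since it was read, instead of silently overwriting the newer state.
+registry_manager.update_twin("my-device", patch, twin.etag)
+```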
diff --git a/articles/iot-hub/iot-hub-devguide-module-twins.md b/articles/iot-hub/iot-hub-devguide-module-twins.md
index 580c815af5ab9..3f91a5ce99d2c 100644
--- a/articles/iot-hub/iot-hub-devguide-module-twins.md
+++ b/articles/iot-hub/iot-hub-devguide-module-twins.md
@@ -345,7 +345,7 @@ This information is kept at every level (not just the leaves of the JSON structu
## Optimistic concurrency
-Tags, desired, and reported properties all support optimistic concurrency.
+Tags, desired properties, and reported properties all support optimistic concurrency. If you need to guarantee order of twin property updates, consider implementing synchronization at the application level by waiting for reported properties callback before sending the next update.
Module twins have an ETag (`etag` property), as per [RFC7232](https://tools.ietf.org/html/rfc7232), that represents the twin's JSON representation. You can use the `etag` property in conditional update operations from the solution back end to ensure consistency. This is the only option for ensuring consistency in operations that involve the `tags` container.
diff --git a/articles/key-vault/general/security-features.md b/articles/key-vault/general/security-features.md
index ed1e82c06593b..c645f4a820c71 100755
--- a/articles/key-vault/general/security-features.md
+++ b/articles/key-vault/general/security-features.md
@@ -38,7 +38,7 @@ Azure Private Link Service enables you to access Azure Key Vault and Azure hoste
- Despite known vulnerabilities in TLS protocol, there is no known attack that would allow a malicious agent to extract any information from your key vault when the attacker initiates a connection with a TLS version that has vulnerabilities. The attacker would still need to authenticate and authorize itself, and as long as legitimate clients always connect with recent TLS versions, there is no way that credentials could have been leaked from vulnerabilities at old TLS versions.
> [!NOTE]
-> For Azure Key Vault, ensure that the application accessing the Keyvault service should be running on a platform that supports TLS 1.2 or recent version. If the application is dependent on .Net framework, it should be updated as well. You can also make the registry changes mentioned in [this article](/troubleshoot/azure/active-directory/enable-support-tls-environment) to explicitly enable the use of TLS 1.2 at OS level and for .Net framework. To meet with compliance obligations and to improve security posture, Key Vault connections via TLS 1.0 & 1.1 will be deprecated starting on 31st May 2022 and disallowed later in the future.
+> For Azure Key Vault, ensure that the application accessing the Key Vault service runs on a platform that supports TLS 1.2 or a more recent version. If the application depends on the .NET Framework, it should be updated as well. You can also make the registry changes mentioned in [this article](/troubleshoot/azure/active-directory/enable-support-tls-environment) to explicitly enable the use of TLS 1.2 at the OS level and for the .NET Framework. To meet compliance obligations and to improve security posture, Key Vault connections via TLS 1.0 and 1.1 are considered a security risk, and any connections using old TLS protocols will be disallowed in 2023.
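+
+One way to sanity-check that a client platform negotiates TLS 1.2 or later with your vault endpoint is a quick probe like the following sketch (the vault hostname is a placeholder):
+
+```python
+import socket
+import ssl
+
+HOST = "contoso-vault.vault.azure.net"  # placeholder vault hostname
+
+context = ssl.create_default_context()
+context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older
+
+with socket.create_connection((HOST, 443)) as sock:
+    with context.wrap_socket(sock, server_hostname=HOST) as tls:
+        # Prints the negotiated protocol, for example "TLSv1.2" or "TLSv1.3".
+        print(tls.version())
+```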
## Key Vault authentication options
diff --git a/articles/key-vault/secrets/tutorial-rotation-dual.md b/articles/key-vault/secrets/tutorial-rotation-dual.md
index 73f98f249a9b8..1089f4702ad4d 100644
--- a/articles/key-vault/secrets/tutorial-rotation-dual.md
+++ b/articles/key-vault/secrets/tutorial-rotation-dual.md
@@ -19,7 +19,7 @@ The best way to authenticate to Azure services is by using a [managed identity](
This tutorial shows how to automate the periodic rotation of secrets for databases and services that use two sets of authentication credentials. Specifically, this tutorial shows how to rotate Azure Storage account keys stored in Azure Key Vault as secrets. You'll use a function triggered by Azure Event Grid notification.
> [!NOTE]
-> Storage account keys can be automatically managed in Key Vault if you provide shared access signature tokens for delegated access to the storage account. There are services that require storage account connection strings with access keys. For that scenario, we recommend this solution.
+> For Azure Storage services, we recommend using Azure Active Directory to authorize requests. For more information, see [Authorize access to blobs using Azure Active Directory](../../storage/blobs/authorize-access-azure-active-directory.md). However, some services require storage account connection strings with access keys. For that scenario, we recommend this solution.
Here's the rotation solution described in this tutorial:
@@ -38,6 +38,9 @@ In this solution, Azure Key Vault stores storage account individual access keys
* Azure Key Vault.
* Two Azure storage accounts.
+> [!NOTE]
+> Rotating a shared storage account key revokes any account-level shared access signature (SAS) tokens generated from that key. After a storage account key rotation, you must regenerate account-level SAS tokens to avoid disruptions to applications.
+
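+Because regenerated SAS tokens must come from the new key, here's a hedged sketch with the `azure-storage-blob` SDK; the account name, key source, permissions, and lifetime are all placeholders:
+
+```python
+# pip install azure-storage-blob
+from datetime import datetime, timedelta, timezone
+
+from azure.storage.blob import (
+    AccountSasPermissions,
+    ResourceTypes,
+    generate_account_sas,
+)
+
+account_name = "mystorageaccount"          # placeholder
+new_account_key = "<key1-after-rotation>"  # placeholder: read the rotated key, for example from Key Vault
+
+# Generate a fresh account-level SAS from the newly rotated key.
+sas_token = generate_account_sas(
+    account_name=account_name,
+    account_key=new_account_key,
+    resource_types=ResourceTypes(service=True, container=True, object=True),
+    permission=AccountSasPermissions(read=True, list=True),
+    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
+)
+```
+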
You can use this deployment link if you don't have an existing key vault and existing storage accounts:
[![Link that's labelled Deploy to Azure.](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FKeyVault-Rotation-StorageAccountKey-PowerShell%2Fmaster%2FARM-Templates%2FInitial-Setup%2Fazuredeploy.json)
diff --git a/articles/lab-services/how-to-create-schedules-within-canvas.md b/articles/lab-services/how-to-create-schedules-within-canvas.md
index 44ac3df8f15bf..cd8a44369f30d 100644
--- a/articles/lab-services/how-to-create-schedules-within-canvas.md
+++ b/articles/lab-services/how-to-create-schedules-within-canvas.md
@@ -19,7 +19,7 @@ Here is how schedules affect lab VM:
The scheduled running time of VMs does not count against the [quota](classroom-labs-concepts.md#quota) given to a user. The quota is for the time outside of schedule hours that a student spends on VMs.
-Educators can create, edit, and delete lab schedules within Canvas as in the Azure Lab Services portal. For more information on scheduling, see [Creating and managing schedules](how-to-create-schedules-within-canvas.md).
+Educators can create, edit, and delete lab schedules within Canvas as in the Azure Lab Services portal. For more information on scheduling, see [Creating and managing schedules](how-to-create-schedules.md).
> [!IMPORTANT]
> Schedules will apply at the course level. If you have many sections of a course, consider using [automatic shutdown policies](how-to-configure-auto-shutdown-lab-plans.md) and/or [quota hours](how-to-configure-student-usage.md#set-quotas-for-users).
diff --git a/articles/lab-services/how-to-create-schedules-within-teams.md b/articles/lab-services/how-to-create-schedules-within-teams.md
index 21e7a8988bc30..3f62a6e49e3d1 100644
--- a/articles/lab-services/how-to-create-schedules-within-teams.md
+++ b/articles/lab-services/how-to-create-schedules-within-teams.md
@@ -19,7 +19,7 @@ Here's how schedules affect lab virtual machines:
> [!IMPORTANT]
> The scheduled run time of VMs doesn't count against the quota allotted to a user. The allotted quota is for the time outside of schedule hours that a student spends on VMs.
-Users can create, edit, and delete lab schedules within Teams as in the Lab Services web portal: [https://labs.azure.com](https://labs.azure.com). For more information, see [creating and managing schedules](how-to-create-schedules-within-teams.md).
+Users can create, edit, and delete lab schedules within Teams as in the Lab Services web portal: [https://labs.azure.com](https://labs.azure.com). For more information, see [creating and managing schedules](how-to-create-schedules.md).
## Automatic shutdown and disconnect settings
diff --git a/articles/load-testing/index.yml b/articles/load-testing/index.yml
index 3faf39d69b49c..ec771846dddab 100644
--- a/articles/load-testing/index.yml
+++ b/articles/load-testing/index.yml
@@ -106,5 +106,7 @@ landingContent:
linkLists:
- linkListType: reference
links:
+ - text: REST API
+ url: /rest/api/loadtesting/
- text: Test configuration YAML
url: reference-test-config-yaml.md
diff --git a/articles/load-testing/toc.yml b/articles/load-testing/toc.yml
index 5db1a9e865643..c5a276ff6fe29 100644
--- a/articles/load-testing/toc.yml
+++ b/articles/load-testing/toc.yml
@@ -65,6 +65,8 @@
href: monitor-load-testing.md
- name: Reference
items:
+ - name: REST API
+ href: /rest/api/loadtesting/
- name: Test configuration YAML
href: reference-test-config-yaml.md
- name: Monitor data reference
diff --git a/articles/logic-apps/logic-apps-scenario-social-serverless.md b/articles/logic-apps/logic-apps-scenario-social-serverless.md
index 2380e38fdbe8c..9aa0ab485141b 100644
--- a/articles/logic-apps/logic-apps-scenario-social-serverless.md
+++ b/articles/logic-apps/logic-apps-scenario-social-serverless.md
@@ -3,8 +3,6 @@ title: Create customer insights dashboard
description: Manage customer feedback, social media data, and more by building a customer dashboard with Azure Logic Apps and Azure Functions.
services: logic-apps
ms.suite: integration
-author: jeffhollan
-ms.author: jehollan
ms.reviewer: estfan, azla
ms.topic: how-to
ms.date: 03/15/2018
diff --git a/articles/machine-learning/component-reference/import-data.md b/articles/machine-learning/component-reference/import-data.md
index be4c53c534b3d..db87ed03eb828 100644
--- a/articles/machine-learning/component-reference/import-data.md
+++ b/articles/machine-learning/component-reference/import-data.md
@@ -19,7 +19,7 @@ This article describes a component in Azure Machine Learning designer.
Use this component to load data into a machine learning pipeline from existing cloud data services.
> [!Note]
-> All functionality provided by this component can be done by **datastore** and **datasets** in the worksapce landing page. We recommend you use **datastore** and **dataset** which includes additional features like data monitoring. To learn more, see [How to Access Data](../v1/how-to-access-data.md) and [How to Register Datasets](../v1/how-to-create-register-datasets.md) article.
+> All functionality provided by this component is available through **datastores** and **datasets** on the workspace landing page. We recommend that you use **datastores** and **datasets**, which include additional features like data monitoring. To learn more, see the [How to Access Data](../v1/how-to-access-data.md) and [How to Register Datasets](../v1/how-to-create-register-datasets.md) articles.
> After you register a dataset, you can find it in the **Datasets** -> **My Datasets** category in the designer interface. This component is reserved for Studio (classic) users, to provide a familiar experience.
>
diff --git a/articles/machine-learning/media/concept-component/archive-component.png b/articles/machine-learning/media/concept-component/archive-component.png
index 476facba7561b..538deadc540c8 100644
Binary files a/articles/machine-learning/media/concept-component/archive-component.png and b/articles/machine-learning/media/concept-component/archive-component.png differ
diff --git a/articles/machine-learning/media/concept-component/ui-create-component.png b/articles/machine-learning/media/concept-component/ui-create-component.png
index a34606fc198f6..51255dd21ceda 100644
Binary files a/articles/machine-learning/media/concept-component/ui-create-component.png and b/articles/machine-learning/media/concept-component/ui-create-component.png differ
diff --git a/articles/machine-learning/media/concept-environments/ml-environment.png b/articles/machine-learning/media/concept-environments/ml-environment.png
index 2e51a515bab85..0670c661b21ea 100644
Binary files a/articles/machine-learning/media/concept-environments/ml-environment.png and b/articles/machine-learning/media/concept-environments/ml-environment.png differ
diff --git a/articles/machine-learning/media/concept-workspace/azure-machine-learning-taxonomy.png b/articles/machine-learning/media/concept-workspace/azure-machine-learning-taxonomy.png
index a460ba4e00450..27755b83e548c 100644
Binary files a/articles/machine-learning/media/concept-workspace/azure-machine-learning-taxonomy.png and b/articles/machine-learning/media/concept-workspace/azure-machine-learning-taxonomy.png differ
diff --git a/articles/machine-learning/media/how-to-auto-train-forecast/enable_dnn.png b/articles/machine-learning/media/how-to-auto-train-forecast/enable_dnn.png
index 0ccbe7fb413e0..31b3d12332389 100644
Binary files a/articles/machine-learning/media/how-to-auto-train-forecast/enable_dnn.png and b/articles/machine-learning/media/how-to-auto-train-forecast/enable_dnn.png differ
diff --git a/articles/machine-learning/media/how-to-autoscale-endpoints/select-endpoint.png b/articles/machine-learning/media/how-to-autoscale-endpoints/select-endpoint.png
index c702d0c88ff10..de7f4ff9af2b1 100644
Binary files a/articles/machine-learning/media/how-to-autoscale-endpoints/select-endpoint.png and b/articles/machine-learning/media/how-to-autoscale-endpoints/select-endpoint.png differ
diff --git a/articles/machine-learning/media/how-to-create-attach-studio/compute-nodes.png b/articles/machine-learning/media/how-to-create-attach-studio/compute-nodes.png
index 5541fe7ca0f4f..1e54d00243bc6 100644
Binary files a/articles/machine-learning/media/how-to-create-attach-studio/compute-nodes.png and b/articles/machine-learning/media/how-to-create-attach-studio/compute-nodes.png differ
diff --git a/articles/machine-learning/media/how-to-create-attach-studio/details.png b/articles/machine-learning/media/how-to-create-attach-studio/details.png
index 2b0b26f5e6764..027e795103013 100644
Binary files a/articles/machine-learning/media/how-to-create-attach-studio/details.png and b/articles/machine-learning/media/how-to-create-attach-studio/details.png differ
diff --git a/articles/machine-learning/media/how-to-create-attach-studio/view-compute-targets.png b/articles/machine-learning/media/how-to-create-attach-studio/view-compute-targets.png
index 28e1b6704e7ef..45c3ba96cbb36 100644
Binary files a/articles/machine-learning/media/how-to-create-attach-studio/view-compute-targets.png and b/articles/machine-learning/media/how-to-create-attach-studio/view-compute-targets.png differ
diff --git a/articles/machine-learning/media/how-to-deploy-mlflow-models-online-endpoints/create-from-endpoints.png b/articles/machine-learning/media/how-to-deploy-mlflow-models-online-endpoints/create-from-endpoints.png
index 2e3efc05a5d57..8a35d3d8be7e5 100644
Binary files a/articles/machine-learning/media/how-to-deploy-mlflow-models-online-endpoints/create-from-endpoints.png and b/articles/machine-learning/media/how-to-deploy-mlflow-models-online-endpoints/create-from-endpoints.png differ
diff --git a/articles/machine-learning/media/how-to-deploy-mlflow-models-online-endpoints/deploy-from-models-ui.png b/articles/machine-learning/media/how-to-deploy-mlflow-models-online-endpoints/deploy-from-models-ui.png
index 67c8f519b515d..92bbe5c3783f3 100644
Binary files a/articles/machine-learning/media/how-to-deploy-mlflow-models-online-endpoints/deploy-from-models-ui.png and b/articles/machine-learning/media/how-to-deploy-mlflow-models-online-endpoints/deploy-from-models-ui.png differ
diff --git a/articles/machine-learning/media/how-to-deploy-mlflow-models-online-endpoints/ncd-wizard.png b/articles/machine-learning/media/how-to-deploy-mlflow-models-online-endpoints/ncd-wizard.png
index cf0a09f891e48..2b84f3adf27dc 100644
Binary files a/articles/machine-learning/media/how-to-deploy-mlflow-models-online-endpoints/ncd-wizard.png and b/articles/machine-learning/media/how-to-deploy-mlflow-models-online-endpoints/ncd-wizard.png differ
diff --git a/articles/machine-learning/media/how-to-deploy-model-designer/download-artifacts-in-models-page.png b/articles/machine-learning/media/how-to-deploy-model-designer/download-artifacts-in-models-page.png
index 708b0f370aa05..8d79a0851a635 100644
Binary files a/articles/machine-learning/media/how-to-deploy-model-designer/download-artifacts-in-models-page.png and b/articles/machine-learning/media/how-to-deploy-model-designer/download-artifacts-in-models-page.png differ
diff --git a/articles/machine-learning/media/how-to-deploy-model-designer/open-deploy-wizard.png b/articles/machine-learning/media/how-to-deploy-model-designer/open-deploy-wizard.png
index 9fd0b43e980f4..952ed691a9eaa 100644
Binary files a/articles/machine-learning/media/how-to-deploy-model-designer/open-deploy-wizard.png and b/articles/machine-learning/media/how-to-deploy-model-designer/open-deploy-wizard.png differ
diff --git a/articles/machine-learning/media/how-to-deploy-with-triton/deploy-from-models-page.png b/articles/machine-learning/media/how-to-deploy-with-triton/deploy-from-models-page.png
index a5d251a0b76ce..7485f5c148a66 100644
Binary files a/articles/machine-learning/media/how-to-deploy-with-triton/deploy-from-models-page.png and b/articles/machine-learning/media/how-to-deploy-with-triton/deploy-from-models-page.png differ
diff --git a/articles/machine-learning/media/how-to-enable-studio-virtual-network/default-datastores.png b/articles/machine-learning/media/how-to-enable-studio-virtual-network/default-datastores.png
index da132d47928bf..ed5c981f3bdd6 100644
Binary files a/articles/machine-learning/media/how-to-enable-studio-virtual-network/default-datastores.png and b/articles/machine-learning/media/how-to-enable-studio-virtual-network/default-datastores.png differ
diff --git a/articles/machine-learning/media/how-to-enable-studio-virtual-network/enable-managed-identity.png b/articles/machine-learning/media/how-to-enable-studio-virtual-network/enable-managed-identity.png
index da31a0af6af66..57e93f34106e8 100644
Binary files a/articles/machine-learning/media/how-to-enable-studio-virtual-network/enable-managed-identity.png and b/articles/machine-learning/media/how-to-enable-studio-virtual-network/enable-managed-identity.png differ
diff --git a/articles/machine-learning/media/how-to-export-delete-data/unregister-dataset.png b/articles/machine-learning/media/how-to-export-delete-data/unregister-dataset.png
index 60eb61805922e..16976226b7e77 100644
Binary files a/articles/machine-learning/media/how-to-export-delete-data/unregister-dataset.png and b/articles/machine-learning/media/how-to-export-delete-data/unregister-dataset.png differ
diff --git a/articles/machine-learning/media/how-to-trigger-published-pipeline/scheduled-pipelines.png b/articles/machine-learning/media/how-to-trigger-published-pipeline/scheduled-pipelines.png
index d27d9cb5a6ad4..6d2ca8cacfb6f 100644
Binary files a/articles/machine-learning/media/how-to-trigger-published-pipeline/scheduled-pipelines.png and b/articles/machine-learning/media/how-to-trigger-published-pipeline/scheduled-pipelines.png differ
diff --git a/articles/machine-learning/migrate-rebuild-integrate-with-client-app.md b/articles/machine-learning/migrate-rebuild-integrate-with-client-app.md
index b3621620564ad..05c5101018fb0 100644
--- a/articles/machine-learning/migrate-rebuild-integrate-with-client-app.md
+++ b/articles/machine-learning/migrate-rebuild-integrate-with-client-app.md
@@ -1,14 +1,15 @@
---
-title: 'ML Studio (classic): Migrate to Azure Machine Learning - Consume pipeline endpoints'
-description: Integrate pipeline endpoints with client applications in Azure Machine Learning.
+title: 'Migrate to Azure Machine Learning - Consume pipeline endpoints'
+description: Learn how to integrate pipeline endpoints with client applications in Azure Machine Learning as part of migrating from Machine Learning Studio (Classic).
services: machine-learning
ms.service: machine-learning
ms.subservice: studio-classic
ms.topic: how-to
+ms.custom: kr2b-contr-experiment
author: xiaoharper
ms.author: zhanxia
-ms.date: 03/08/2021
+ms.date: 05/31/2022
---
# Consume pipeline endpoints from client applications
@@ -17,7 +18,7 @@ ms.date: 03/08/2021
In this article, you learn how to integrate client applications with Azure Machine Learning endpoints. For more information on writing application code, see [Consume an Azure Machine Learning endpoint](how-to-consume-web-service.md).
-This article is part of the Studio (classic) to Azure Machine Learning migration series. For more information on migrating to Azure Machine Learning, see [the migration overview article](migrate-overview.md).
+This article is part of the ML Studio (classic) to Azure Machine Learning migration series. For more information on migrating to Azure Machine Learning, see [the migration overview article](migrate-overview.md).
## Prerequisites
@@ -25,10 +26,9 @@ This article is part of the Studio (classic) to Azure Machine Learning migration
- An Azure Machine Learning workspace. [Create an Azure Machine Learning workspace](how-to-manage-workspace.md#create-a-workspace).
- An [Azure Machine Learning real-time endpoint or pipeline endpoint](migrate-rebuild-web-service.md).
+## Consume a real-time endpoint
-## Consume a real-time endpoint
-
-If you deployed your model as a **real-time endpoint**, you can find its REST endpoint, and pre-generated consumption code in C#, Python, and R:
+If you deployed your model as a *real-time endpoint*, you can find its REST endpoint and pre-generated consumption code in C#, Python, and R:
1. Go to Azure Machine Learning studio ([ml.azure.com](https://ml.azure.com)).
1. Go to the **Endpoints** tab.
@@ -38,7 +38,6 @@ If you deployed your model as a **real-time endpoint**, you can find its REST en
> [!NOTE]
> You can also find the Swagger specification for your endpoint in the **Details** tab. Use the Swagger definition to understand your endpoint schema. For more information on Swagger definition, see [Swagger official documentation](https://swagger.io/docs/specification/2-0/what-is-swagger/).
-
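+
+For example, here's a minimal Python sketch of calling a key-authenticated real-time endpoint; the scoring URI, key, and input payload are placeholders that depend on your endpoint's schema:
+
+```python
+import json
+
+import requests
+
+scoring_uri = "https://<endpoint-name>.<region>.inference.ml.azure.com/score"  # placeholder
+api_key = "<endpoint-key>"  # placeholder
+
+headers = {
+    "Content-Type": "application/json",
+    "Authorization": f"Bearer {api_key}",
+}
+
+# The request body must match the schema shown in the endpoint's Swagger definition.
+payload = {"data": [[1.0, 2.0, 3.0]]}
+
+response = requests.post(scoring_uri, headers=headers, data=json.dumps(payload))
+response.raise_for_status()
+print(response.json())
+```
+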
## Consume a pipeline endpoint
There are two ways to consume a pipeline endpoint:
@@ -60,15 +59,14 @@ Call the REST endpoint from your client application. You can use the Swagger spe
You can call your Azure Machine Learning pipeline as a step in an Azure Data Factory pipeline. For more information, see [Execute Azure Machine Learning pipelines in Azure Data Factory](../data-factory/transform-data-machine-learning-service.md).
-
## Next steps
In this article, you learned how to find schema and sample code for your pipeline endpoints. For more information on consuming endpoints from the client application, see [Consume an Azure Machine Learning endpoint](how-to-consume-web-service.md).
-See the rest of the articles in the Azure Machine Learning migration series:
-1. [Migration overview](migrate-overview.md).
-1. [Migrate dataset](migrate-register-dataset.md).
-1. [Rebuild a Studio (classic) training pipeline](migrate-rebuild-experiment.md).
-1. [Rebuild a Studio (classic) web service](migrate-rebuild-web-service.md).
-1. **Integrate an Azure Machine Learning web service with client apps**.
-1. [Migrate Execute R Script](migrate-execute-r-script.md).
\ No newline at end of file
+See the rest of the articles in the Azure Machine Learning migration series:
+
+- [Migration overview](migrate-overview.md).
+- [Migrate dataset](migrate-register-dataset.md).
+- [Rebuild a Studio (classic) training pipeline](migrate-rebuild-experiment.md).
+- [Rebuild a Studio (classic) web service](migrate-rebuild-web-service.md).
+- [Migrate Execute R Script](migrate-execute-r-script.md).
diff --git a/articles/marketplace/azure-resource-manager-test-drive.md b/articles/marketplace/azure-resource-manager-test-drive.md
index 2d3cacf2b7794..027981c14c3b5 100644
--- a/articles/marketplace/azure-resource-manager-test-drive.md
+++ b/articles/marketplace/azure-resource-manager-test-drive.md
@@ -6,7 +6,7 @@ ms.subservice: partnercenter-marketplace-publisher
ms.topic: article
ms.author: trkeya
author: trkeya
-ms.date: 12/06/2021
+ms.date: 06/03/2022
ms.custom: devx-track-azurepowershell, subject-rbac-steps
---
@@ -83,6 +83,9 @@ You can use any valid name for your parameters; test drive recognizes parameter
Test drive initializes this parameter with a **Base Uri** of your deployment package so you can use this parameter to construct a Uri of any file included in your package.
+> [!NOTE]
+> The `baseUri` parameter cannot be used in conjunction with a custom script extension.
+
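+For instance, here's a hedged sketch of referencing `baseUri` to build the URI of another file in your package (the `nested/storage.json` path is hypothetical):
+
+```JSON
+"variables": {
+    "nestedTemplateUri": "[uri(parameters('baseUri'), 'nested/storage.json')]"
+}
+```
+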
```JSON
"parameters": {
...
diff --git a/articles/mysql/flexible-server/concepts-data-in-replication.md b/articles/mysql/flexible-server/concepts-data-in-replication.md
index 3fbd9235fda47..283934922a1d6 100644
--- a/articles/mysql/flexible-server/concepts-data-in-replication.md
+++ b/articles/mysql/flexible-server/concepts-data-in-replication.md
@@ -48,7 +48,7 @@ Modifying the parameter `replicate_wild_ignore_table` used to create replication
- The source server version must be at least MySQL version 5.7.
- Our recommendation is to have the same version for source and replica server versions. For example, both must be MySQL version 5.7 or both must be MySQL version 8.0.
-- Our recommendation is to have a primary key in each table. If we have table without primary key, you might face slowness in replication. To create primary keys for tables you can use [invisible column](https://dev.mysql.com/doc/refman/8.0/en/create-table-gipks.html) if your MySQL version is greater than 8.0.23 `(ALTER TABLE
ADD COLUMN bigint AUTO_INCREMENT INVISIBLE PRIMARY KEY;)`.
+- Our recommendation is to have a primary key in each table. If a table doesn't have a primary key, you might face slower replication.
- The source server should use the MySQL InnoDB engine.
- User must have permissions to configure binary logging and create new users on the source server.
- Binary log files on the source server shouldn't be purged before the replica applies those changes. If the source is Azure Database for MySQL, see how to configure binlog_expire_logs_seconds for [Flexible Server](./concepts-server-parameters.md#binlog_expire_logs_seconds) or [Single Server](../concepts-server-parameters.md#binlog_expire_logs_seconds).
diff --git a/articles/mysql/flexible-server/concepts-high-availability.md b/articles/mysql/flexible-server/concepts-high-availability.md
index 911445bb145d7..93a9bec8413d9 100644
--- a/articles/mysql/flexible-server/concepts-high-availability.md
+++ b/articles/mysql/flexible-server/concepts-high-availability.md
@@ -63,7 +63,7 @@ Automatic backups, both snapshots and log backups, are performed on locally redu
>[!Note]
>For both zone-redundant and same-zone HA:
->* If there's a failure, the time needed for the standby replica to take over the role of primary depends on the binary log application on the standby. So we recommend that you use primary keys on all tables to reduce failover time. Failover times are typically between 60 and 120 seconds.To create primary keys for tables you can use [invisible column](https://dev.mysql.com/doc/refman/8.0/en/create-table-gipks.html) if your MySQL version is greater than 8.0.23 `(ALTER TABLE
ADD COLUMN bigint AUTO_INCREMENT INVISIBLE PRIMARY KEY;)`.
+>* If there's a failure, the time needed for the standby replica to take over the role of primary depends on the binary log application on the standby. So we recommend that you use primary keys on all tables to reduce failover time. Failover times are typically between 60 and 120 seconds.
>* The standby server isn't available for read or write operations. It's a passive standby to enable fast failover.
>* Always use a fully qualified domain name (FQDN) to connect to your primary server. Avoid using an IP address to connect. If there's a failover, after the primary and standby server roles are switched, a DNS A record might change. That change would prevent the application from connecting to the new primary server if an IP address is used in the connection string.
diff --git a/articles/mysql/flexible-server/whats-new.md b/articles/mysql/flexible-server/whats-new.md
index a7687d875caef..0d693d2ebc93b 100644
--- a/articles/mysql/flexible-server/whats-new.md
+++ b/articles/mysql/flexible-server/whats-new.md
@@ -26,6 +26,10 @@ This article summarizes new releases and features in Azure Database for MySQL -
- **Announcing the addition of new Burstable compute instances for Azure Database for MySQL - Flexible Server**
We are announcing the addition of new Burstable compute instances to support customers’ auto-scaling compute requirements from 1 vCore up to 20 vCores. Learn more about [Compute Option for Azure Database for MySQL - Flexible Server](https://docs.microsoft.com/azure/mysql/flexible-server/concepts-compute-storage).
+- **Known issues**
+ - The Reserved instances (RI) feature in Azure Database for MySQL – Flexible Server is not working properly for the Business Critical service tier after its rebranding from the Memory Optimized service tier. Specifically, instance reservation has stopped working, and we are currently working to fix the issue.
+ - Private DNS integration details are not displayed on a few Azure Database for MySQL flexible servers that have the HA option enabled. This issue doesn't affect server availability or name resolution. We're working on a permanent fix, which will be available in the next deployment. Meanwhile, if you want to view the Private DNS zone details, you can either search under [Private DNS zones](../../dns/private-dns-getstarted-portal.md) in the Azure portal or perform a [manual failover](concepts-high-availability.md#planned-forced-failover) of the HA-enabled flexible server and refresh the Azure portal.
+
## April 2022
- **Minor version upgrade for Azure Database for MySQL - Flexible server to 8.0.28**
diff --git a/articles/mysql/single-server/concept-reserved-pricing.md b/articles/mysql/single-server/concept-reserved-pricing.md
index 65c501a69a6e9..1736faaaeadbd 100644
--- a/articles/mysql/single-server/concept-reserved-pricing.md
+++ b/articles/mysql/single-server/concept-reserved-pricing.md
@@ -15,6 +15,10 @@ ms.date: 10/06/2021
Azure Database for MySQL now helps you save money by prepaying for compute resources compared to pay-as-you-go prices. With Azure Database for MySQL reserved instances, you make an upfront commitment on MySQL server for a one or three year period to get a significant discount on the compute costs. To purchase Azure Database for MySQL reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term.
+>[!NOTE]
+>The Reserved instances (RI) feature in Azure Database for MySQL – Flexible Server is not working properly for the Business Critical service tier after its rebranding from the Memory Optimized service tier. Specifically, instance reservation has stopped working, and we are currently working to fix the issue.
+
## How does the instance reservation work?
You do not need to assign the reservation to specific Azure Database for MySQL servers. An already running Azure Database for MySQL or ones that are newly deployed, will automatically get the benefit of reserved pricing. By purchasing a reservation, you are pre-paying for the compute costs for a period of one or three years. As soon as you buy a reservation, the Azure database for MySQL compute charges that match the reservation attributes are no longer charged at the pay-as-you go rates. A reservation does not cover software, networking, or storage charges associated with the MySQL Database server. At the end of the reservation term, the billing benefit expires, and the Azure Database for MySQL are billed at the pay-as-you go price. Reservations do not auto-renew. For pricing information, see the [Azure Database for MySQL reserved capacity offering](https://azure.microsoft.com/pricing/details/mysql/).
diff --git a/articles/mysql/single-server/single-server-whats-new.md b/articles/mysql/single-server/single-server-whats-new.md
index 8dbf1fffc1450..c198c5788c1d1 100644
--- a/articles/mysql/single-server/single-server-whats-new.md
+++ b/articles/mysql/single-server/single-server-whats-new.md
@@ -22,6 +22,10 @@ This article summarizes new releases and features in Azure Database for MySQL -
Enabled the ability to change the server parameter innodb_ft_server_stopword_table from Portal/CLI.
Users can now change the value of the innodb_ft_server_stopword_table parameter using the Azure portal and CLI. This parameter helps to configure your own InnoDB FULLTEXT index stopword list for all InnoDB tables. For more information, see [innodb_ft_server_stopword_table](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_ft_server_stopword_table).
+**Known Issues**
+
+Customers using the PHP driver with [enableRedirect](./how-to-redirection.md) can no longer connect to Azure Database for MySQL Single Server, because the CA certificates of the host servers were changed from BaltimoreCyberTrustRoot to DigiCertGlobalRootG2 to address compliance requirements. To connect successfully to your database using the PHP driver with enableRedirect, see [this article](./concepts-certificate-rotation.md#do-i-need-to-make-any-changes-on-my-client-to-maintain-connectivity).
+
## March 2022
This release of Azure Database for MySQL - Single Server includes the following updates.
diff --git a/articles/network-watcher/connection-monitor-create-using-portal.md b/articles/network-watcher/connection-monitor-create-using-portal.md
index f7032173473f5..1ebcf2e1256ba 100644
--- a/articles/network-watcher/connection-monitor-create-using-portal.md
+++ b/articles/network-watcher/connection-monitor-create-using-portal.md
@@ -18,12 +18,15 @@ ms.author: vinigam
> [!IMPORTANT]
> Starting 1 July 2021, you will not be able to add new tests in an existing workspace or enable a new workspace in Network Performance Monitor. You will also not be able to add new connection monitors in Connection Monitor (classic). You can continue to use the tests and connection monitors created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor ](migrate-to-connection-monitor-from-network-performance-monitor.md) or [migrate from Connection Monitor (classic)](migrate-to-connection-monitor-from-connection-monitor-classic.md) to the new Connection Monitor in Azure Network Watcher before 29 February 2024.
+> [!IMPORTANT]
+> Connection Monitor now supports end-to-end connectivity checks from and to *Azure Virtual Machine Scale Sets*, enabling faster performance monitoring and network troubleshooting across scale sets.
+
Learn how to use Connection Monitor to monitor communication between your resources. This article describes how to create a monitor by using the Azure portal. Connection Monitor supports hybrid and Azure cloud deployments.
## Before you begin
-In connection monitors that you create by using Connection Monitor, you can add both on-premises machines and Azure VMs as sources. These connection monitors can also monitor connectivity to endpoints. The endpoints can be on Azure or on any other URL or IP.
+In connection monitors that you create by using Connection Monitor, you can add on-premises machines, Azure VMs, and Azure virtual machine scale sets as sources. These connection monitors can also monitor connectivity to endpoints. The endpoints can be on Azure or on any other URL or IP.
Here are some definitions to get you started:
@@ -42,6 +45,8 @@ Here are some definitions to get you started:
:::image type="content" source="./media/connection-monitor-2-preview/cm-tg-2.png" alt-text="Diagram that shows a connection monitor and defines the relationship between test groups and tests.":::
+ > [!NOTE]
+ > Connection Monitor now supports automatic enablement of monitoring extensions for Azure and non-Azure endpoints, eliminating the need to manually install monitoring solutions while creating a connection monitor.
## Create a connection monitor
@@ -94,6 +99,9 @@ Connection Monitor creates the connection monitor resource in the background.
## Create test groups in a connection monitor
+ >[!NOTE]
+ > Connection Monitor now supports automatic enablement of monitoring extensions for Azure and non-Azure endpoints, eliminating the need to manually install monitoring solutions while creating a connection monitor.
+
Each test group in a connection monitor includes sources and destinations that get tested on network parameters. They're tested for the percentage of checks that fail and the RTT over test configurations.
In the Azure portal, to create a test group in a connection monitor, you specify values for the following fields:
@@ -101,17 +109,17 @@ In the Azure portal, to create a test group in a connection monitor, you specify
* **Disable test group**: You can select this check box to disable monitoring for all sources and destinations that the test group specifies. This selection is cleared by default.
* **Name**: Name your test group.
* **Sources**: You can specify both Azure VMs and on-premises machines as sources if agents are installed on them. To learn about installing an agent for your source, see [Install monitoring agents](./connection-monitor-overview.md#install-monitoring-agents).
- * To choose Azure agents, select the **Azure endpoints** tab. Here you see only VMs that are bound to the region that you specified when you created the connection monitor. By default, VMs are grouped into the subscription that they belong to. These groups are collapsed.
+ * To choose Azure agents, select the **Azure endpoints** tab. Here you see only VMs or Virtual Machine scale sets that are bound to the region that you specified when you created the connection monitor. By default, VMs and Virtual Machine scale sets are grouped into the subscription that they belong to. These groups are collapsed.
You can drill down from the **Subscription** level to other levels in the hierarchy:
- **Subscription** > **Resource group** > **VNET** > **Subnet** > **VMs with agents**
+ **Subscription** > **Resource group** > **VNET** > **Subnet** > **VMs with agents**
You can also change the **Group by** selector to start the tree from any other level. For example, if you group by virtual network, you see the VMs that have agents in the hierarchy **VNET** > **Subnet** > **VMs with agents**.
- When you select a VNET, subnet, or single VM, the corresponding resource ID is set as the endpoint. By default, all VMs in the selected VNET or subnet that have the Azure Network Watcher extension participate in monitoring. To reduce the scope, either select specific subnets or agents or change the value of the scope property.
+ When you select a VNET, subnet, single VM, or virtual machine scale set, the corresponding resource ID is set as the endpoint. By default, all VMs in the selected VNET or subnet participate in monitoring. To reduce the scope, either select specific subnets or agents, or change the value of the scope property.
- :::image type="content" source="./media/connection-monitor-2-preview/add-azure-sources.png" alt-text="Screenshot that shows the Add Sources pane and the Azure endpoints tab in Connection Monitor.":::
+ :::image type="content" source="./media/connection-monitor-2-preview/add-sources-1.png" alt-text="Screenshot that shows the Add Sources pane and the Azure endpoints including V M S S tab in Connection Monitor.":::
* To choose on-premises agents, select the **Non–Azure endpoints** tab. By default, agents are grouped into workspaces by region. All these workspaces have the Network Performance Monitor configured.
@@ -123,6 +131,9 @@ In the Azure portal, to create a test group in a connection monitor, you specify
* To choose recently used endpoints, you can use the **Recent endpoint** tab
+ * You don't need to choose only endpoints that have monitoring agents enabled. You can select Azure or non-Azure endpoints without the agent enabled and proceed with the creation of the connection monitor. During the creation process, the monitoring agents for the endpoints are enabled automatically.
+ :::image type="content" source="./media/connection-monitor-2-preview/unified-enablement.png" alt-text="Screenshot that shows the Add Sources pane and the Non-Azure endpoints tab in Connection Monitor with unified enablement.":::
+
* When you finish setting up sources, select **Done** at the bottom of the tab. You can still edit basic properties like the endpoint name by selecting the endpoint in the **Create Test Group** view.
* **Destinations**: You can monitor connectivity to an Azure VM, an on-premises machine, or any endpoint (a public IP, URL, or FQDN) by specifying it as a destination. In a single test group, you can add Azure VMs, on-premises machines, Office 365 URLs, Dynamics 365 URLs, and custom endpoints.
@@ -168,6 +179,13 @@ In the Azure portal, to create a test group in a connection monitor, you specify
:::image type="content" source="./media/connection-monitor-2-preview/add-test-config.png" alt-text="Screenshot that shows where to set up a test configuration in Connection Monitor.":::
+* **Test Groups**: You can add one or more test groups to a connection monitor. These test groups can consist of multiple Azure or non-Azure endpoints.
+ * For selected Azure VMs or Azure virtual machine scale sets and non-Azure endpoints without monitoring extensions, the extension for Azure VMs and the Network Performance Monitor solution for non-Azure endpoints are enabled automatically after the creation of the connection monitor begins.
+ * If the selected virtual machine scale set is set for manual upgrades, you'll have to upgrade the scale set after the Network Watcher extension is installed to continue setting up the connection monitor with virtual machine scale set endpoints. If the scale set is set to upgrade automatically, you don't need to take any action after the extension is installed.
+ * In the scenario above, you can consent to an automatic upgrade of the virtual machine scale set, with automatic enablement of the Network Watcher extension, while creating the connection monitor for scale sets that are set for manual upgrades. This eliminates the need to manually upgrade the scale set after installing the Network Watcher extension.
+
+ :::image type="content" source="./media/connection-monitor-2-preview/consent-vmss-auto-upgrade.png" alt-text="Screenshot that shows where to set up a test groups and consent for auto-upgradation of V M S S in Connection Monitor.":::
+
## Create alerts in Connection Monitor
You can set up alerts on tests that are failing based on the thresholds set in test configurations.
@@ -186,7 +204,10 @@ In the Azure portal, to create alerts for a connection monitor, you specify valu
- **Enable rule upon creation**: Select this check box to enable the alert rule based on the condition. Disable this check box if you want to create the rule without enabling it.
-:::image type="content" source="./media/connection-monitor-2-preview/create-alert-filled.png" alt-text="Screenshot that shows the Create alert tab in Connection Monitor.":::
+:::image type="content" source="./media/connection-monitor-2-preview/unified-enablement-create.png" alt-text="Screenshot that shows the Create alert tab in Connection Monitor.":::
+
+Once all the steps are completed, monitoring extensions are enabled for any endpoints that don't already have them, and then the connection monitor is created.
+After the creation process succeeds, it takes about 5 minutes for the connection monitor to appear on the dashboard.
## Scale limits
diff --git a/articles/network-watcher/connection-monitor-overview.md b/articles/network-watcher/connection-monitor-overview.md
index 6e52cc0ca81a7..3d6fa91a9257e 100644
--- a/articles/network-watcher/connection-monitor-overview.md
+++ b/articles/network-watcher/connection-monitor-overview.md
@@ -24,17 +24,21 @@ ms.custom: mvc
>
> To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor](migrate-to-connection-monitor-from-network-performance-monitor.md), or [migrate from Connection Monitor (Classic)](migrate-to-connection-monitor-from-connection-monitor-classic.md) to the new Connection Monitor in Azure Network Watcher before February 29, 2024.
+> [!IMPORTANT]
+> Connection Monitor now supports end-to-end connectivity checks from and to *Azure Virtual Machine Scale Sets*, enabling faster performance monitoring and network troubleshooting across scale sets.
+
Connection Monitor provides unified, end-to-end connection monitoring in Azure Network Watcher. The Connection Monitor feature supports hybrid and Azure cloud deployments. Network Watcher provides tools to monitor, diagnose, and view connectivity-related metrics for your Azure deployments.
Here are some use cases for Connection Monitor:
-- Your front-end web server virtual machine (VM) communicates with a database server VM in a multi-tier application. You want to check network connectivity between the two VMs.
-- You want VMs in, for example, the East US region to ping VMs in the Central US region, and you want to compare cross-region network latencies.
+- Your front-end web server virtual machine (VM) or virtual machine scale set communicates with a database server VM in a multi-tier application. You want to check network connectivity between the two VMs or scale sets.
+- You want VMs or scale sets in, for example, the East US region to ping VMs or scale sets in the Central US region, and you want to compare cross-region network latencies.
- You have multiple on-premises office sites, one in Seattle, Washington, for example, and another in Ashburn, Virginia. Your office sites connect to Microsoft 365 URLs. For your users of Microsoft 365 URLs, you want to compare the latencies between Seattle and Ashburn.
- Your hybrid application needs connectivity to an Azure storage account endpoint. Your on-premises site and your Azure application connect to the same endpoint. You want to compare the latencies of the on-premises site with the latencies of the Azure application.
-- You want to check the connectivity between your on-premises setups and the Azure VMs that host your cloud application.
+- You want to check the connectivity between your on-premises setups and the Azure VMs or virtual machine scale sets that host your cloud application.
+- You want to check the connectivity from single or multiple instances of an Azure virtual machine scale set to your Azure or non-Azure multi-tier application.
-Connection Monitor combines the best of two features: the Network Watcher [Connection Monitor (Classic)](./network-watcher-monitoring-overview.md#monitor-communication-between-a-virtual-machine-and-an-endpoint) feature and the NPM [Service Connectivity Monitor](../azure-monitor/insights/network-performance-monitor-service-connectivity.md), [ExpressRoute Monitoring](../expressroute/how-to-npm.md), and [Performance monitoring](../azure-monitor/insights/network-performance-monitor-performance-monitor.md) feature.
+Connection Monitor combines the best of two features: the Network Watcher [Connection Monitor (Classic)](./network-watcher-monitoring-overview.md#monitor-communication-between-a-virtual-machine-and-an-endpoint) feature and the Network Performance Monitor [Service Connectivity Monitor](../azure-monitor/insights/network-performance-monitor-service-connectivity.md), [ExpressRoute Monitoring](../expressroute/how-to-npm.md), and [Performance monitoring](../azure-monitor/insights/network-performance-monitor-performance-monitor.md) feature.
Here are some benefits of Connection Monitor:
@@ -45,7 +49,7 @@ Here are some benefits of Connection Monitor:
* Support for connectivity checks that are based on HTTP, Transmission Control Protocol (TCP), and Internet Control Message Protocol (ICMP)
* Metrics and Log Analytics support for both Azure and non-Azure test setups
-![Diagram showing how Connection Monitor interacts with Azure VMs, non-Azure hosts, endpoints, and data storage locations.](./media/connection-monitor-2-preview/hero-graphic.png)
+![Diagram showing how Connection Monitor interacts with Azure VMs, non-Azure hosts, endpoints, and data storage locations.](./media/connection-monitor-2-preview/hero-graphic-new.png)
To start using Connection Monitor for monitoring, do the following:
@@ -59,19 +63,28 @@ The following sections provide details for these steps.
## Install monitoring agents
+ > [!NOTE]
+ > Connection Monitor now supports automatic enablement of monitoring extensions for Azure and non-Azure endpoints, eliminating the need to manually install monitoring solutions while creating a connection monitor.
+
Connection Monitor relies on lightweight executable files to run connectivity checks. It supports connectivity checks from both Azure environments and on-premises environments. The executable file that you use depends on whether your VM is hosted on Azure or on-premises.
-### Agents for Azure virtual machines
+### Agents for Azure virtual machines and virtual machine scale sets
-To make Connection Monitor recognize your Azure VMs as monitoring sources, install the Network Watcher Agent virtual machine extension on them. This extension is also known as the *Network Watcher extension*. Azure virtual machines require the extension to trigger end-to-end monitoring and other advanced functionality.
+To make Connection Monitor recognize your Azure VMs or virtual machine scale sets as monitoring sources, install the Network Watcher Agent virtual machine extension on them. This extension is also known as the *Network Watcher extension*. Azure virtual machines and scale sets require the extension to trigger end-to-end monitoring and other advanced functionality.
-You can install the Network Watcher extension when you [create a VM](./connection-monitor.md#create-the-first-vm). You can also separately install, configure, and troubleshoot the Network Watcher extension for [Linux](../virtual-machines/extensions/network-watcher-linux.md) and [Windows](../virtual-machines/extensions/network-watcher-windows.md).
+You can install the Network Watcher extension when you [create a VM](./connection-monitor.md#create-the-first-vm) or when you [create a virtual machine scale set](./connection-monitor-virtual-machine-scale-set.md#create-a-vm-scale-set). You can also separately install, configure, and troubleshoot the Network Watcher extension for [Linux](../virtual-machines/extensions/network-watcher-linux.md) and [Windows](../virtual-machines/extensions/network-watcher-windows.md).
Rules for a network security group (NSG) or firewall can block communication between the source and destination. Connection Monitor detects this issue and shows it as a diagnostics message in the topology. To enable connection monitoring, ensure that the NSG and firewall rules allow packets over TCP or ICMP between the source and destination.
+If you want to skip the installation process for the Network Watcher extension, you can proceed with the creation of the connection monitor and allow automatic enablement of Network Watcher extensions on your Azure VMs and virtual machine scale sets.
+
+ > [!Note]
+ > If the virtual machine scale set is set for manual upgrades, you'll have to upgrade the scale set after the Network Watcher extension is installed to continue setting up the connection monitor with virtual machine scale set endpoints. If the scale set is set to upgrade automatically, you don't need to take any action after the extension is installed.
+ > Because Connection Monitor now supports unified automatic enablement of monitoring extensions, you can consent to an automatic upgrade of the scale set, with automatic enablement of the Network Watcher extension, while creating a connection monitor for scale sets that are set for manual upgrades.
+
### Agents for on-premises machines
-To make Connection Monitor recognize your on-premises machines as sources for monitoring, install the Log Analytics agent on the machines. Then, enable the [Network Performance Monitor solution](../network-watcher/connection-monitor-overview.md#enable-the-npm-solution-for-on-premises-machines). These agents are linked to Log Analytics workspaces, so you need to set up the workspace ID and primary key before the agents can start monitoring.
+To make Connection Monitor recognize your on-premises machines as sources for monitoring, install the Log Analytics agent on the machines. Then, enable the [Network Performance Monitor solution](../network-watcher/connection-monitor-overview.md#enable-the-network-performance-monitor-solution-for-on-premises-machines). These agents are linked to Log Analytics workspaces, so you need to set up the workspace ID and primary key before the agents can start monitoring.
To install the Log Analytics agent for Windows machines, see [Install Log Analytics agent on Windows](../azure-monitor/agents/agent-windows.md).
@@ -97,22 +110,24 @@ The script configures only Windows Firewall locally. If you have a network firew
The Log Analytics Windows agent can be multihomed to send data to multiple workspaces and System Center Operations Manager management groups. The Linux agent can send data only to a single destination, either a workspace or management group.
-#### Enable the NPM solution for on-premises machines
+#### Enable the Network Performance Monitor solution for on-premises machines
-To enable the NPM solution for on-premises machines, do the following:
+To enable the Network Performance Monitor solution for on-premises machines, do the following:
1. In the Azure portal, go to **Network Watcher**.
1. On the left pane, under **Monitoring**, select **Network Performance Monitor**.
- A list of workspaces with NPM solution enabled is displayed, filtered by **Subscriptions**.
-1. To add the NPM solution in a new workspace, select **Add NPM** at the top left.
+ A list of workspaces with Network Performance Monitor solution enabled is displayed, filtered by **Subscriptions**.
+1. To add the Network Performance Monitor solution in a new workspace, select **Add NPM** at the top left.
1. Select the subscription and workspace in which you want to enable the solution, and then select **Create**.
After you've enabled the solution, the workspace takes a couple of minutes to be displayed.
- :::image type="content" source="./media/connection-monitor/network-performance-monitor-solution-enable.png" alt-text="Screenshot showing how to add the NPM solution in Connection Monitor." lightbox="./media/connection-monitor/network-performance-monitor-solution-enable.png":::
+ :::image type="content" source="./media/connection-monitor/network-performance-monitor-solution-enable.png" alt-text="Screenshot showing how to add the Network Performance Monitor solution in Connection Monitor." lightbox="./media/connection-monitor/network-performance-monitor-solution-enable.png":::
+
+Unlike Log Analytics agents, the Network Performance Monitor solution can be configured to send data only to a single Log Analytics workspace.
-Unlike Log Analytics agents, the NPM solution can be configured to send data only to a single Log Analytics workspace.
+If you want to skip the installation process for enabling the monitoring solution, you can proceed with the creation of the connection monitor and allow automatic enablement of the monitoring solution on your on-premises machines.
## Enable Network Watcher on your subscription
@@ -124,7 +139,7 @@ Make sure that Network Watcher is [available for your region](https://azure.micr
Connection Monitor monitors communication at regular intervals. It informs you of changes in reachability and latency. You can also check the current and historical network topology between source agents and destination endpoints.
-Sources can be Azure VMs or on-premises machines that have an installed monitoring agent. Destination endpoints can be Microsoft 365 URLs, Dynamics 365 URLs, custom URLs, Azure VM resource IDs, IPv4, IPv6, FQDN, or any domain name.
+Sources can be Azure VMs or virtual machine scale sets, or on-premises machines that have an installed monitoring agent. Destination endpoints can be Microsoft 365 URLs, Dynamics 365 URLs, custom URLs, Azure VM resource IDs, IPv4, IPv6, FQDN, or any domain name.
### Access Connection Monitor
@@ -137,12 +152,12 @@ Sources can be Azure VMs or on-premises machines that have an installed monitori
### Create a connection monitor
-In connection monitors that you create in Connection Monitor, you can add both on-premises machines and Azure VMs as sources. These connection monitors can also monitor connectivity to endpoints. The endpoints can be on Azure or any other URL or IP address.
+In connection monitors that you create in Connection Monitor, you can add on-premises machines, Azure VMs, and virtual machine scale sets as sources. These connection monitors can also monitor connectivity to endpoints. The endpoints can be on Azure or any other URL or IP address.
Connection Monitor includes the following entities:
* **Connection monitor resource**: A region-specific Azure resource. All the following entities are properties of a connection monitor resource.
-* **Endpoint**: A source or destination that participates in connectivity checks. Examples of endpoints include Azure VMs, on-premises agents, URLs, and IP addresses.
+* **Endpoint**: A source or destination that participates in connectivity checks. Examples of endpoints include Azure VMs, virtual machine scale sets, on-premises agents, URLs, and IP addresses.
* **Test configuration**: A protocol-specific configuration for a test. Based on the protocol you select, you can define the port, thresholds, test frequency, and other properties.
* **Test group**: The group that contains source endpoints, destination endpoints, and test configurations. A connection monitor can contain more than one test group.
* **Test**: The combination of a source endpoint, destination endpoint, and test configuration. A test is the most granular level at which monitoring data is available. The monitoring data includes the percentage of checks that failed and the round-trip time (RTT).
@@ -175,6 +190,8 @@ All sources, destinations, and test configurations that you add to a test group
| 12 | C | E | Config 2 |
| | |
+
+
### Scale limits
Connection monitors have the following scale limits:
@@ -184,10 +201,21 @@ Connection monitors have the following scale limits:
* Maximum sources and destinations per connection monitor: 100
* Maximum test configurations per connection monitor: 20
+Monitoring coverage for Azure and non-Azure resources:
+
+Connection Monitor now provides five coverage levels for monitoring compound resources, that is, virtual networks, subnets, and virtual machine scale sets. The coverage level is defined as the percentage of instances of a compound resource that are actually included in monitoring that resource as a source or destination.
+Users can manually select a coverage level from Low, Below Average, Average, Above Average, and Full to define the approximate percentage of instances to be included in monitoring the particular resource as an endpoint.
+
## Analyze monitoring data and set alerts
After you create a connection monitor, sources check connectivity to destinations based on your test configuration.
+While monitoring endpoints, Connection Monitor re-evaluates the status of endpoints once every 24 hours. Hence, if a VM gets deallocated or is turned off during a 24-hour cycle, Connection Monitor reports an indeterminate state, due to the absence of data in the network path, until the end of the 24-hour cycle, when it re-evaluates the status of the VM and reports the VM status as deallocated.
+
+ > [!NOTE]
+ > When monitoring an Azure virtual machine scale set, instances of a particular scale set selected for monitoring (either by the user or picked up by default as part of the selected coverage level) might get deallocated or scaled down in the middle of the 24-hour cycle. In this time period, Connection Monitor can't recognize the action and ends up reporting an indeterminate state due to the absence of data.
+ > Users are advised to allow random selection of virtual machine scale set instances within coverage levels, instead of selecting particular instances of a scale set for monitoring. This minimizes the risk that deallocated or scaled-down instances go undiscovered within a 24-hour cycle and lead to an indeterminate state for the connection monitor.
+
### Checks in a test
Depending on the protocol that you select in the test configuration, Connection Monitor runs a series of checks for the source-destination pair. The checks run according to the test frequency that you select.
@@ -196,6 +224,7 @@ If you use HTTP, the service calculates the number of HTTP responses that return
If you use TCP or ICMP, the service calculates the packet-loss percentage to determine the percentage of failed checks. To calculate RTT, the service measures the time taken to receive the acknowledgment (ACK) for the packets that were sent. If you've enabled traceroute data for your network tests, you can view the hop-by-hop loss and latency for your on-premises network.
+
### States of a test
Depending on the data that the checks return, tests can have the following states:
diff --git a/articles/network-watcher/connection-monitor-virtual-machine-scale-set.md b/articles/network-watcher/connection-monitor-virtual-machine-scale-set.md
new file mode 100644
index 0000000000000..131d31dda1acc
--- /dev/null
+++ b/articles/network-watcher/connection-monitor-virtual-machine-scale-set.md
@@ -0,0 +1,222 @@
+---
+title: Tutorial - Monitor network communication with a VM scale set using the Azure portal
+description: In this tutorial, learn how to monitor network communication between two virtual machine scale sets with Azure Network Watcher's connection monitor capability.
+services: network-watcher
+documentationcenter: na
+author: mjha
+editor: ''
+tags: azure-resource-manager
+# Customer intent: I need to monitor communication between a VM scale set and another VM scale set. If the communication fails, I need to know why, so that I can resolve the problem.
+
+ms.service: network-watcher
+ms.topic: tutorial
+ms.tgt_pltfrm: na
+ms.workload: infrastructure-services
+ms.date: 05/24/2022
+ms.author: mjha
+ms.custom: mvc
+---
+
+# Tutorial: Monitor network communication between two virtual machine scale sets using the Azure portal
+
+> [!NOTE]
+> This tutorial covers Connection Monitor (classic). Try the new and improved [Connection Monitor](connection-monitor-overview.md) to experience enhanced connectivity monitoring.
+
+> [!IMPORTANT]
+> Starting 1 July 2021, you will not be able to add new connection monitors in Connection Monitor (classic) but you can continue to use existing connection monitors created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate from Connection Monitor (classic) to the new Connection Monitor](migrate-to-connection-monitor-from-connection-monitor-classic.md) in Azure Network Watcher before 29 February 2024.
+
+Successful communication between a virtual machine scale set (VMSS) and an endpoint, such as another VM, can be critical for your organization. Sometimes, configuration changes are introduced that can break communication. In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a VM scale set and a VM
+> * Monitor communication between VMs with the connection monitor capability of Network Watcher
+> * Generate alerts on Connection Monitor metrics
+> * Diagnose a communication problem between two VM scale sets, and learn how you can resolve it
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Sign in to Azure
+
+Sign in to the [Azure portal](https://portal.azure.com).
+
+
+## Create a load balancer
+
+An Azure [load balancer](../load-balancer/load-balancer-overview.md) distributes incoming traffic among healthy virtual machine instances.
+
+First, create a public Standard Load Balancer by using the portal. The name and public IP address you create are automatically configured as the load balancer's front end.
+
+1. In the search box, type **load balancer**. Under **Marketplace** in the search results, select **Load balancer**.
+1. In the **Basics** tab of the **Create load balancer** page, enter or select the following information:
+
+ | Setting | Value |
+ | ---| ---|
+ | Subscription | Select your subscription. |
+ | Resource group | Select **Create new** and type *myVMSSResourceGroup* in the text box.|
+ | Name | *myLoadBalancer* |
+ | Region | Select **East US**. |
+ | Type | Select **Public**. |
+ | SKU | Select **Standard**. |
+ | Public IP address | Select **Create new**. |
+ | Public IP address name | *myPip* |
+ | Assignment| Static |
+ | Availability zone | Select **Zone-redundant**. |
+
+1. When you're done, select **Review + create**.
+1. After it passes validation, select **Create**.
+
+
+## Create virtual machine scale set
+
+You can deploy a scale set with a Windows Server image or Linux image such as RHEL, CentOS, Ubuntu, or SLES.
+
+1. Type **Scale set** in the search box. In the results, under **Marketplace**, select **Virtual machine scale sets**. Select **Create** on the **Virtual machine scale sets** page, which will open the **Create a virtual machine scale set** page.
+1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected, and then select *myVMSSResourceGroup* from the resource group list.
+1. Type *myScaleSet* as the name for your scale set.
+1. In **Region**, select a region that is close to your area.
+1. Under **Orchestration**, ensure the *Uniform* option is selected for **Orchestration mode**.
+1. Select a marketplace image for **Image**. In this example, we have chosen *Ubuntu Server 18.04 LTS*.
+1. Enter your desired username, and select which authentication type you prefer.
+ - A **Password** must be at least 12 characters long and meet three out of the four following complexity requirements: one lower case character, one upper case character, one number, and one special character. For more information, see [username and password requirements](../virtual-machines/windows/faq.yml#what-are-the-password-requirements-when-creating-a-vm-).
+ - If you select a Linux OS disk image, you can instead choose **SSH public key**. Only provide your public key, such as *~/.ssh/id_rsa.pub*. You can use the Azure Cloud Shell from the portal to [create and use SSH keys](../virtual-machines/linux/mac-create-ssh-keys.md).
+
+
+1. Select **Next** to move to the other pages.
+1. Leave the defaults for the **Instance** and **Disks** pages.
+1. On the **Networking** page, under **Load balancing**, select **Yes** to put the scale set instances behind a load balancer.
+1. In **Load balancing options**, select **Azure load balancer**.
+1. In **Select a load balancer**, select *myLoadBalancer* that you created earlier.
+1. For **Select a backend pool**, select **Create new**, type *myBackendPool*, then select **Create**.
+1. When you are done, select **Review + create**.
+1. After it passes validation, select **Create** to deploy the scale set.
+
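+If you prefer to script this step, a roughly equivalent scale set can be created with the Azure CLI. This is a minimal sketch that reuses the resource group, load balancer, and backend pool names from this tutorial; adjust the image and credentials to match your choices above.
+
+```azurecli-interactive
+# Create the scale set and attach it to the existing load balancer's backend pool.
+az vmss create \
+  --resource-group myVMSSResourceGroup \
+  --name myScaleSet \
+  --image UbuntuLTS \
+  --orchestration-mode Uniform \
+  --admin-username azureuser \
+  --generate-ssh-keys \
+  --load-balancer myLoadBalancer \
+  --backend-pool-name myBackendPool
+```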
+
+Once the scale set is created, follow the steps below to enable the Network Watcher extension in the scale set.
+
+1. Under **Settings**, select **Extensions**. Select **Add extension**, and then select the Network Watcher agent that matches your scale set's operating system (**Network Watcher Agent for Linux** for the Ubuntu image used in this tutorial, or **Network Watcher Agent for Windows** for a Windows image), as shown in the following picture:
+
+:::image type="content" source="./media/connection-monitor/nw-agent-extension.png" alt-text="Screenshot that shows Network Watcher extension addition.":::
+
+
+1. Under the selected Network Watcher agent, select **Create**. Under **Install extension**, select **OK**, and then under **Extensions**, select **OK**.
+
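+The extension can also be enabled from the command line. The following sketch assumes the Ubuntu-based scale set created above; for a Windows image, use the `NetworkWatcherAgentWindows` extension type instead.
+
+```azurecli-interactive
+# Install the Network Watcher agent extension on the scale set model.
+az vmss extension set \
+  --resource-group myVMSSResourceGroup \
+  --vmss-name myScaleSet \
+  --name NetworkWatcherAgentLinux \
+  --publisher Microsoft.Azure.NetworkWatcher
+
+# Roll the updated model out to all existing instances.
+az vmss update-instances \
+  --resource-group myVMSSResourceGroup \
+  --name myScaleSet \
+  --instance-ids "*"
+```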
+
+### Create the VM
+
+Complete the steps in [create a VM](./connection-monitor.md#create-the-first-vm) again, with the following changes:
+
+|Step|Setting|Value|
+|---|---|---|
+| 1 | Select a version of **Ubuntu Server** | |
+| 3 | Name | myVm2 |
+| 3 | Authentication type | Paste your SSH public key or select **Password**, and enter a password. |
+| 3 | Resource group | Select **Use existing** and select **myResourceGroup**. |
+| 6 | Extensions | **Network Watcher Agent for Linux** |
+
+The VM takes a few minutes to deploy. Wait for the VM to finish deploying before continuing with the remaining steps.
+
+
+## Create a connection monitor
+
+Create a connection monitor to monitor communication over TCP port 22 from *myVmss1* to *myVm2*.
+
+1. On the left side of the portal, select **All services**.
+2. Start typing *network watcher* in the **Filter** box. When **Network Watcher** appears in the search results, select it.
+3. Under **MONITORING**, select **Connection monitor**.
+4. Select **+ Add**.
+5. Enter or select the information for the connection you want to monitor, and then select **Add**. In the example shown in the following picture, the connection monitored is from the *myVmss1* VM scale set to the *myVm2* VM over port 22:
+
+ | Setting | Value |
+ | --------- | --------- |
+ | Name | myVmss1-myVm2(22) |
+ | Source | |
+ | Virtual machine | myVmss1 |
+ | Destination | |
+ | Select a virtual machine | |
+ | Virtual machine | myVm2 |
+ | Port | 22 |
+
+ :::image type="content" source="./media/connection-monitor/add-connection-monitor.png" alt-text="Screenshot that shows addition of Connection Monitor.":::
+
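+If you want to automate this step, a similar monitor can be created with the Azure CLI. Note that the CLI creates a monitor in the newer Connection Monitor experience rather than Connection Monitor (classic); the endpoint names below match this tutorial, and the resource IDs are placeholders you need to fill in.
+
+```azurecli-interactive
+# Create a connection monitor that tests TCP port 22 from the scale set to myVm2.
+az network watcher connection-monitor create \
+  --name myVmss1-myVm2-22 \
+  --location eastus \
+  --endpoint-source-name myVmss1 \
+  --endpoint-source-resource-id "<myVmss1-resource-id>" \
+  --endpoint-dest-name myVm2 \
+  --endpoint-dest-resource-id "<myVm2-resource-id>" \
+  --test-config-name sshTestConfig \
+  --protocol Tcp \
+  --tcp-port 22
+```
+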
+## View a connection monitor
+
+1. Complete steps 1-3 in [Create a connection monitor](#create-a-connection-monitor) to open Connection Monitor. You see a list of existing connection monitors, as shown in the following picture:
+
+ :::image type="content" source="./media/connection-monitor/connection-monitors.png" alt-text="Screenshot that shows Connection Monitor.":::
+
+2. Select the monitor with the name **myVmss1-myVm2(22)**, as shown in the previous picture, to see details for the monitor, as shown in the following picture:
+
+ :::image type="content" source="./media/connection-monitor/vm-monitor.png" alt-text="Screenshot that shows virtual machine monitor.":::
+
+ Note the following information:
+
+ | Item | Value | Details |
+ | --------- | --------- | -------- |
+ | Status | Reachable | Lets you know whether the endpoint is reachable or not. |
+ | AVG. ROUND-TRIP | | Lets you know the round-trip time to make the connection, in milliseconds. Connection monitor probes the connection every 60 seconds, so you can monitor latency over time. |
+ | Hops | | Connection monitor lets you know the hops between the two endpoints. In this example, the connection is between two VMs in the same virtual network, so there is only one hop, to the 10.0.0.5 IP address. If any existing system or custom routes route traffic between the VMs through a VPN gateway or a network virtual appliance, for example, additional hops are listed. |
+ | STATUS | | The green check marks for each endpoint let you know that each endpoint is healthy. |
+
+## Generate alerts
+
+Alerts are created by alert rules in Azure Monitor and can automatically run saved queries or custom log searches at regular intervals. A generated alert can automatically run one or more actions, such as to notify someone or start another process. When setting an alert rule, the resource that you target determines the list of available metrics that you can use to generate alerts.
+
+1. In Azure portal, select the **Monitor** service, and then select **Alerts** > **New alert rule**.
+2. Click **Select target**, and then select the resources that you want to target. Select the **Subscription**, and set **Resource type** to filter down to the Connection Monitor that you want to use.
+
+ :::image type="content" source="./media/connection-monitor/set-alert-rule.png" alt-text="Screenshot of alert rule.":::
+
+1. Once you have selected a resource to target, select **Add criteria**. Network Watcher has [metrics on which you can create alerts](../azure-monitor/alerts/alerts-metric-near-real-time.md#metrics-and-dimensions-supported). Set **Available signals** to the metrics ProbesFailedPercent and AverageRoundtripMs:
+
+ :::image type="content" source="./media/connection-monitor/set-alert-signals.png" alt-text="Screenshot of alert signals.":::
+
+1. Fill out the alert details, such as the alert rule name, description, and severity. You can also add an action group to the alert to automate and customize the alert response.
+
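+The same alert rule can be scripted. The following sketch uses a placeholder connection monitor resource ID and alerts when more than 5 percent of checks fail; the rule name and threshold are assumptions you can adjust.
+
+```azurecli-interactive
+# Create a metric alert on the ProbesFailedPercent metric of the connection monitor.
+az monitor metrics alert create \
+  --name probes-failed-alert \
+  --resource-group myResourceGroup \
+  --scopes "<connection-monitor-resource-id>" \
+  --condition "avg ProbesFailedPercent > 5" \
+  --description "Alert when more than 5% of connectivity checks fail"
+```
+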
+## View a problem
+
+By default, Azure allows communication over all ports between VMs in the same virtual network. Over time, you, or someone in your organization, might override Azure's default rules, inadvertently causing a communication failure. Complete the following steps to create a communication problem and then view the connection monitor again:
+
+1. In the search box at the top of the portal, enter *myResourceGroup*. When the **myResourceGroup** resource group appears in the search results, select it.
+2. Select the **myVm2-nsg** network security group.
+3. Select **Inbound security rules**, and then select **Add**, as shown in the following picture:
+
+ :::image type="content" source="./media/connection-monitor/inbound-security-rules.png" alt-text="Screenshot of network security rules.":::
+
+4. The default rule that allows communication between all VMs in a virtual network is the rule named **AllowVnetInBound**. Create a rule with a higher priority (lower number) than the **AllowVnetInBound** rule that denies inbound communication over port 22. Select, or enter, the following information, accept the remaining defaults, and then select **Add**:
+
+ | Setting | Value |
+ | --- | --- |
+ | Destination port ranges | 22 |
+ | Action | Deny |
+ | Priority | 100 |
+ | Name | DenySshInbound |
+
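+   If you'd rather create the rule from the command line, a minimal Azure CLI equivalent (assuming the network security group name shown above) is:
+
+   ```azurecli-interactive
+   # Deny inbound SSH (port 22) with a higher priority (lower number) than AllowVnetInBound.
+   az network nsg rule create \
+     --resource-group myResourceGroup \
+     --nsg-name myVm2-nsg \
+     --name DenySshInbound \
+     --priority 100 \
+     --direction Inbound \
+     --access Deny \
+     --protocol Tcp \
+     --destination-port-ranges 22
+   ```
+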
+5. Since connection monitor probes at 60-second intervals, wait a few minutes. Then, on the left side of the portal, select **Network Watcher**, then **Connection monitor**, and select the **myVmss1-myVm2(22)** monitor again. The results are different now, as shown in the following picture:
+
+ :::image type="content" source="./media/connection-monitor/vm-monitor-fault.png" alt-text="Screenshot of virtual machine at fault.":::
+
+ You can see that there's a red exclamation icon in the status column for the **myvm2529** network interface.
+
+6. To learn why the status has changed, select the 10.0.0.5 IP address shown in the previous picture. Connection monitor informs you that the reason for the communication failure is: *Traffic blocked due to the following network security group rule: UserRule_DenySshInbound*.
+
+ If you didn't know that someone had implemented the security rule you created in step 4, you'd learn from connection monitor that the rule is causing the communication problem. You could then change, override, or remove the rule, to restore communication between the VMs.
+
+## Clean up resources
+
+When no longer needed, delete the resource group and all of the resources it contains:
+
+1. Enter *myResourceGroup* in the **Search** box at the top of the portal. When you see **myResourceGroup** in the search results, select it.
+2. Select **Delete resource group**.
+3. Enter *myResourceGroup* for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**.
+
+## Next steps
+
+In this tutorial, you learned how to monitor a connection between a VM scale set and a VM. You learned that a network security group rule prevented communication to a VM. To learn about all of the different responses connection monitor can return, see [response types](network-watcher-connectivity-overview.md#response). You can also monitor a connection between a VM and a fully qualified domain name, a uniform resource identifier, or an IP address.
+
+At some point, you may find that resources in a virtual network are unable to communicate with resources in other networks connected by an Azure virtual network gateway. Advance to the next tutorial to learn how to diagnose a problem with a virtual network gateway.
+
+> [!div class="nextstepaction"]
+> [Diagnose communication problems between networks](diagnose-communication-problem-between-networks.md)
diff --git a/articles/network-watcher/media/connection-monitor-2-preview/add-sources-1.png b/articles/network-watcher/media/connection-monitor-2-preview/add-sources-1.png
new file mode 100644
index 0000000000000..61a114acbb1c9
Binary files /dev/null and b/articles/network-watcher/media/connection-monitor-2-preview/add-sources-1.png differ
diff --git a/articles/network-watcher/media/connection-monitor-2-preview/consent-vmss-auto-upgrade.png b/articles/network-watcher/media/connection-monitor-2-preview/consent-vmss-auto-upgrade.png
new file mode 100644
index 0000000000000..5b2ea0b64d142
Binary files /dev/null and b/articles/network-watcher/media/connection-monitor-2-preview/consent-vmss-auto-upgrade.png differ
diff --git a/articles/network-watcher/media/connection-monitor-2-preview/hero-graphic-new.png b/articles/network-watcher/media/connection-monitor-2-preview/hero-graphic-new.png
new file mode 100644
index 0000000000000..020e9317f374f
Binary files /dev/null and b/articles/network-watcher/media/connection-monitor-2-preview/hero-graphic-new.png differ
diff --git a/articles/network-watcher/media/connection-monitor-2-preview/unified-enablement-create.png b/articles/network-watcher/media/connection-monitor-2-preview/unified-enablement-create.png
new file mode 100644
index 0000000000000..5fb65378e1ea1
Binary files /dev/null and b/articles/network-watcher/media/connection-monitor-2-preview/unified-enablement-create.png differ
diff --git a/articles/network-watcher/media/connection-monitor-2-preview/unified-enablement.png b/articles/network-watcher/media/connection-monitor-2-preview/unified-enablement.png
new file mode 100644
index 0000000000000..a1633d3bd2306
Binary files /dev/null and b/articles/network-watcher/media/connection-monitor-2-preview/unified-enablement.png differ
diff --git a/articles/network-watcher/toc.yml b/articles/network-watcher/toc.yml
index 0c8de9b5391b5..81097dba6c64b 100644
--- a/articles/network-watcher/toc.yml
+++ b/articles/network-watcher/toc.yml
@@ -22,6 +22,8 @@
href: diagnose-vm-network-routing-problem.md
- name: Monitor communication between VMs
href: connection-monitor.md
+ - name: Monitor communication with virtual machine scale set
+ href: connection-monitor-virtual-machine-scale-set.md
- name: Diagnose a communication problem between networks
href: diagnose-communication-problem-between-networks.md
- name: Log VM network traffic
diff --git a/articles/openshift/howto-byok.md b/articles/openshift/howto-byok.md
index 3c29078f08f54..2959aaa772b4f 100644
--- a/articles/openshift/howto-byok.md
+++ b/articles/openshift/howto-byok.md
@@ -1,67 +1,48 @@
---
-title: Encrypt OS disks with a customer-managed key (CMK) on Azure Red Hat OpenShift (ARO)
-description: Encrypt OS disks with a customer-managed key (CMK) on Azure Red Hat OpenShift (ARO)
-author: sayjadha
-ms.author: suvetriv
+title: Encrypt OS disks with a customer-managed key (CMK) on Azure Red Hat OpenShift
+description: Encrypt OS disks with a customer-managed key (CMK) on Azure Red Hat OpenShift
+author: rahulm23
+ms.author: rahulmehta
ms.service: azure-redhat-openshift
-keywords: encryption, byok, aro, deploy, openshift, red hat
+keywords: encryption, byok, deploy, openshift, red hat, key
ms.topic: how-to
ms.date: 10/18/2021
ms.custom: template-how-to, ignite-fall-2021, devx-track-azurecli
ms.devlang: azurecli
---
-# Encrypt OS disks with a customer-managed key (CMK) on Azure Red Hat OpenShift (ARO) (preview)
+# Encrypt OS disks with a customer-managed key on Azure Red Hat OpenShift
-By default, the OS disks of the virtual machines in an Azure Red Hat OpenShift cluster were encrypted with auto-generated keys managed by Microsoft Azure. For additional security, customers can encrypt the OS disks with self-managed keys when deploying an ARO cluster. This features allows for more control by encrypting confidential data with customer-managed keys.
+By default, the OS disks of the virtual machines in an Azure Red Hat OpenShift cluster are encrypted with auto-generated keys managed by Microsoft Azure. For additional security, customers can encrypt the OS disks with self-managed keys when deploying an Azure Red Hat OpenShift cluster. This feature allows for more control by encrypting confidential data with customer-managed keys (CMK).
-Clusters created with customer-managed keys have a default storage class enabled with their keys. Therefore, both OS disks and data disks are encrypted by these keys. The customer-managed keys are stored in Azure Key Vault. For more information about using Azure Key Vault to create and maintain keys, see [Server-side encryption of Azure Disk Storage](../key-vault/general/basic-concepts.md) in the Microsoft Azure documentation.
+Clusters created with customer-managed keys have a default storage class enabled with their keys. Therefore, both OS disks and data disks are encrypted by these keys. The customer-managed keys are stored in Azure Key Vault.
-With host-based encryption, the data stored on the VM host of your ARO agent nodes' VMs is encrypted at rest and flows encrypted to the Storage service. This means the temp disks are encrypted at rest with platform-managed keys. The cache of OS and data disks is encrypted at rest with either platform-managed keys or customer-managed keys depending on the encryption type set on those disks. By default, when using ARO, OS and data disks are encrypted at rest with platform-managed keys, meaning that the caches for these disks are also by default encrypted at rest with platform-managed keys. You can specify your own managed keys following the encryption steps below. The cache for these disks will then also be encrypted using the key that you specify in this step.
+For more information about using Azure Key Vault to create and maintain keys, see [Server-side encryption of Azure Disk Storage](../key-vault/general/basic-concepts.md) in the Microsoft Azure documentation.
-> [!IMPORTANT]
-> ARO preview features are available on a self-service, opt-in basis. Preview features are provided "as is" and "as available," and they are excluded from the service-level agreements and limited warranty. Preview features are partially covered by customer support on a best-effort basis. As such, these features are not meant for production use.
+With host-based encryption, the data stored on the VM host of your Azure Red Hat OpenShift agent nodes' VMs is encrypted at rest and flows encrypted to the Storage service. Host-based encryption means the temp disks are encrypted at rest with platform-managed keys.
-## Limitation
-It is the responsibility of the customers to maintain the Key Vault and Disk Encryption Set in Azure. Failure to maintain the keys will result in broken ARO clusters. The VMs stop working and therefore the entire ARO cluster stops functioning. The Azure Red Hat OpenShift Engineering team cannot access the keys; therefore, they cannot back up, replicate, or retrieve the keys. For details about using Disk Encryption Sets to manage your encryption keys, see [Server-side encryption of Azure Disk Storage](../virtual-machines/disk-encryption.md) in the Microsoft Azure documentation.
+The cache of OS and data disks is encrypted at rest with either platform-managed keys or customer-managed keys, depending on the encryption type set on those disks. By default, when using Azure Red Hat OpenShift, OS and data disks are encrypted at rest with platform-managed keys, meaning that the caches for these disks are also by default encrypted at rest with platform-managed keys.
-## Prerequisites
-* [Verify your permissions](tutorial-create-cluster.md#verify-your-permissions). You must have either Contributor and User Access Administrator permissions, or Owner permissions.
-* Register the resource providers if you have multiple Azure subscriptions. For registration details, see [Register the resource providers](tutorial-create-cluster.md#register-the-resource-providers).
+You can specify your own managed keys by following the encryption steps below. The cache for these disks will then also be encrypted using the key that you specify in this step.
-## Install the preview Azure CLI extension
-Install and use the Azure CLI to create a Key Vault. The Azure CLI allows the execution of commands through a terminal using interactive command-line prompts or a script.
+## Limitation
+It's the responsibility of customers to maintain the Key Vault and Disk Encryption Set in Azure. Failure to maintain the keys will result in broken Azure Red Hat OpenShift clusters. The VMs will stop working and, as a result, the entire Azure Red Hat OpenShift cluster will stop functioning.
-> [!NOTE]
-> The CLI extension is required for the preview feature only.
+The Azure Red Hat OpenShift Engineering team can't access the keys. Therefore, they can't back up, replicate, or retrieve the keys.
-1. Click the following URL to download both the Python wheel and the CLI extension:
- [https://aka.ms/az-aroext-latest.whl](https://aka.ms/az-aroext-latest.whl)
-1. Run the following command:
- ```azurecli-interactive
- az extension add --upgrade -s
- ```
-1. Verify that the CLI extension is being used:
- ```azurecli-interactive
- az extension list
- [
- {
- "experimental": false,
- "extensionType": "whl",
- "name": "aro",
- "path": "",
- "preview": true,
- "version": "1.0.1"
- }
- ]
- ```
+For details about using Disk Encryption Sets to manage your encryption keys, see [Server-side encryption of Azure Disk Storage](../virtual-machines/disk-encryption.md) in the Microsoft Azure documentation.
+
+## Prerequisites
+* [Verify your permissions](tutorial-create-cluster.md#verify-your-permissions). You must have either Contributor and User Access Administrator permissions or Owner permissions.
+* If you have multiple Azure subscriptions, register the resource providers. For registration details, see [Register the resource providers](tutorial-create-cluster.md#register-the-resource-providers).
## Create a virtual network containing two empty subnets
Create a virtual network containing two empty subnets. If you have an existing virtual network that meets your needs, you can skip this step. To review the procedure of creating a virtual network, see [Create a virtual network containing two empty subnets](tutorial-create-cluster.md#create-a-virtual-network-containing-two-empty-subnets).
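+As a starting point, here's a condensed sketch of those commands. The virtual network name and address ranges are assumptions, and `$RESOURCEGROUP` is assumed to be set already; the linked tutorial also covers options such as service endpoints and private-link network policies.
+
+```azurecli-interactive
+# Create a virtual network with empty master and worker subnets.
+az network vnet create \
+  --resource-group $RESOURCEGROUP \
+  --name aro-vnet \
+  --address-prefixes 10.0.0.0/22
+
+az network vnet subnet create \
+  --resource-group $RESOURCEGROUP \
+  --vnet-name aro-vnet \
+  --name master-subnet \
+  --address-prefixes 10.0.0.0/23
+
+az network vnet subnet create \
+  --resource-group $RESOURCEGROUP \
+  --vnet-name aro-vnet \
+  --name worker-subnet \
+  --address-prefixes 10.0.2.0/23
+```
+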
## Create an Azure Key Vault instance
You must use an Azure Key Vault instance to store your keys. Create a new Key Vault with purge protection enabled. Then, create a new key within the Key Vault to store your own custom key.
-1. Set additional environment permissions:
+
+1. Set the following environment variables:
```
export KEYVAULT_NAME=$USER-enckv
export KEYVAULT_KEY_NAME=$USER-key
@@ -87,7 +68,7 @@ You must use an Azure Key Vault instance to store your keys. Create a new Key Va
```
## Create an Azure Disk Encryption Set
-The Azure Disk Encryption Set is used as the reference point for disks in ARO clusters. It is connected to the Azure Key Vault that you created in the previous step, and pulls the customer-managed keys from that location.
+The Azure Disk Encryption Set is used as the reference point for disks in Azure Red Hat OpenShift clusters. It's connected to the Azure Key Vault that you created in the previous step, and pulls the customer-managed keys from that location.
```azurecli-interactive
az disk-encryption-set create -n $DISK_ENCRYPTION_SET_NAME \
-l $LOCATION \
@@ -112,8 +93,8 @@ az keyvault set-policy -n $KEYVAULT_NAME \
--key-permissions wrapkey unwrapkey get
```
-## Create an ARO cluster
-Create an ARO cluster to use the customer-managed keys.
+## Create an Azure Red Hat OpenShift cluster
+Create an Azure Red Hat OpenShift cluster to use the customer-managed keys.
```azurecli-interactive
az aro create --resource-group $RESOURCEGROUP \
--name $CLUSTER \
@@ -122,7 +103,7 @@ az aro create --resource-group $RESOURCEGROUP \
--worker-subnet worker-subnet \
--disk-encryption-set $DES_ID
```
-After creating the ARO cluster, all VMs are encrypted with the customer-managed encryption keys.
+After you create the Azure Red Hat OpenShift cluster, all VMs are encrypted with the customer-managed encryption keys.
To verify that you configured the keys correctly, run the following commands:
1. Get the name of the cluster Resource Group where the cluster VMs, disks, and so on are located:
@@ -133,4 +114,4 @@ To verify that you configured the keys correctly, run the following commands:
```azurecli-interactive
az disk list -g $CLUSTERRESOURCEGROUP --query '[].encryption'
```
- The field `diskEncryptionSetId` in the output must point to the Disk Encryption Set that you specified while creating the ARO cluster.
+ The field `diskEncryptionSetId` in the output must point to the Disk Encryption Set that you specified while creating the Azure Red Hat OpenShift cluster.
diff --git a/articles/openshift/howto-upgrade.md b/articles/openshift/howto-upgrade.md
index a1716821e799b..26a2a2fdc104a 100644
--- a/articles/openshift/howto-upgrade.md
+++ b/articles/openshift/howto-upgrade.md
@@ -4,33 +4,77 @@ description: Learn how to upgrade an Azure Red Hat OpenShift cluster running Ope
ms.service: azure-redhat-openshift
ms.topic: article
ms.date: 1/10/2021
-author: sakthi-vetrivel
-ms.author: suvetriv
-keywords: aro, openshift, az aro, red hat, cli
+author: rahulm23
+ms.author: rahulmehta
+keywords: aro, openshift, az aro, red hat, cli, azure, MUO, managed, upgrade, operator
+#Customer intent: I need to understand how to upgrade my Azure Red Hat OpenShift cluster running OpenShift 4.
---
-# Upgrade an Azure Red Hat OpenShift (ARO) cluster
+# Upgrade an Azure Red Hat OpenShift cluster
-Part of the ARO cluster lifecycle involves performing periodic upgrades to the latest OpenShift version. It is important you apply the latest security releases, or upgrade to get the latest features. This article shows you how to upgrade all components in an OpenShift cluster using the OpenShift Web Console.
+As part of the Azure Red Hat OpenShift cluster lifecycle, you need to perform periodic upgrades to the latest version of the OpenShift platform. Upgrading your Azure Red Hat OpenShift clusters lets you take advantage of the latest features and functionality and apply the latest security releases.
+
+This article shows you how to upgrade all components in an OpenShift cluster using the OpenShift web console or the managed-upgrade-operator (MUO).
## Before you begin
-This article requires that you're running the Azure CLI version 2.0.65 of later. Run `az --version` to find your current version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli)
+* This article requires that you're running Azure CLI version 2.6.0 or later. Run `az --version` to find your current version. If you need to install or upgrade the Azure CLI, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+* This article assumes you have access to an existing Azure Red Hat OpenShift cluster as a user with `admin` privileges.
+
+* This article assumes you've updated your Azure Red Hat OpenShift pull secret for an existing Azure Red Hat OpenShift 4.x cluster. Including the **cloud.openshift.com** entry from your pull secret enables your cluster to start sending telemetry data to Red Hat.
+
+ For more information, see [Add or update your Red Hat pull secret on an Azure Red Hat OpenShift 4 cluster](howto-add-update-pull-secret.md).
+
+## Check for Azure Red Hat OpenShift cluster upgrades
+
+1. At the top-left of the OpenShift web console, which is the default view when you sign in as kubeadmin, select the **Administration** tab.
+
+2. Select **Cluster Settings** and open the **Details** tab. You'll see the version, update status, and channel. The channel isn't configured by default.
+
+3. Select the **Channel** link, and at the prompt enter the desired update channel, for example **stable-4.10**. Once the desired channel is chosen, a graph showing available releases and channels is displayed. If the **Update Status** for your cluster shows **Updates Available**, you can update your cluster.
+
+## Upgrade your Azure Red Hat OpenShift cluster with the OpenShift web console
+
+From the OpenShift web console in the previous step, set the **Channel** to the correct channel for the version that you want to update to, such as `stable-4.10`.
+
+Select a version to update to, and select **Update**. You'll see the update status change to `Update to in progress`. You can review the progress of the cluster update by watching the progress bars for the operators and nodes.
+
+## Scheduling individual upgrades using the managed-upgrade-operator
+
+Use the managed-upgrade-operator (MUO) to upgrade your Azure Red Hat OpenShift cluster.
-This article assumes you have access to an existing Azure Red Hat OpenShift cluster as a user with `admin` privileges.
+The managed-upgrade-operator manages automated cluster upgrades. It starts the cluster upgrade, but it doesn't perform any activities of the upgrade process itself; the OpenShift Container Platform (OCP) is responsible for upgrading the clusters. The goal of the managed-upgrade-operator is to satisfy the operating conditions that a managed cluster must hold, both before and after starting the cluster upgrade.
-## Check for available ARO cluster upgrades
+1. Prepare the configuration file, as shown in the following example for upgrading to OpenShift 4.10.
-From the OpenShift web console, select **Administration** > **Cluster Settings** and open the **Details** tab.
+```yaml
+apiVersion: upgrade.managed.openshift.io/v1alpha1
+kind: UpgradeConfig
+metadata:
+ name: managed-upgrade-config
+ namespace: openshift-managed-upgrade-operator
+spec:
+ type: "ARO"
+ upgradeAt: "2022-02-08T03:20:00Z"
+ PDBForceDrainTimeout: 60
+ desired:
+ channel: "stable-4.10"
+ version: "4.10.10"
+```
-If the **Update Status** for your cluster reflects **Updates Available**, you can update your cluster.
+where:
-## Upgrade your ARO cluster
+* `channel` is the channel the configuration file will pull from, according to the lifecycle policy. The channel used should be `stable-4.10`.
+* `version` is the version that you wish to upgrade to, such as `4.10.10`.
+* `upgradeAt` is the time at which the upgrade will take place.
-From the web console in the previous step, set the **Channel** to the correct channel for the version that you want to update to, such as `stable-4.5`.
+2. Apply the configuration file:
-Selection a version to update to, and select **Update**. You'll see the update status change to: `Update to in progress`. You can review the progress of the cluster update by watching the progress bars for the Operators and nodes.
+```azurecli-interactive
+$ oc create -f <filename>.yaml
+```
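+
+To confirm that the upgrade has been scheduled, you can inspect the `UpgradeConfig` resource you created; the following is a minimal sketch using the names from the example configuration above:
+
+```azurecli-interactive
+$ oc get upgradeconfig managed-upgrade-config \
+    -n openshift-managed-upgrade-operator -o yaml
+```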
## Next steps
-- [Learn to upgrade an ARO cluster using the OC CLI](https://docs.openshift.com/container-platform/4.5/updating/updating-cluster-between-minor.html)
-- You can find information about available OpenShift Container Platform advisories and updates in the [errata section](https://access.redhat.com/downloads/content/290/ver=4.6/rhel---8/4.6.0/x86_64/product-errata) of the Customer Portal.
+- [Learn to upgrade an Azure Red Hat OpenShift cluster using the OC CLI](https://docs.openshift.com/container-platform/4.10/updating/index.html).
+- You can find information about available OpenShift Container Platform advisories and updates in the [errata section](https://access.redhat.com/downloads/content/290/ver=4.10/rhel---8/4.10.13/x86_64/product-software) of the Customer Portal.
\ No newline at end of file
diff --git a/articles/openshift/toc.yml b/articles/openshift/toc.yml
index 36377cfd27e63..2ad96ea28aae4 100644
--- a/articles/openshift/toc.yml
+++ b/articles/openshift/toc.yml
@@ -40,7 +40,7 @@
href: howto-restrict-egress.md
- name: Storage
items:
- - name: Encrypt cluster data with customer-managed key (preview)
+ - name: Encrypt cluster data with customer-managed key
href: howto-byok.md
- name: Create an Azure Files Storageclass
href: howto-create-a-storageclass.md
diff --git a/articles/postgresql/TOC.yml b/articles/postgresql/TOC.yml
index 7cf290eea7029..c5ad6c8736acb 100644
--- a/articles/postgresql/TOC.yml
+++ b/articles/postgresql/TOC.yml
@@ -534,7 +534,7 @@
href: flexible-server/how-to-connect-query-guide.md
- name: Database deployment
items:
- - name: GitHub actions
+ - name: GitHub Actions
href: how-to-deploy-github-action.md
- name: Azure pipelines
href: flexible-server/azure-pipelines-deploy-database-task.md
@@ -687,6 +687,8 @@
href: hyperscale/concepts-columnar.md
- name: How-to guides
items:
+ - name: Connect and query
+ href: hyperscale/howto-connect.md
- name: Build scalable apps
items:
- name: Overview
diff --git a/articles/postgresql/hyperscale/concepts-connection-pool.md b/articles/postgresql/hyperscale/concepts-connection-pool.md
index d0251328391f0..d5940038dffd8 100644
--- a/articles/postgresql/hyperscale/concepts-connection-pool.md
+++ b/articles/postgresql/hyperscale/concepts-connection-pool.md
@@ -6,7 +6,7 @@ author: jonels-msft
ms.service: postgresql
ms.subservice: hyperscale-citus
ms.topic: conceptual
-ms.date: 08/03/2021
+ms.date: 05/31/2022
---
# Azure Database for PostgreSQL – Hyperscale (Citus) connection pooling
@@ -30,8 +30,11 @@ actively run in the database doesn't change. Instead, PgBouncer queues excess
connections and runs them when the database is ready.
Hyperscale (Citus) is now offering a managed instance of PgBouncer for server
-groups. It supports up to 2,000 simultaneous client connections. To connect
-through PgBouncer, follow these steps:
+groups. It supports up to 2,000 simultaneous client connections. Additionally,
+if a server group has [high availability](concepts-high-availability.md) (HA)
+enabled, then so does its managed PgBouncer.
+
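+For a sense of what this looks like, here's a sketch of a psql connection through the managed PgBouncer, assuming a server group named `demo`. The pooled connection uses port 6432 instead of the usual 5432:
+
+```bash
+psql "host=c.demo.postgres.database.azure.com port=6432 dbname=citus user=citus password={your_password} sslmode=require"
+```
+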
+To connect through PgBouncer, follow these steps:
1. Go to the **Connection strings** page for your server group in the Azure
portal.
diff --git a/articles/postgresql/hyperscale/howto-connect.md b/articles/postgresql/hyperscale/howto-connect.md
new file mode 100644
index 0000000000000..b529ff64ea350
--- /dev/null
+++ b/articles/postgresql/hyperscale/howto-connect.md
@@ -0,0 +1,90 @@
+---
+title: Connect to server - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Learn how to connect to and query a Hyperscale (Citus) server group
+ms.author: jonels
+author: jonels-msft
+ms.service: postgresql
+ms.subservice: hyperscale-citus
+ms.topic: how-to
+ms.date: 05/25/2022
+---
+
+# Connect to a server group
+
+Choose your database client below to learn how to configure it to connect to
+Hyperscale (Citus).
+
+# [pgAdmin](#tab/pgadmin)
+
+[pgAdmin](https://www.pgadmin.org/) is a popular and feature-rich open source
+administration and development platform for PostgreSQL.
+
+1. [Download](https://www.pgadmin.org/download/) and install pgAdmin.
+
+2. Open the pgAdmin application on your client computer. From the Dashboard,
+ select **Add New Server**.
+
+ ![pgAdmin dashboard](../media/howto-hyperscale-connect/pgadmin-dashboard.png)
+
+3. Choose a **Name** in the General tab. Any name will work.
+
+ ![pgAdmin general connection settings](../media/howto-hyperscale-connect/pgadmin-general.png)
+
+4. Enter connection details in the Connection tab.
+
+ ![pgAdmin db connection settings](../media/howto-hyperscale-connect/pgadmin-connection.png)
+
+ Customize the following fields:
+
+ * **Host name/address**: Obtain this value from the **Overview** page for your
+ server group in the Azure portal. It's listed there as **Coordinator name**.
+   It will be of the form `c.myservergroup.postgres.database.azure.com`.
+ * **Maintenance database**: use the value `citus`.
+ * **Username**: use the value `citus`.
+ * **Password**: the connection password.
+ * **Save password**: enable if desired.
+
+5. In the SSL tab, set **SSL mode** to **Require**.
+
+ ![pgAdmin ssl settings](../media/howto-hyperscale-connect/pgadmin-ssl.png)
+
+6. Select **Save** to save and connect to the database.
+
+# [psql](#tab/psql)
+
+The [psql utility](https://www.postgresql.org/docs/current/app-psql.html) is a
+terminal-based front-end to PostgreSQL. It enables you to type in queries
+interactively, issue them to PostgreSQL, and see the query results.
+
+1. Install psql. It's included with a [PostgreSQL
+ installation](https://www.postgresql.org/docs/current/tutorial-install.html),
+ or available separately in package managers for several operating systems.
+
+2. Obtain the connection string. In the server group page, select the
+ **Connection strings** menu item.
+
+ ![get connection string](../media/quickstart-connect-psql/get-connection-string.png)
+
+   Find the string marked **psql**. It will be of the form `psql
+ "host=c.servergroup.postgres.database.azure.com port=5432 dbname=citus
+ user=citus password={your_password} sslmode=require"`
+
+ * Copy the string.
+ * Replace "{your\_password}" with the administrative password you chose earlier.
+ * Notice the hostname starts with a `c.`, for instance
+ `c.demo.postgres.database.azure.com`. This prefix indicates the
+ coordinator node of the server group.
+   * The default dbname and username are `citus`; they can't be changed.
+
+3. In a local terminal prompt, paste the psql connection string, *substituting
+ your password for the string `{your_password}`*, then press enter.
+
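+   As a quick smoke test, you can also run a single query non-interactively with the `-c` flag. This is a sketch; substitute your own coordinator host name and password:
+
+   ```bash
+   psql "host=c.demo.postgres.database.azure.com port=5432 dbname=citus user=citus password={your_password} sslmode=require" -c "SELECT version();"
+   ```
+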
+---
+
+**Next steps**
+
+* Troubleshoot [connection issues](howto-troubleshoot-common-connection-issues.md).
+* [Verify TLS](howto-ssl-connection-security.md) certificates in your
+ connections.
+* Now that you can connect to the database, learn how to [build scalable
+ apps](howto-build-scalable-apps-overview.md).
diff --git a/articles/postgresql/media/howto-hyperscale-connect/pgadmin-connection.png b/articles/postgresql/media/howto-hyperscale-connect/pgadmin-connection.png
new file mode 100644
index 0000000000000..e5cd93e3e92a0
Binary files /dev/null and b/articles/postgresql/media/howto-hyperscale-connect/pgadmin-connection.png differ
diff --git a/articles/postgresql/media/howto-hyperscale-connect/pgadmin-dashboard.png b/articles/postgresql/media/howto-hyperscale-connect/pgadmin-dashboard.png
new file mode 100644
index 0000000000000..a3fc66b540282
Binary files /dev/null and b/articles/postgresql/media/howto-hyperscale-connect/pgadmin-dashboard.png differ
diff --git a/articles/postgresql/media/howto-hyperscale-connect/pgadmin-general.png b/articles/postgresql/media/howto-hyperscale-connect/pgadmin-general.png
new file mode 100644
index 0000000000000..294904779db11
Binary files /dev/null and b/articles/postgresql/media/howto-hyperscale-connect/pgadmin-general.png differ
diff --git a/articles/postgresql/media/howto-hyperscale-connect/pgadmin-ssl.png b/articles/postgresql/media/howto-hyperscale-connect/pgadmin-ssl.png
new file mode 100644
index 0000000000000..80f95fe99f12e
Binary files /dev/null and b/articles/postgresql/media/howto-hyperscale-connect/pgadmin-ssl.png differ
diff --git a/articles/postgresql/single-server/application-best-practices.md b/articles/postgresql/single-server/application-best-practices.md
index 477aff15c0c5f..63a0094f5fd42 100644
--- a/articles/postgresql/single-server/application-best-practices.md
+++ b/articles/postgresql/single-server/application-best-practices.md
@@ -49,7 +49,7 @@ You can use [Data-in Replication](./concepts-read-replicas.md) for failover scen
## Database deployment
### Configure CI/CD deployment pipeline
-Occasionally, you need to deploy changes to your database. In such cases, you can use continuous integration (CI) through [GitHub actions](https://github.com/Azure/postgresql/blob/master/README.md) for your PostgreSQL server to update the database by running a custom script against it.
+Occasionally, you need to deploy changes to your database. In such cases, you can use continuous integration (CI) through [GitHub Actions](https://github.com/Azure/postgresql/blob/master/README.md) for your PostgreSQL server to update the database by running a custom script against it.
### Define manual database deployment process
During manual database deployment, follow these steps to minimize downtime or reduce the risk of failed deployment:
diff --git a/articles/postgresql/single-server/how-to-deploy-github-action.md b/articles/postgresql/single-server/how-to-deploy-github-action.md
index 35e0b2fcdcbd1..605305fa63ed2 100644
--- a/articles/postgresql/single-server/how-to-deploy-github-action.md
+++ b/articles/postgresql/single-server/how-to-deploy-github-action.md
@@ -186,7 +186,7 @@ You will use the connection string as a GitHub secret.
1. Open the first result to see detailed logs of your workflow's run.
- :::image type="content" source="media/how-to-deploy-github-action/gitbub-action-postgres-success.png" alt-text="Log of GitHub actions run":::
+ :::image type="content" source="media/how-to-deploy-github-action/gitbub-action-postgres-success.png" alt-text="Log of GitHub Actions run":::
## Clean up resources
diff --git a/articles/private-link/private-endpoint-dns.md b/articles/private-link/private-endpoint-dns.md
index ddc9acb4a2913..c5d372ccdbc95 100644
--- a/articles/private-link/private-endpoint-dns.md
+++ b/articles/private-link/private-endpoint-dns.md
@@ -5,7 +5,7 @@ services: private-link
author: asudbring
ms.service: private-link
ms.topic: conceptual
-ms.date: 03/23/2022
+ms.date: 05/31/2022
ms.author: allensu
ms.custom: fasttrack-edit
@@ -65,7 +65,7 @@ For Azure services, use the recommended zone names as described in the following
| Azure Database for MariaDB (Microsoft.DBforMariaDB/servers) / mariadbServer | privatelink.mariadb.database.azure.com | mariadb.database.azure.com |
| Azure Key Vault (Microsoft.KeyVault/vaults) / vault | privatelink.vaultcore.azure.net | vault.azure.net vaultcore.azure.net |
| Azure Key Vault (Microsoft.KeyVault/managedHSMs) / Managed HSMs | privatelink.managedhsm.azure.net | managedhsm.azure.net |
-| Azure Kubernetes Service - Kubernetes API (Microsoft.ContainerService/managedClusters) / management | privatelink.{region}.azmk8s.io | {region}.azmk8s.io |
+| Azure Kubernetes Service - Kubernetes API (Microsoft.ContainerService/managedClusters) / management | privatelink.{region}.azmk8s.io {subzone}.privatelink.{region}.azmk8s.io | {region}.azmk8s.io |
| Azure Search (Microsoft.Search/searchServices) / searchService | privatelink.search.windows.net | search.windows.net |
| Azure Container Registry (Microsoft.ContainerRegistry/registries) / registry | privatelink.azurecr.io | azurecr.io |
| Azure App Configuration (Microsoft.AppConfiguration/configurationStores) / configurationStores | privatelink.azconfig.io | azconfig.io |
@@ -91,9 +91,10 @@ For Azure services, use the recommended zone names as described in the following
| Microsoft Purview (Microsoft.Purview) / portal| privatelink.purviewstudio.azure.com | purview.azure.com |
| Azure Digital Twins (Microsoft.DigitalTwins) / digitalTwinsInstances | privatelink.digitaltwins.azure.net | digitaltwins.azure.net |
| Azure HDInsight (Microsoft.HDInsight) | privatelink.azurehdinsight.net | azurehdinsight.net |
-| Azure Arc (Microsoft.HybridCompute) / hybridcompute | privatelink.his.arc.azure.com privatelink.guestconfiguration.azure.com | his.arc.azure.com guestconfiguration.azure.com |
+| Azure Arc (Microsoft.HybridCompute) / hybridcompute | privatelink.his.arc.azure.com privatelink.guestconfiguration.azure.com | his.arc.azure.com guestconfiguration.azure.com |
| Azure Media Services (Microsoft.Media) / keydelivery, liveevent, streamingendpoint | privatelink.media.azure.net | media.azure.net |
| Azure Data Explorer (Microsoft.Kusto) | privatelink.{region}.kusto.windows.net | {region}.kusto.windows.net |
+| Azure Static Web Apps (Microsoft.Web/staticSites) / staticSites | privatelink.azurestaticapps.net privatelink.{partitionId}.azurestaticapps.net | azurestaticapps.net {partitionId}.azurestaticapps.net |
1To use with IoT Hub's built-in Event Hub compatible endpoint. To learn more, see [private link support for IoT Hub's built-in endpoint](../iot-hub/virtual-network-support.md#built-in-event-hub-compatible-endpoint)
diff --git a/articles/private-link/private-endpoint-overview.md b/articles/private-link/private-endpoint-overview.md
index 8aff79cf9f827..91e3e074cd919 100644
--- a/articles/private-link/private-endpoint-overview.md
+++ b/articles/private-link/private-endpoint-overview.md
@@ -6,7 +6,7 @@ author: asudbring
# Customer intent: As someone who has a basic network background but is new to Azure, I want to understand the capabilities of private endpoints so that I can securely connect to my Azure PaaS services within the virtual network.
ms.service: private-link
ms.topic: conceptual
-ms.date: 02/17/2022
+ms.date: 05/31/2022
ms.author: allensu
---
# What is a private endpoint?
@@ -110,6 +110,7 @@ A private-link resource is the destination target of a specified private endpoin
| Azure App Service | Microsoft.Web/hostingEnvironments | hosting environment |
| Azure App Service | Microsoft.Web/sites | sites |
| Azure Static Web Apps | Microsoft.Web/staticSites | staticSites |
+| Azure Media Services | Microsoft.Media/mediaservices | keydelivery, liveevent, streamingendpoint |
> [!NOTE]
> You can create private endpoints only on a General Purpose v2 (GPv2) storage account.
diff --git a/articles/purview/concept-guidelines-pricing.md b/articles/purview/concept-guidelines-pricing.md
index 36f5745137dd3..3a37cb3e6a8be 100644
--- a/articles/purview/concept-guidelines-pricing.md
+++ b/articles/purview/concept-guidelines-pricing.md
@@ -21,16 +21,16 @@ Microsoft Purview enables a unified governance experience by providing a single
## Factors impacting Azure Pricing
-There are **direct** and **indirect** costs that need to be considered while planning the Microsoft Purview budgeting and cost management.
+There are [**direct**](#direct-costs) and [**indirect**](#indirect-costs) costs to consider when planning Microsoft Purview budgeting and cost management.
-### Direct costs
+## Direct costs
Direct costs impacting Microsoft Purview pricing are based on the following three dimensions:
-- **Elastic data map**
-- **Automated scanning & classification**
-- **Advanced resource sets**
+- [**Elastic data map**](#elastic-data-map)
+- [**Automated scanning & classification**](#automated-scanning-classification-and-ingestion)
+- [**Advanced resource sets**](#advanced-resource-sets)
-#### Elastic data map
+### Elastic data map
- The **Data map** is the foundation of the Microsoft Purview architecture and so needs to be up to date with asset information in the data estate at any given point
@@ -40,7 +40,7 @@ Direct costs impacting Microsoft Purview pricing are based on the following thre
- However, the data map scales automatically between the minimal and maximal limits of that elasticity window, to cater to changes in the data map with respect to two key factors - **operation throughput** and **metadata storage**
-##### Operation throughput
+#### Operation throughput
- An event driven factor based on the Create, Read, Update, Delete operations performed on the data map
- Some examples of the data map operations would be:
@@ -60,11 +60,11 @@ Direct costs impacting Microsoft Purview pricing are based on the following thre
- The **burst duration** is the percentage of the month that such bursts (in elasticity) are expected because of growing metadata or higher number of operations on the data map
-##### Metadata storage
+#### Metadata storage
- If the number of assets in the data estate is reduced, and they're then removed from the data map through subsequent incremental scans, the storage component automatically shrinks, and so the data map scales down
-#### Automated scanning, classification and ingestion
+### Automated scanning, classification, and ingestion
There are two major automated processes that can trigger ingestion of metadata into Microsoft Purview:
1. Automatic scans using native [connectors](azure-purview-connector-overview.md). This process includes three main steps:
@@ -75,7 +75,7 @@ There are two major automated processes that can trigger ingestion of metadata i
2. Automated ingestion using Azure Data Factory and/or Azure Synapse pipelines. This process includes:
- Ingestion of metadata and lineage into Microsoft Purview if Microsoft Purview account is connected to any Azure Data Factory or Azure Synapse pipelines.
-##### 1. Automatic scans using native connectors
+#### 1. Automatic scans using native connectors
- A **full scan** processes all assets within a selected scope of a data source whereas an **incremental scan** detects and processes assets, which have been created, modified, or deleted since the previous successful scan
- All scans (full or Incremental scans) will pick up **updated, modified, or deleted** assets
@@ -98,11 +98,11 @@ There are two major automated processes that can trigger ingestion of metadata i
- Align your scan schedules with the size of your Self-Hosted Integration Runtime (SHIR) virtual machines (VMs) to avoid extra costs linked to virtual machines
-##### 2. Automated ingestion using Azure Data Factory and/or Azure Synapse pipelines
+#### 2. Automated ingestion using Azure Data Factory and/or Azure Synapse pipelines
- Metadata and lineage are ingested from Azure Data Factory or Azure Synapse pipelines every time the pipelines run in the source system.
-#### Advanced resource sets
+### Advanced resource sets
- Microsoft Purview uses **resource sets** to address the challenge of mapping large numbers of data assets to a single logical resource. Resource sets scan all the files in the data lake and find patterns (GUIDs, localization patterns, and so on) to group related files as a single asset in the data map
@@ -117,7 +117,7 @@ There are two major automated processes that can trigger ingestion of metadata i
- It is important to note that billing for Advanced Resource Sets is based on the compute used by the offline tier to aggregate resource set information and is dependent on the size/number of resource sets in your catalog
-### Indirect costs
+## Indirect costs
Indirect costs impacting Microsoft Purview pricing to be considered are:
diff --git a/articles/remote-rendering/overview/system-requirements.md b/articles/remote-rendering/overview/system-requirements.md
index 7381756948f66..50f5175e3a0e4 100644
--- a/articles/remote-rendering/overview/system-requirements.md
+++ b/articles/remote-rendering/overview/system-requirements.md
@@ -111,7 +111,7 @@ The following software must be installed:
For development with Unity, install a supported version of Unity [(download)](https://unity3d.com/get-unity/download). We recommend using Unity Hub for managing installations.
> [!IMPORTANT]
-> In addition to the supported versions mentioned below, make sure to check out the [Unity known issues page](/mixed-reality/develop/unity/known-issues).
+> In addition to the supported versions mentioned below, make sure to check out the [Unity known issues page](/windows/mixed-reality/develop/unity/known-issues).
Make sure to include the following modules in your Unity installation:
* **UWP** - Universal Windows Platform Build Support
@@ -136,4 +136,4 @@ For Unity 2020, use latest version of Unity 2020.3.
## Next steps
-* [Quickstart: Render a model with Unity](../quickstarts/render-model.md)
\ No newline at end of file
+* [Quickstart: Render a model with Unity](../quickstarts/render-model.md)
diff --git a/articles/route-server/vmware-solution-default-route.md b/articles/route-server/vmware-solution-default-route.md
index 2a753fc5d6c76..03a7bf5e1b9ee 100644
--- a/articles/route-server/vmware-solution-default-route.md
+++ b/articles/route-server/vmware-solution-default-route.md
@@ -49,7 +49,7 @@ If advertising less specific prefixes isn't possible as in the option described
:::image type="content" source="./media/scenarios/vmware-solution-to-on-premises.png" alt-text="Diagram of AVS to on-premises communication with Route Server in two regions.":::
-Note that some sort of encapsulation protocol such as VXLAN or IPsec is required between the NVAs. The reason why encapsulation is needed is because the NVA NICs would learn the routes from Azure Route Server with the NVA as next hop, and create a routing loop.
+Note that some sort of encapsulation protocol such as VXLAN or IPsec is required between the NVAs. Encapsulation is needed because the NVA NICs would otherwise learn the routes from Azure Route Server with the NVA as next hop, and create a routing loop. An alternative to using an overlay is to use secondary NICs in the NVA that don't learn the routes from Azure Route Server, and to configure UDRs so that Azure can route traffic to the remote environment over those NICs. You can find more details in [Enterprise-scale network topology and connectivity for Azure VMware Solution][caf_avs_nw].
The main difference between this dual VNet design and the previously described single VNet design is that with two VNets you have full control over what is advertised to each ExpressRoute circuit, which allows for a more dynamic and granular configuration. In comparison, in the single-VNet design described earlier in this document, a common set of supernets or less specific prefixes is sent down both circuits to attract traffic to the VNet. Additionally, the single-VNet design has a static configuration component in the UDRs that are required in the Gateway Subnet. Hence, although less cost-effective (two ExpressRoute gateways and two sets of NVAs are required), the double-VNet design might be a better alternative for very dynamic routing environments.
@@ -57,3 +57,5 @@ The main difference between this dual VNet design and the previously described s
* [Learn how Azure Route Server works with ExpressRoute](expressroute-vpn-support.md)
* [Learn how Azure Route Server works with a network virtual appliance](resource-manager-template-samples.md)
+
+[caf_avs_nw]: /azure/cloud-adoption-framework/scenarios/azure-vmware/eslz-network-topology-connectivity
diff --git a/articles/search/TOC.yml b/articles/search/TOC.yml
index 36f70e17ea70a..9011280a44d7d 100644
--- a/articles/search/TOC.yml
+++ b/articles/search/TOC.yml
@@ -412,8 +412,6 @@
href: knowledge-store-projections-examples.md
- name: Projection example
href: knowledge-store-projection-example-long.md
- - name: View with Storage Browser
- href: knowledge-store-view-storage-explorer.md
- name: Connect with Power BI
href: knowledge-store-connect-power-bi.md
- name: Queries
diff --git a/articles/search/cognitive-search-concept-intro.md b/articles/search/cognitive-search-concept-intro.md
index 7528a8e05cf08..ca09d009994af 100644
--- a/articles/search/cognitive-search-concept-intro.md
+++ b/articles/search/cognitive-search-concept-intro.md
@@ -135,7 +135,7 @@ The output of AI enrichment is either a [fully text-searchable index](search-wha
### Check content in a knowledge store
-In Azure Storage, a [knowledge store](knowledge-store-concept-intro.md) can assume the following forms: a blob container of JSON documents, a blob container of image objects, or tables in Table Storage. You can use [Storage Browser](knowledge-store-view-storage-explorer.md), [Power BI](knowledge-store-connect-power-bi.md), or any app that connects to Azure Storage to access your content.
+In Azure Storage, a [knowledge store](knowledge-store-concept-intro.md) can assume the following forms: a blob container of JSON documents, a blob container of image objects, or tables in Table Storage. You can use [Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md), [Power BI](knowledge-store-connect-power-bi.md), or any app that connects to Azure Storage to access your content.
+ A blob container captures enriched documents in their entirety, which is useful if you're creating a feed into other processes.
diff --git a/articles/search/cognitive-search-quickstart-blob.md b/articles/search/cognitive-search-quickstart-blob.md
index 1eaab26f5d3a1..bdb2c7b962dcd 100644
--- a/articles/search/cognitive-search-quickstart-blob.md
+++ b/articles/search/cognitive-search-quickstart-blob.md
@@ -7,7 +7,7 @@ author: HeidiSteen
ms.author: heidist
ms.service: cognitive-search
ms.topic: quickstart
-ms.date: 10/07/2021
+ms.date: 05/31/2022
ms.custom: mode-ui
---
# Quickstart: Translate text and recognize entities using the Import data wizard
@@ -26,7 +26,7 @@ Before you begin, have the following prerequisites in place:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
-+ Azure Cognitive Search service. [Create a service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) under your current subscription. You can use a free service for this quickstart.
++ Azure Cognitive Search. [Create a service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices). You can use a free service for this quickstart.
+ Azure Storage account with Blob Storage. [Create a storage account](../storage/common/storage-account-create.md?tabs=azure-portal) or [find an existing account](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2FstorageAccounts/).
@@ -35,7 +35,7 @@ Before you begin, have the following prerequisites in place:
+ Choose the StorageV2 (general purpose V2).
> [!NOTE]
-> This quickstart also uses [Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for the AI. Because the workload is so small, Cognitive Services is tapped behind the scenes for free processing for up to 20 transactions. This means that you can complete this exercise without having to create an additional Cognitive Services resource.
+> This quickstart uses [Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for the AI. Because the workload is so small, Cognitive Services is tapped behind the scenes for free processing for up to 20 transactions. You can complete this exercise without having to create a Cognitive Services resource.
## Set up your data
@@ -74,7 +74,7 @@ You are now ready to move on the Import data wizard.
Next, configure AI enrichment to invoke language detection, text translation, and entity recognition.
-1. For this quickstart, we are using the **Free** Cognitive Services resource. The sample data consists of 10 files, so the daily, per-indexer allotment of 20 free transactions on Cognitive Services is sufficient for this quickstart.
+1. For this quickstart, you can use the **Free** Cognitive Services resource. The sample data consists of 10 files, so the daily, per-indexer allotment of 20 free transactions on Cognitive Services is sufficient for this quickstart.
:::image type="content" source="media/cognitive-search-quickstart-blob/free-enrichments.png" alt-text="Attach free Cognitive Services processing" border="true":::
@@ -104,7 +104,7 @@ For this quickstart, the wizard does a good job setting reasonable defaults:
:::image type="content" source="media/cognitive-search-quickstart-blob/index-fields-lang-entities.png" alt-text="Index fields" border="true":::
-Marking a field as **Retrievable** does not mean that the field *must* be present in the search results. You can precisely control search results composition by using the **$select** query parameter to specify which fields to include. For text-heavy fields like `content`, the **$select** parameter is your solution for shaping manageable search results to the human users of your application, while ensuring client code has access to all the information it needs via the **Retrievable** attribute.
+Marking a field as **Retrievable** doesn't mean that the field *must* be present in the search results. You can precisely control search results composition by using the **$select** query parameter to specify which fields to include. For text-heavy fields like `content`, the **$select** parameter is your solution for shaping manageable search results to the human users of your application, while ensuring client code has access to all the information it needs via the **Retrievable** attribute.
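+
+For example, here's a minimal sketch of a query trimmed with **$select**, using the [Azure.Search.Documents](/dotnet/api/azure.search.documents) client library. The endpoint, key, index, and field names are placeholders, not values created by this quickstart:
+
+```csharp
+using System;
+using Azure;
+using Azure.Search.Documents;
+using Azure.Search.Documents.Models;
+
+// Hypothetical service endpoint, query key, and index name.
+var client = new SearchClient(
+    new Uri("https://<your-service>.search.windows.net"),
+    "<your-index>",
+    new AzureKeyCredential("<query-api-key>"));
+
+// Ask for just two fields instead of the text-heavy content field.
+var options = new SearchOptions();
+options.Select.Add("metadata_storage_name");
+options.Select.Add("language");
+
+SearchResults<SearchDocument> results = client.Search<SearchDocument>("*", options);
+foreach (SearchResult<SearchDocument> result in results.GetResults())
+{
+    Console.WriteLine(result.Document["metadata_storage_name"]);
+}
+```
+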
### Step 4 - Configure the indexer
@@ -151,7 +151,7 @@ When you're working in your own subscription, it's a good idea at the end of a p
You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
-If you are using a free service, remember that you are limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
+If you're using a free service, remember that you're limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
## Next steps
diff --git a/articles/search/cognitive-search-quickstart-ocr.md b/articles/search/cognitive-search-quickstart-ocr.md
index bda22c1d83878..ab0e8d4be085c 100644
--- a/articles/search/cognitive-search-quickstart-ocr.md
+++ b/articles/search/cognitive-search-quickstart-ocr.md
@@ -7,7 +7,7 @@ author: HeidiSteen
ms.author: heidist
ms.service: cognitive-search
ms.topic: quickstart
-ms.date: 10/07/2021
+ms.date: 05/31/2022
ms.custom: mode-ui
---
@@ -27,7 +27,7 @@ Before you begin, have the following prerequisites in place:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
-+ Azure Cognitive Search service. [Create a service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) under your current subscription. You can use a free service for this quickstart.
++ Azure Cognitive Search. [Create a service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices). You can use a free service for this quickstart.
+ Azure Storage account with Blob Storage. [Create a storage account](../storage/common/storage-account-create.md?tabs=azure-portal) or [find an existing account](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2FstorageAccounts/).
@@ -36,7 +36,7 @@ Before you begin, have the following prerequisites in place:
+ Choose the StorageV2 (general purpose V2).
> [!NOTE]
-> This quickstart also uses [Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for the AI. Because the workload is so small, Cognitive Services is tapped behind the scenes for free processing for up to 20 transactions. This means that you can complete this exercise without having to create an additional Cognitive Services resource.
+> This quickstart uses [Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for the AI. Because the workload is so small, Cognitive Services is tapped behind the scenes for free processing for up to 20 transactions. You can complete this exercise without having to create a Cognitive Services resource.
## Set up your data
@@ -53,7 +53,7 @@ In the following steps, set up a blob container in Azure Storage to store hetero
You should have 10 files containing photographs of signs.
-There is a second subfolder that includes landmark buildings. If you want to [attach a Cognitive Services key](cognitive-search-attach-cognitive-services.md), you can include these files as well to see how image analysis works over image files that do not include embedded text. The key is necessary for jobs that exceed the free allotment.
+There is a second subfolder that includes landmark buildings. If you want to [attach a Cognitive Services key](cognitive-search-attach-cognitive-services.md), you can include these files as well to see how image analysis works over image files that don't include embedded text. The key is necessary for jobs that exceed the free allotment.
You are now ready to move on to the Import data wizard.
@@ -75,7 +75,7 @@ You are now ready to move on the Import data wizard.
Next, configure AI enrichment to invoke OCR and image analysis.
-1. For this quickstart, we are using the **Free** Cognitive Services resource. The sample data consists of 19 files, so the daily, per-indexer allotment of 20 free transactions on Cognitive Services is sufficient for this quickstart.
+1. For this quickstart, you can use the **Free** Cognitive Services resource. The sample data consists of 19 files, so the daily, per-indexer allotment of 20 free transactions on Cognitive Services is sufficient for this quickstart.
:::image type="content" source="media/cognitive-search-quickstart-blob/free-enrichments.png" alt-text="Attach free Cognitive Services processing" border="true":::
@@ -103,7 +103,7 @@ For this quickstart, the wizard does a good job setting reasonable defaults:
:::image type="content" source="media/cognitive-search-quickstart-blob/index-fields-ocr-images.png" alt-text="Index fields" border="true":::
-Marking a field as **Retrievable** does not mean that the field *must* be present in the search results. You can precisely control search results composition by using the **$select** query parameter to specify which fields to include. For text-heavy fields like `content`, the **$select** parameter is your solution for shaping manageable search results to the human users of your application, while ensuring client code has access to all the information it needs via the **Retrievable** attribute.
+Marking a field as **Retrievable** doesn't mean that the field *must* be present in the search results. You can precisely control search results composition by using the **$select** query parameter to specify which fields to include. For text-heavy fields like `content`, the **$select** parameter is your solution for shaping manageable search results to the human users of your application, while ensuring client code has access to all the information it needs via the **Retrievable** attribute.
### Step 4 - Configure the indexer
@@ -151,7 +151,7 @@ When you're working in your own subscription, it's a good idea at the end of a p
You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
-If you are using a free service, remember that you are limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
+If you're using a free service, remember that you're limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
## Next steps
diff --git a/articles/search/knowledge-store-concept-intro.md b/articles/search/knowledge-store-concept-intro.md
index 1edd95eb443fc..f0cfbba73f7ff 100644
--- a/articles/search/knowledge-store-concept-intro.md
+++ b/articles/search/knowledge-store-concept-intro.md
@@ -1,26 +1,28 @@
---
title: Knowledge store concepts
titleSuffix: Azure Cognitive Search
-description: Send enriched documents to Azure Storage where you can view, reshape, and consume enriched documents in Azure Cognitive Search and in other applications.
+description: A knowledge store is enriched content created by an Azure Cognitive Search skillset and saved to Azure Storage for use in other apps and non-search scenarios.
author: HeidiSteen
manager: nitinme
ms.author: heidist
ms.service: cognitive-search
ms.topic: conceptual
-ms.date: 09/02/2021
+ms.date: 05/31/2022
---
# Knowledge store in Azure Cognitive Search
-Knowledge store is a data sink created by a Cognitive Search [AI enrichment pipeline](cognitive-search-concept-intro.md) that stores enriched content in tables and blob containers in Azure Storage for independent analysis or downstream processing in non-search scenarios, like knowledge mining.
+Knowledge store is a data sink created by a [Cognitive Search enrichment pipeline](cognitive-search-concept-intro.md) that stores AI-enriched content in tables and blob containers in Azure Storage for independent analysis or downstream processing in non-search scenarios like knowledge mining.
-If you have used cognitive skills in the past, you already know that *skillsets* move a document through a sequence of enrichments that invoke atomic transformations, such as recognizing entities or translating text. The outcome can be a search index, or projections in a knowledge store. The two outputs, search index and knowledge store, are mutually exclusive products of the same pipeline; derived from the same inputs, but resulting in output that is structured, stored, and used in different applications.
+If you've used cognitive skills in the past, you already know that enriched content is created by *skillsets*. Skillsets move a document through a sequence of enrichments that invoke atomic transformations, such as recognizing entities or translating text.
+
+Output can be a search index, or projections in a knowledge store. The two outputs, search index and knowledge store, are mutually exclusive products of the same pipeline. They are derived from the same inputs, but their content is structured, stored, and used in different applications.
:::image type="content" source="media/knowledge-store-concept-intro/knowledge-store-concept-intro.svg" alt-text="Pipeline with skillset" border="false":::
Physically, a knowledge store is [Azure Storage](../storage/common/storage-account-overview.md), either Azure Table Storage, Azure Blob Storage, or both. Any tool or process that can connect to Azure Storage can consume the contents of a knowledge store.
-Viewed through Storage Browser, a knowledge store looks like any other collection of tables, objects, or files. The following example shows a knowledge store composed of three tables with fields that are either carried forward from the data source, or created through enrichments (see "sentiment score" and "translated_text").
+Viewed through the Azure portal, a knowledge store looks like any other collection of tables, objects, or files. The following screenshot shows a knowledge store composed of three tables. You can adopt a naming convention, such as a "kstore" prefix, to keep your content together.
:::image type="content" source="media/knowledge-store-concept-intro/kstore-in-storage-explorer.png" alt-text="Skills read and write from enrichment tree" border="true":::
@@ -72,7 +74,9 @@ The type of projection you specify in this structure determines the type of stor
## Create a knowledge store
-To create knowledge store, use the portal or an API. You will need [Azure Storage](../storage/index.yml), a [skillset](cognitive-search-working-with-skillsets.md), and an [indexer](search-indexer-overview.md). Because indexers require a search index, you will also need to provide an index definition.
+To create a knowledge store, use the portal or an API.
+
+You'll need [Azure Storage](../storage/index.yml), a [skillset](cognitive-search-working-with-skillsets.md), and an [indexer](search-indexer-overview.md). Because indexers require a search index, you'll also need to provide an index definition.
Go with the portal approach for the fastest route to a finished knowledge store. Or, choose the REST API for a deeper understanding of how objects are defined and related.
@@ -80,17 +84,15 @@ Go with the portal approach for the fastest route to a finished knowledge store.
[**Create your first knowledge store in four steps**](knowledge-store-create-portal.md) using the **Import data** wizard.
-1. [Sign in to Azure portal](https://portal.azure.com).
-
-1. Define your data source.
+1. Define a data source that contains the data you want to enrich.
-1. Define your skillset and specify a knowledge store.
+1. Define a skillset. The skillset specifies enrichment steps and the knowledge store.
-1. Define an index schema. The wizard requires it and can infer one for you.
+1. Define an index schema. You might not need the index itself, but indexers require one. The wizard can infer a schema for you.
-1. Complete the wizard. Extraction, enrichment, and storage occur in this last step.
+1. Complete the wizard. Data extraction, enrichment, and knowledge store creation occur in this last step.
-The wizard automates tasks that you would otherwise have to be handled manually. Specifically, both shaping and projections (definitions of physical data structures in Azure Storage) are created for you.
+The wizard automates several tasks. Specifically, both shaping and projections (definitions of physical data structures in Azure Storage) are created for you.
### [**REST**](#tab/kstore-rest)
@@ -117,9 +119,9 @@ For .NET developers, use the [KnowledgeStore Class](/dotnet/api/azure.search.doc
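+For .NET, a rough sketch of attaching a knowledge store to a skillset could look like the following. The projection, table name, connection string, and the `skills` and `indexerClient` variables are illustrative placeholders you'd define elsewhere:
+
+```csharp
+using Azure.Search.Documents.Indexes.Models;
+
+// Hypothetical projection that routes first-level document fields to a table.
+var projection = new KnowledgeStoreProjection();
+projection.Tables.Add(new KnowledgeStoreTableProjectionSelector("docs")
+{
+    GeneratedKeyName = "DocumentId",
+    Source = "/projection"
+});
+
+var skillset = new SearchIndexerSkillset("my-skillset", skills)
+{
+    KnowledgeStore = new KnowledgeStore("<storage-connection-string>", new[] { projection })
+};
+
+indexerClient.CreateOrUpdateSkillset(skillset);
+```
+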
## Connect with apps
-Once the enrichments exist in storage, any tool or technology that connects to Azure Blob or Table Storage can be used to explore, analyze, or consume the contents. The following list is a start:
+Once enriched content exists in storage, any tool or technology that connects to Azure Storage can be used to explore, analyze, or consume the contents. The following list is a start:
-+ [Storage Browser](knowledge-store-view-storage-explorer.md) to view enriched document structure and content. Consider this as your baseline tool for viewing knowledge store contents.
++ [Storage Explorer](../storage/blobs/quickstart-storage-explorer.md) or Storage browser (preview) in Azure portal to view enriched document structure and content. Consider this as your baseline tool for viewing knowledge store contents.
+ [Power BI](knowledge-store-connect-power-bi.md) for reporting and analysis.
@@ -129,7 +131,7 @@ Once the enrichments exist in storage, any tool or technology that connects to A
Each time you run the indexer and skillset, the knowledge store is updated if the skillset or underlying source data has changed. Any changes picked up by the indexer are propagated through the enrichment process to the projections in the knowledge store, ensuring that your projected data is a current representation of content in the originating data source.
-> [!Note]
+> [!NOTE]
> While you can edit the data in the projections, any edits will be overwritten on the next pipeline invocation, assuming the document in source data is updated.
### Changes in source data
diff --git a/articles/search/knowledge-store-create-portal.md b/articles/search/knowledge-store-create-portal.md
index 5cdfc892da4f7..389076d07ceb9 100644
--- a/articles/search/knowledge-store-create-portal.md
+++ b/articles/search/knowledge-store-create-portal.md
@@ -7,15 +7,15 @@ ms.author: heidist
manager: nitinme
ms.service: cognitive-search
ms.topic: quickstart
-ms.date: 05/11/2022
+ms.date: 05/31/2022
ms.custom: mode-ui
---
# Quickstart: Create a knowledge store in the Azure portal
-[Knowledge store](knowledge-store-concept-intro.md) is a feature of Azure Cognitive Search that accepts output from an [AI enrichment pipeline](cognitive-search-concept-intro.md) and makes it available in Azure Storage for downstream apps and workloads. Enrichments created by the pipeline - such as translated text, OCR text, tagged images, and recognized entities - are projected into tables or blobs, where they can be accessed by any app or workload that connects to Azure Storage.
+[Knowledge store](knowledge-store-concept-intro.md) is a feature of Azure Cognitive Search that accepts output from an [AI enrichment pipeline](cognitive-search-concept-intro.md) and makes it available in Azure Storage for downstream apps and workloads.
-In this quickstart, you'll set up your data and then run the **Import data** wizard to create an enrichment pipeline that also generates a knowledge store. The knowledge store will contain original text content pulled from the source (customer reviews of a hotel), plus AI-generated content that includes a sentiment label, key phrase extraction, and text translation of non-English customer comments.
+In this quickstart, you'll set up some sample data and then run the **Import data** wizard to create an enrichment pipeline that also generates a knowledge store. The knowledge store will contain original text content pulled from the source (customer reviews of a hotel), plus AI-generated content that includes a sentiment label, key phrase extraction, and text translation of non-English customer comments.
> [!NOTE]
> This quickstart shows you the fastest route to a finished knowledge store in Azure Storage. For more detailed explanations of each step, see [Create a knowledge store in REST](knowledge-store-create-rest.md) instead.
@@ -139,15 +139,19 @@ In this wizard step, configure an indexer that will pull together the data sourc
In the **Overview** page, open the **Indexers** tab in the middle of the page, and then select **hotels-reviews-idxr**. Within a minute or two, status should progress from "In progress" to "Success" with zero errors and warnings.
-## Check tables in Storage Browser
+## Check tables in Azure portal
-In the Azure portal, switch to your Azure Storage account and use **Storage Browser** to view the new tables. You should see three tables, one for each projection that was offered in the "Save enrichments" section of the "Add enrichments" page.
+1. In the Azure portal, [open the Storage account](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2FstorageAccounts/) used to create the knowledge store.
-+ "hotelReviewssDocuments" contains all of the first-level nodes of a document's enrichment tree that are not collections.
+1. In the storage account's left navigation pane, select **Storage browser (preview)** to view the new tables.
-+ "hotelReviewssKeyPhrases" contains a long list of just the key phrases extracted from all reviews. Skills that output collections (arrays), such as key phrases and entities, will have output sent to a standalone table.
+ You should see three tables, one for each projection that was offered in the "Save enrichments" section of the "Add enrichments" page.
-+ "hotelReviewssPages" contains enriched fields created over each page that was split from the document. In this skillset and data source, page-level enrichments consisting of sentiment labels and translated text. A pages table (or a sentences table if you specify that particular level of granularity) is created when you choose "pages" granularity in the skillset definition.
+ + "hotelReviewssDocuments" contains all of the first-level nodes of a document's enrichment tree that are not collections.
+
+ + "hotelReviewssKeyPhrases" contains a long list of just the key phrases extracted from all reviews. Skills that output collections (arrays), such as key phrases and entities, will have output sent to a standalone table.
+
+ + "hotelReviewssPages" contains enriched fields created over each page that was split from the document. In this skillset and data source, page-level enrichments consisting of sentiment labels and translated text. A pages table (or a sentences table if you specify that particular level of granularity) is created when you choose "pages" granularity in the skillset definition.
All of these tables contain ID columns to support table relationships in other tools and apps. When you open a table, scroll past these fields to view the content fields added by the pipeline.
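+
+For example, here's a minimal sketch of reading past the ID columns programmatically with the [Azure.Data.Tables](/dotnet/api/azure.data.tables) library. The table name comes from this quickstart; the connection string is a placeholder:
+
+```csharp
+using System;
+using Azure.Data.Tables;
+
+var tableClient = new TableClient("<storage-connection-string>", "hotelReviewssDocuments");
+
+// Print each entity's content fields, skipping properties that look like
+// the generated key columns used for table relationships.
+foreach (TableEntity entity in tableClient.Query<TableEntity>())
+{
+    foreach (var property in entity)
+    {
+        if (!property.Key.EndsWith("Id"))
+        {
+            Console.WriteLine($"{property.Key}: {property.Value}");
+        }
+    }
+}
+```
+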
diff --git a/articles/search/knowledge-store-create-rest.md b/articles/search/knowledge-store-create-rest.md
index 08e4257a7b76c..abcdb8e375abd 100644
--- a/articles/search/knowledge-store-create-rest.md
+++ b/articles/search/knowledge-store-create-rest.md
@@ -8,18 +8,18 @@ manager: nitinme
ms.author: heidist
ms.service: cognitive-search
ms.topic: how-to
-ms.date: 05/11/2022
+ms.date: 05/31/2022
---
# Create a knowledge store using REST and Postman
-Knowledge store is a feature of Azure Cognitive Search that sends skillset output from an [AI enrichment pipeline](cognitive-search-concept-intro.md) to Azure Storage for subsequent knowledge mining, data analysis, or downstream processing. After the knowledge store is populated, you can use tools like [Storage Browser](knowledge-store-view-storage-explorer.md) or [Power BI](knowledge-store-connect-power-bi.md) to explore the content.
+[Knowledge store](knowledge-store-concept-intro.md) is a feature of Azure Cognitive Search that accepts output from an [AI enrichment pipeline](cognitive-search-concept-intro.md) and makes it available in Azure Storage for downstream apps and workloads. After the knowledge store is populated, use tools like [Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) or [Power BI](knowledge-store-connect-power-bi.md) to explore the content.
-In this article, you'll learn how to use the REST API to ingest, enrich, and explore a set of customer reviews of hotel stays in a knowledge store in Azure Storage. The end result is a knowledge store that contains original text content pulled from the source, plus AI-generated content that includes a sentiment score, key phrase extraction, language detection, and text translation of non-English customer comments.
+In this article, you'll learn how to use the REST API to ingest, enrich, and explore a set of customer reviews of hotel stays in a knowledge store. The knowledge store contains original text content pulled from the source, plus AI-generated content that includes a sentiment score, key phrase extraction, language detection, and text translation of non-English customer comments.
To make the initial data set available, the hotel reviews are first imported into Azure Blob Storage. After processing, the results are saved as a knowledge store in Azure Table Storage.
> [!NOTE]
-> The [source code](https://github.com/Azure-Samples/azure-search-postman-samples/tree/master/knowledge-store) for this article includes a Postman collection containing all of the requests. If you don't want to use Postman, you can [create the same knowledge store in the Azure portal](knowledge-store-create-portal.md) using the Import data wizard.
+> This article provides detailed explanations of each step. For a faster approach, see [Create a knowledge store in Azure portal](knowledge-store-create-portal.md) instead.
## Prerequisites
@@ -35,11 +35,11 @@ To make the initial data set available, the hotel reviews are first imported int
## Load data
-This uses Azure Cognitive Search, Azure Blob Storage, and [Azure Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for the AI. Because the workload is so small, Cognitive Services is tapped behind the scenes to provide free processing for up to 20 transactions daily. A small workload means that you can skip creating or attaching a Cognitive Services resource.
+This step uses Azure Cognitive Search, Azure Blob Storage, and [Azure Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for the AI. Because the workload is so small, Cognitive Services is tapped behind the scenes to provide free processing for up to 20 transactions daily. A small workload means that you can skip creating or attaching a Cognitive Services resource.
1. [Download HotelReviews_Free.csv](https://knowledgestoredemo.blob.core.windows.net/hotel-reviews/HotelReviews_Free.csv?sp=r&st=2019-11-04T01:23:53Z&se=2025-11-04T16:00:00Z&spr=https&sv=2019-02-02&sr=b&sig=siQgWOnI%2FDamhwOgxmj11qwBqqtKMaztQKFNqWx00AY%3D). This CSV file contains hotel review data (originally from Kaggle.com), with 19 pieces of customer feedback about a single hotel.
-1. In the Azure Storage resource, use **Storage Browser** to create a blob container named **hotel-reviews**.
+1. In the Azure portal, on the Azure Storage resource page, use **Storage Browser** to create a blob container named **hotel-reviews**.
1. Select **Upload** at the top of the page to load the **HotelReviews-Free.csv** file you downloaded from the previous step.
@@ -397,7 +397,7 @@ After you send each request, the search service should respond with a 201 succes
In the Azure portal, go to the Azure Cognitive Search service's **Overview** page. Select the **Indexers** tab, and then select **hotels-reviews-ixr**. Within a minute or two, status should progress from "In progress" to "Success" with zero errors and warnings.
-## Check tables in Storage Browser
+## Check tables in Azure portal
In the Azure portal, switch to your Azure Storage account and use **Storage Browser** to view the new tables. You should see six tables, one for each projection defined in the skillset.
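+
+If you'd rather verify programmatically, a minimal sketch with the [Azure.Data.Tables](/dotnet/api/azure.data.tables) client library (the connection string is a placeholder) can list the projection tables:
+
+```csharp
+using System;
+using Azure.Data.Tables;
+
+var serviceClient = new TableServiceClient("<storage-connection-string>");
+
+// List the tables the indexer created, one per projection.
+foreach (TableItem table in serviceClient.Query())
+{
+    Console.WriteLine(table.Name);
+}
+```
+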
@@ -429,9 +429,7 @@ If you are using a free service, remember that you are limited to three indexes,
## Next steps
-Now that you've enriched your data by using Cognitive Services and projected the results to a knowledge store, you can use Storage Browser or other apps to explore your enriched data set.
-
-To learn how to explore this knowledge store by using Storage Browser, see this walkthrough:
+Now that you've enriched your data by using Cognitive Services and projected the results to a knowledge store, you can use Storage Explorer or other apps to explore your enriched data set.
> [!div class="nextstepaction"]
-> [View with Storage Browser](knowledge-store-view-storage-explorer.md)
\ No newline at end of file
+> [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md)
\ No newline at end of file
diff --git a/articles/search/knowledge-store-projection-overview.md b/articles/search/knowledge-store-projection-overview.md
index 19555e90d3d50..2d49582a8ce1a 100644
--- a/articles/search/knowledge-store-projection-overview.md
+++ b/articles/search/knowledge-store-projection-overview.md
@@ -138,7 +138,7 @@ Projections have a lifecycle that is tied to the source data in your data source
After the indexer is run, connect to projections and consume the data in other apps and workloads.
-+ Use [Storage Browser](knowledge-store-view-storage-explorer.md) to verify object creation and content.
++ Use the Azure portal to verify object creation and content in Azure Storage.
+ Use [Power BI for data exploration](knowledge-store-connect-power-bi.md). This tool works best when the data is in Azure Table Storage. Within Power BI, you can manipulate data into new tables that are easier to query and analyze.
diff --git a/articles/search/knowledge-store-projections-examples.md b/articles/search/knowledge-store-projections-examples.md
index e19b588123b4d..c098894da71eb 100644
--- a/articles/search/knowledge-store-projections-examples.md
+++ b/articles/search/knowledge-store-projections-examples.md
@@ -268,7 +268,7 @@ You can process projections by following these steps:
1. [Monitor indexer execution](search-howto-monitor-indexers.md) to check progress and catch any errors.
-1. [Use Storage Browser](knowledge-store-view-storage-explorer.md) to verify object creation in Azure Storage.
+1. Use the Azure portal to verify object creation in Azure Storage.
1. If you are projecting tables, [import them into Power BI](knowledge-store-connect-power-bi.md) for table manipulation and visualization. In most cases, Power BI will auto-discover the relationships among tables.
diff --git a/articles/search/knowledge-store-view-storage-explorer.md b/articles/search/knowledge-store-view-storage-explorer.md
deleted file mode 100644
index ff5d0de649d82..0000000000000
--- a/articles/search/knowledge-store-view-storage-explorer.md
+++ /dev/null
@@ -1,43 +0,0 @@
----
-title: View a knowledge store
-titleSuffix: Azure Cognitive Search
-description: View a knowledge store using the Storage Browser in the Azure portal.
-
-manager: nitinme
-author: HeidiSteen
-ms.author: heidist
-ms.service: cognitive-search
-ms.topic: conceptual
-ms.date: 11/03/2021
----
-
-# View a knowledge store with Storage Browser
-
-A [knowledge store](knowledge-store-concept-intro.md) is content created by an Azure Cognitive Search skillset and saved to Azure Storage. In this article, you'll learn how to view the contents of a knowledge store using Storage Browser in the Azure portal.
-
-Start with an existing knowledge store created in the [Azure portal](knowledge-store-create-portal.md) or using the [REST APIs](knowledge-store-create-rest.md). Both the portal and REST walkthroughs create a knowledge store in Azure Table Storage.
-
-## Start Storage Browser
-
-1. In the Azure portal, [open the Storage account](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/) that you used to create the knowledge store.
-
-1. In the storage account's left navigation pane, select **Storage Browser**.
-
-## View and edit tables
-
-1. Expand **Tables** to find the table projections of your knowledge store. If you used the quickstart or REST article to create the knowledge store, the tables will contain content related to customer reviews of a European hotel.
-
- :::image type="content" source="media/knowledge-store-concept-intro/kstore-in-storage-explorer.png" alt-text="Screenshot of Storage Browser" border="true":::
-
-1. Select a table from the list to views it's contents.
-
-1. To rearrange column order or delete a column, select **Edit columns** at the top of the page.
-
-In Storage Browser, you can only query one table at time using [supported query syntax](/rest/api/storageservices/Querying-Tables-and-Entities). To query across tables, consider using Power BI instead.
-
-## Next steps
-
-Connect this knowledge store to Power BI to build visualizations that include multiple tables.
-
-> [!div class="nextstepaction"]
-> [Connect with Power BI](knowledge-store-connect-power-bi.md)
diff --git a/articles/search/media/knowledge-store-concept-intro/kstore-in-storage-explorer.png b/articles/search/media/knowledge-store-concept-intro/kstore-in-storage-explorer.png
index bb0631a9a1be8..59450011c9383 100644
Binary files a/articles/search/media/knowledge-store-concept-intro/kstore-in-storage-explorer.png and b/articles/search/media/knowledge-store-concept-intro/kstore-in-storage-explorer.png differ
diff --git a/articles/search/search-dotnet-sdk-migration-version-11.md b/articles/search/search-dotnet-sdk-migration-version-11.md
index a8e94048657d4..8384dcf3ab52c 100644
--- a/articles/search/search-dotnet-sdk-migration-version-11.md
+++ b/articles/search/search-dotnet-sdk-migration-version-11.md
@@ -9,7 +9,7 @@ ms.author: heidist
ms.service: cognitive-search
ms.devlang: csharp
ms.topic: conceptual
-ms.date: 04/25/2022
+ms.date: 05/31/2022
ms.custom: devx-track-csharp
---
@@ -35,6 +35,8 @@ The benefits of upgrading are summarized as follows:
+ Consistency with other Azure client libraries. **Azure.Search.Documents** takes a dependency on [Azure.Core](/dotnet/api/azure.core) and [System.Text.Json](/dotnet/api/system.text.json), and follows conventional approaches for common tasks such as client connections and authorization.
+**Microsoft.Azure.Search** is officially retired. If you're using an old version, we recommend upgrading to the next higher version, repeating the process in succession until you reach version 11 and **Azure.Search.Documents**. An incremental upgrade strategy makes it easier to find and fix blocking issues. See [Previous version docs](/previous-versions/azure/search/) for guidance.
+
## Package comparison
Version 11 consolidates and simplifies package management so that there are fewer to manage.
@@ -60,7 +62,7 @@ Where applicable, the following table maps the client libraries between the two
## Naming and other API differences
-Besides the client differences (noted previously and thus omitted here), multiple other APIs have been renamed and in some cases redesigned. Class name differences are summarized below. This list is not exhaustive but it does group API changes by task, which can be helpful for revisions on specific code blocks. For an itemized list of API updates, see the [change log](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/search/Azure.Search.Documents/CHANGELOG.md) for `Azure.Search.Documents` on GitHub.
+Besides the client differences (noted previously and thus omitted here), multiple other APIs have been renamed and in some cases redesigned. Class name differences are summarized below. This list isn't exhaustive but it does group API changes by task, which can be helpful for revisions on specific code blocks. For an itemized list of API updates, see the [change log](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/search/Azure.Search.Documents/CHANGELOG.md) for `Azure.Search.Documents` on GitHub.
### Authentication and encryption
@@ -148,7 +150,7 @@ SearchClient client = new SearchClient(endpoint, "mountains", credential, client
Response<SearchResults<Mountain>> results = client.Search<Mountain>("Rainier");
```
-If you are using Newtonsoft.Json for JSON serialization, you can pass in global naming policies using similar attributes, or by using properties on [JsonSerializerSettings](https://www.newtonsoft.com/json/help/html/T_Newtonsoft_Json_JsonSerializerSettings.htm). For an example equivalent to the one above, see the [Deserializing documents example](https://github.com/Azure/azure-sdk-for-net/blob/259df3985d9710507e2454e1591811f8b3a7ad5d/sdk/core/Microsoft.Azure.Core.Spatial.NewtonsoftJson/README.md) in the Newtonsoft.Json readme.
+If you're using Newtonsoft.Json for JSON serialization, you can pass in global naming policies using similar attributes, or by using properties on [JsonSerializerSettings](https://www.newtonsoft.com/json/help/html/T_Newtonsoft_Json_JsonSerializerSettings.htm). For an example equivalent to the one above, see the [Deserializing documents example](https://github.com/Azure/azure-sdk-for-net/blob/259df3985d9710507e2454e1591811f8b3a7ad5d/sdk/core/Microsoft.Azure.Core.Spatial.NewtonsoftJson/README.md) in the Newtonsoft.Json readme.
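+
+As a rough sketch (assuming the **Microsoft.Azure.Core.NewtonsoftJson** package, with camel casing as the example policy):
+
+```csharp
+using Azure.Core.Serialization;
+using Azure.Search.Documents;
+using Newtonsoft.Json;
+using Newtonsoft.Json.Serialization;
+
+// Global camel-case naming policy defined on JsonSerializerSettings.
+var settings = new JsonSerializerSettings
+{
+    ContractResolver = new DefaultContractResolver
+    {
+        NamingStrategy = new CamelCaseNamingStrategy()
+    }
+};
+
+// Plug the Newtonsoft serializer into the search client options.
+var clientOptions = new SearchClientOptions
+{
+    Serializer = new NewtonsoftJsonObjectSerializer(settings)
+};
+```
+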
@@ -222,7 +224,7 @@ The following steps get you started on a code migration by walking through the f
SearchIndexClient indexClient = new SearchIndexClient(endpoint, credential);
```
-1. Add new client references for indexer-related objects. If you are using indexers, datasources, or skillsets, change the client references to [SearchIndexerClient](/dotnet/api/azure.search.documents.indexes.searchindexerclient). This client is new in version 11 and has no antecedent.
+1. Add new client references for indexer-related objects. If you're using indexers, datasources, or skillsets, change the client references to [SearchIndexerClient](/dotnet/api/azure.search.documents.indexes.searchindexerclient). This client is new in version 11 and has no antecedent.
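+
+   A minimal sketch, with `endpoint` and `credential` assumed from the previous step and a hypothetical indexer name:
+
+   ```csharp
+   SearchIndexerClient indexerClient = new SearchIndexerClient(endpoint, credential);
+
+   // Retrieve an existing indexer definition by name.
+   SearchIndexer indexer = indexerClient.GetIndexer("my-indexer");
+   ```
+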
1. Revise collections and lists. In the new SDK, all lists are read-only to avoid downstream issues if the list happens to contain null values. The code change is to add items to a list. For example, instead of assigning strings to a Select property, you would add them as follows:
@@ -260,7 +262,7 @@ The following steps get you started on a code migration by walking through the f
Given the sweeping changes to libraries and APIs, an upgrade to version 11 is non-trivial and constitutes a breaking change in the sense that your code will no longer be backward compatible with version 10 and earlier. For a thorough review of the differences, see the [change log](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/search/Azure.Search.Documents/CHANGELOG.md) for `Azure.Search.Documents`.
-In terms of service version updates, where code changes in version 11 relate to existing functionality (and not just a refactoring of the APIs), you will find the following behavior changes:
+In terms of service version updates, where code changes in version 11 relate to existing functionality (and not just a refactoring of the APIs), you'll find the following behavior changes:
+ [BM25 ranking algorithm](index-ranking-similarity.md) replaces the previous ranking algorithm with newer technology. New services will use this algorithm automatically. For existing services, you must set parameters to use the new algorithm.
diff --git a/articles/search/search-faq-frequently-asked-questions.yml b/articles/search/search-faq-frequently-asked-questions.yml
index da75c542c1d0f..465bfb948e9e1 100644
--- a/articles/search/search-faq-frequently-asked-questions.yml
+++ b/articles/search/search-faq-frequently-asked-questions.yml
@@ -42,7 +42,7 @@ sections:
- question: |
If I migrate my search service to another subscription or resource group, should I expect any downtime?
answer: |
- As long as you follow the [checklist before moving resources](/azure-resource-manager/management/move-resource-group-and-subscription) and make sure each step is completed, there should not be any downtime.
+ As long as you follow the [checklist before moving resources](/azure/azure-resource-manager/management/move-resource-group-and-subscription) and make sure each step is completed, there should not be any downtime.
- name: Indexing
questions:
diff --git a/articles/search/search-what-is-azure-search.md b/articles/search/search-what-is-azure-search.md
index d2e62578ff890..7b884fc22b787 100644
--- a/articles/search/search-what-is-azure-search.md
+++ b/articles/search/search-what-is-azure-search.md
@@ -8,7 +8,7 @@ author: HeidiSteen
ms.author: heidist
ms.service: cognitive-search
ms.topic: overview
-ms.date: 01/03/2022
+ms.date: 05/31/2022
ms.custom: contperf-fy21q1
---
# What is Azure Cognitive Search?
@@ -21,7 +21,7 @@ When you create a search service, you'll work with the following capabilities:
+ A search engine for full text search with storage for user-owned content in a search index
+ Rich indexing, with [text analysis](search-analyzers.md) and [optional AI enrichment](cognitive-search-concept-intro.md) for advanced content extraction and transformation
-+ Rich query capabilities, including simple syntax, full Lucene syntax, and typeahead search
++ Rich query syntax that supplements free text search with filters, autocomplete, regex, geo-search, and more
+ Programmability through REST APIs and client libraries in Azure SDKs for .NET, Python, Java, and JavaScript
+ Azure integration at the data layer, machine learning layer, and AI (Cognitive Services)
@@ -35,15 +35,15 @@ Across the Azure platform, Cognitive Search can integrate with other Azure servi
On the search service itself, the two primary workloads are *indexing* and *querying*.
-+ [Indexing](search-what-is-an-index.md) is an intake process that loads content into to your search service and makes it searchable. Internally, inbound text is processed into tokens and stored in inverted indexes for fast scans. You can upload any text that is in the form of JSON documents.
++ [**Indexing**](search-what-is-an-index.md) is an intake process that loads content into your search service and makes it searchable. Internally, inbound text is processed into tokens and stored in inverted indexes for fast scans. You can upload any text that is in the form of JSON documents.
Additionally, if your content includes mixed files, you have the option of adding *AI enrichment* through [cognitive skills](cognitive-search-working-with-skillsets.md). AI enrichment can extract text embedded in application files, and also infer text and structure from non-text files by analyzing the content.
- The skills providing the analysis are predefined ones from Microsoft, or custom skills that you create. The subsequent analysis and transformations can result in new information and structures that did not previously exist, providing high utility for many search and knowledge mining scenarios.
+ The skills providing the analysis are predefined ones from Microsoft, or custom skills that you create. The subsequent analysis and transformations can result in new information and structures that didn't previously exist, providing high utility for many search and knowledge mining scenarios.
-+ [Querying](search-query-overview.md) can happen once an index is populated with searchable text, when your client app sends query requests to a search service and handles responses. All query execution is over a search index that you create, own, and store in your service. In your client app, the search experience is defined using APIs from Azure Cognitive Search, and can include relevance tuning, autocomplete, synonym matching, fuzzy matching, pattern matching, filter, and sort.
++ [**Querying**](search-query-overview.md) can happen once an index is populated with searchable text, when your client app sends query requests to a search service and handles responses. All query execution is over a search index that you create, own, and store in your service. In your client app, the search experience is defined using APIs from Azure Cognitive Search, and can include relevance tuning, autocomplete, synonym matching, fuzzy matching, pattern matching, filter, and sort.
-Functionality is exposed through a simple [REST API](/rest/api/searchservice/) or [.NET SDK](search-howto-dotnet-sdk.md) that masks the inherent complexity of information retrieval. You can also use the Azure portal for service administration and content management, with tools for prototyping and querying your indexes and skillsets. Because the service runs in the cloud, infrastructure and availability are managed by Microsoft.
+Functionality is exposed through a simple [REST API](/rest/api/searchservice/) and Azure SDKs like the [Azure SDK for .NET](search-howto-dotnet-sdk.md), which mask the inherent complexity of information retrieval. You can also use the Azure portal for service administration and content management, with tools for prototyping and querying your indexes and skillsets. Because the service runs in the cloud, infrastructure and availability are managed by Microsoft.
## Why use Cognitive Search?
@@ -63,7 +63,7 @@ For more information about specific functionality, see [Features of Azure Cognit
An end-to-end exploration of core search features can be accomplished in four steps:
-1. [**Decide on a tier**](search-sku-tier.md). One free search service is allowed per subscription. All quickstarts can be completed on the free tier. For more capacity and capabilities, you will need a [billable tier](https://azure.microsoft.com/pricing/details/search/).
+1. [**Decide on a tier**](search-sku-tier.md) and region. One free search service is allowed per subscription. All quickstarts can be completed on the free tier. For more capacity and capabilities, you'll need a [billable tier](https://azure.microsoft.com/pricing/details/search/).
1. [**Create a search service**](search-create-service-portal.md) in the Azure portal.
@@ -71,7 +71,7 @@ An end-to-end exploration of core search features can be accomplished in four st
1. [**Finish with Search Explorer**](search-explorer.md), using a portal client to query the search index you just created.
-Alternatively, you can create, load, and query a search index in atomically:
+Alternatively, you can create, load, and query a search index in atomic steps:
1. [**Create a search index**](search-what-is-an-index.md) using the portal, [REST API](/rest/api/searchservice/create-index), [.NET SDK](search-howto-dotnet-sdk.md), or another SDK. The index schema defines the structure of searchable content.
@@ -90,16 +90,16 @@ Customers often ask how Azure Cognitive Search compares with other search-relate
|-------------|-----------------|
| Microsoft Search | [Microsoft Search](/microsoftsearch/overview-microsoft-search) is for Microsoft 365 authenticated users who need to query over content in SharePoint. It's offered as a ready-to-use search experience, enabled and configured by administrators, with the ability to accept external content through connectors from Microsoft and other sources. If this describes your scenario, then Microsoft Search with Microsoft 365 is an attractive option to explore.
In contrast, Azure Cognitive Search executes queries over an index that you define, populated with data and documents you own, often from diverse sources. Azure Cognitive Search has crawler capabilities for some Azure data sources through [indexers](search-indexer-overview.md), but you can push any JSON document that conforms to your index schema into a single, consolidated searchable resource. You can also customize the indexing pipeline to include machine learning and lexical analyzers. Because Cognitive Search is built to be a plug-in component in larger solutions, you can integrate search into almost any app, on any platform.|
|Bing | [Bing Web Search API](../cognitive-services/bing-web-search/index.yml) searches the indexes on Bing.com for matching terms you submit. Indexes are built from HTML, XML, and other web content on public sites. Built on the same foundation, [Bing Custom Search](/azure/cognitive-services/bing-custom-search/) offers the same crawler technology for web content types, scoped to individual web sites.
In Cognitive Search, you can define and populate the index. You can use [indexers](search-indexer-overview.md) to crawl data on Azure data sources, or push any index-conforming JSON document to your search service. |
-|Database search | Many database platforms include a built-in search experience. SQL Server has [full text search](/sql/relational-databases/search/full-text-search). Cosmos DB and similar technologies have queryable indexes. When evaluating products that combine search and storage, it can be challenging to determine which way to go. Many solutions use both: DBMS for storage, and Azure Cognitive Search for specialized search features.
Compared to DBMS search, Azure Cognitive Search stores content from heterogeneous sources and offers specialized text processing features such as linguistic-aware text processing (stemming, lemmatization, word forms) in [56 languages](/rest/api/searchservice/language-support). It also supports autocorrection of misspelled words, [synonyms](/rest/api/searchservice/synonym-map-operations), [suggestions](/rest/api/searchservice/suggestions), [scoring controls](/rest/api/searchservice/add-scoring-profiles-to-a-search-index), [facets](search-faceted-navigation.md), and [custom tokenization](/rest/api/searchservice/custom-analyzers-in-azure-search). The [full text search engine](search-lucene-query-architecture.md) in Azure Cognitive Search is built on Apache Lucene, an industry standard in information retrieval. However, while Azure Cognitive Search persists data in the form of an inverted index, it is not a replacement for true data storage and we don't recommend using it in that capacity. For more information, see this [forum post](https://stackoverflow.com/questions/40101159/can-azure-search-be-used-as-a-primary-database-for-some-data).
Resource utilization is another inflection point in this category. Indexing and some query operations are often computationally intensive. Offloading search from the DBMS to a dedicated solution in the cloud preserves system resources for transaction processing. Furthermore, by externalizing search, you can easily adjust scale to match query volume.|
-|Dedicated search solution | Assuming you have decided on dedicated search with full spectrum functionality, a final categorical comparison is between on premises solutions or a cloud service. Many search technologies offer controls over indexing and query pipelines, access to richer query and filtering syntax, control over rank and relevance, and features for self-directed and intelligent search.
A cloud service is the right choice if you want a turn-key solution with minimal overhead and maintenance, and adjustable scale.
Within the cloud paradigm, several providers offer comparable baseline features, with full-text search, geospatial search, and the ability to handle a certain level of ambiguity in search inputs. Typically, it's a [specialized feature](search-features-list.md), or the ease and overall simplicity of APIs, tools, and management that determines the best fit. |
+|Database search | Many database platforms include a built-in search experience. SQL Server has [full text search](/sql/relational-databases/search/full-text-search). Cosmos DB and similar technologies have queryable indexes. When evaluating products that combine search and storage, it can be challenging to determine which way to go. Many solutions use both: DBMS for storage, and Azure Cognitive Search for specialized search features.
Compared to DBMS search, Azure Cognitive Search stores content from heterogeneous sources and offers specialized text processing features such as linguistic-aware text processing (stemming, lemmatization, word forms) in [56 languages](/rest/api/searchservice/language-support). It also supports autocorrection of misspelled words, [synonyms](/rest/api/searchservice/synonym-map-operations), [suggestions](/rest/api/searchservice/suggestions), [scoring controls](/rest/api/searchservice/add-scoring-profiles-to-a-search-index), [facets](search-faceted-navigation.md), and [custom tokenization](/rest/api/searchservice/custom-analyzers-in-azure-search). The [full text search engine](search-lucene-query-architecture.md) in Azure Cognitive Search is built on Apache Lucene, an industry standard in information retrieval. However, while Azure Cognitive Search persists data in the form of an inverted index, it isn't a replacement for true data storage and we don't recommend using it in that capacity. For more information, see this [forum post](https://stackoverflow.com/questions/40101159/can-azure-search-be-used-as-a-primary-database-for-some-data).
Resource utilization is another inflection point in this category. Indexing and some query operations are often computationally intensive. Offloading search from the DBMS to a dedicated solution in the cloud preserves system resources for transaction processing. Furthermore, by externalizing search, you can easily adjust scale to match query volume.|
+|Dedicated search solution | Assuming you've decided on dedicated search with full-spectrum functionality, a final categorical comparison is between on-premises solutions and a cloud service. Many search technologies offer controls over indexing and query pipelines, access to richer query and filtering syntax, control over rank and relevance, and features for self-directed and intelligent search.
A cloud service is the right choice if you want a turn-key solution with minimal overhead and maintenance, and adjustable scale.
Within the cloud paradigm, several providers offer comparable baseline features, with full-text search, geospatial search, and the ability to handle a certain level of ambiguity in search inputs. Typically, it's a [specialized feature](search-features-list.md), or the ease and overall simplicity of APIs, tools, and management that determines the best fit. |
Among cloud providers, Azure Cognitive Search is strongest for full text search workloads over content stores and databases on Azure, for apps that rely primarily on search for both information retrieval and content navigation.
Key strengths include:
+ Data integration (crawlers) at the indexing layer.
++ AI and machine learning integration with Azure Cognitive Services, useful if you need to make unsearchable content full-text searchable.
+ Security integration with Azure Active Directory for trusted connections, and with Azure Private Link integration to support private connections to a search index in no-internet scenarios.
-+ Machine learning and AI integration with Azure Cognitive Services, useful if you need to make unsearchable content types full text-searchable.
+ Linguistic and custom text analysis in 56 languages.
+ [Full search experience](search-features-list.md): rich query language, relevance tuning and semantic ranking, faceting, autocomplete queries and suggested results, and synonyms.
+ Azure scale, reliability, and world-class availability.
diff --git a/articles/security/fundamentals/antimalware.md b/articles/security/fundamentals/antimalware.md
index afcfa0fa943c6..928b7c693ceec 100644
--- a/articles/security/fundamentals/antimalware.md
+++ b/articles/security/fundamentals/antimalware.md
@@ -46,7 +46,7 @@ The Microsoft Antimalware Client and Service is installed by default in a disabl
When using Azure App Service on Windows, the underlying service that hosts the web app has Microsoft Antimalware enabled on it. This is used to protect Azure App Service infrastructure and does not run on customer content.
> [!NOTE]
-> Microsoft Defender Antivirus is the built-in Antimalware enabled in Windows Server 2016. The Microsoft Defender Antivirus Interface is also enabled by default on some Windows Server 2016 SKU's [see here for more information](/windows/threat-protection/windows-defender-antivirus/windows-defender-antivirus-on-windows-server-2016).
+> Microsoft Defender Antivirus is the built-in Antimalware enabled in Windows Server 2016. The Microsoft Defender Antivirus Interface is also enabled by default on some Windows Server 2016 SKUs; [see here for more information](/microsoft-365/security/defender-endpoint/microsoft-defender-antivirus-windows).
> The Azure VM Antimalware extension can still be added to a Windows Server 2016 Azure VM with Microsoft Defender Antivirus, but in this scenario the extension will apply any optional [configuration policies](https://gallery.technet.microsoft.com/Antimalware-For-Azure-5ce70efe) to be used by Microsoft Defender Antivirus; the extension will not deploy any additional antimalware services.
> You can read more about this update [here](/archive/blogs/azuresecurity/update-to-azure-antimalware-extension-for-cloud-services).
diff --git a/articles/sentinel/TOC.yml b/articles/sentinel/TOC.yml
index 98284e3b4d571..e6b6eae779dfb 100644
--- a/articles/sentinel/TOC.yml
+++ b/articles/sentinel/TOC.yml
@@ -154,7 +154,51 @@
- name: Manage workspace access
href: resource-context-rbac.md
- name: Migrate to Microsoft Sentinel
- href: migration.md
+ items:
+ - name: Plan and design your migration
+ items:
+ - name: Plan your migration
+ href: migration.md
+ - name: Track migration with a workbook
+ href: migration-track.md
+ - name: Migrate from ArcSight
+ items:
+ - name: Migrate detection rules
+ href: migration-arcsight-detection-rules.md
+ - name: Migrate SOAR automation
+ href: migration-arcsight-automation.md
+ - name: Export historical data
+ href: migration-arcsight-historical-data.md
+ - name: Migrate from Splunk
+ items:
+ - name: Migrate detection rules
+ href: migration-splunk-detection-rules.md
+ - name: Migrate SOAR automation
+ href: migration-splunk-automation.md
+ - name: Export historical data
+ href: migration-splunk-historical-data.md
+ - name: Migrate from QRadar
+ items:
+ - name: Migrate detection rules
+ href: migration-qradar-detection-rules.md
+ - name: Migrate SOAR automation
+ href: migration-qradar-automation.md
+ - name: Export historical data
+ href: migration-qradar-historical-data.md
+ - name: Ingest historical data
+ items:
+ - name: Select target platform
+ href: migration-ingestion-target-platform.md
+ - name: Select data ingestion tool
+ href: migration-ingestion-tool.md
+ - name: Ingest data
+ href: migration-export-ingest.md
+ - name: Convert dashboards to workbooks
+ href: migration-convert-dashboards.md
+ - name: Update SOC processes
+ href: migration-security-operations-center-processes.md
+ - name: Deploy side-by-side
+ href: deploy-side-by-side.md
- name: Understand MITRE ATT&CK coverage
href: mitre-coverage.md
- name: Manage Microsoft Sentinel content
@@ -298,7 +342,7 @@
href: bookmarks.md
- name: Hunt with livestream
href: livestream.md
- - name: Investigate and respond
+ - name: Investigate incidents
items:
- name: Investigate incidents
href: investigate-cases.md
@@ -312,6 +356,10 @@
href: customize-entity-activities.md
- name: Collaborate in Microsoft Teams
href: collaborate-in-microsoft-teams.md
+ - name: Automate responses
+ items:
+ - name: Create automation rules
+ href: create-manage-use-automation-rules.md
- name: Authenticate playbooks to Microsoft Sentinel
href: authenticate-playbooks-to-sentinel.md
- name: Use triggers and actions in playbooks
diff --git a/articles/sentinel/automate-incident-handling-with-automation-rules.md b/articles/sentinel/automate-incident-handling-with-automation-rules.md
index 6713fd7da6e99..3461d8d2c05c6 100644
--- a/articles/sentinel/automate-incident-handling-with-automation-rules.md
+++ b/articles/sentinel/automate-incident-handling-with-automation-rules.md
@@ -16,21 +16,91 @@ This article explains what Microsoft Sentinel automation rules are, and how to u
## What are automation rules?
-Automation rules are a way to centrally manage the automation of incident handling, allowing you to perform simple automation tasks without using playbooks. For example, automation rules allow you to automatically assign incidents to the proper personnel, tag incidents to classify them, and change the status of incidents and close them. Automation rules can also automate responses for multiple analytics rules at once, control the order of actions that are executed, and run playbooks for those cases where more complex automation tasks are necessary. In short, automation rules streamline the use of automation in Microsoft Sentinel, enabling you to simplify complex workflows for your incident orchestration processes.
+Automation rules are a way to centrally manage the automation of incident handling, allowing you to perform simple automation tasks without using playbooks.
+
+For example, automation rules allow you to automatically:
+- Suppress noisy incidents.
+- Triage new incidents by changing their status from New to Active and assigning an owner.
+- Tag incidents to classify them.
+- Escalate an incident by assigning a new owner.
+- Close resolved incidents, specifying a reason and adding comments.
+
+Automation rules can also:
+- Automate responses for multiple analytics rules at once.
+- Control the order of actions that are executed.
+- Inspect the incident's contents (alerts, entities, and other properties) and take further action by calling a playbook.
+
+In short, automation rules streamline the use of automation in Microsoft Sentinel, enabling you to simplify complex workflows for your incident orchestration processes.
## Components
Automation rules are made up of several components:
-### Trigger
+- **[Triggers](#triggers)** that define what kind of incident event will cause the rule to run, subject to...
+- **[Conditions](#conditions)** that will determine the exact circumstances under which the rule will run and perform...
+- **[Actions](#actions)** to change the incident in some way or call a [playbook](automate-responses-with-playbooks.md).
+
+### Triggers
-Automation rules are triggered by the creation of an incident.
+Automation rules are triggered **when an incident is created or updated** (the update trigger is now in **Preview**). Recall that incidents are created from alerts by analytics rules, of which there are several types, as explained in [Detect threats with built-in analytics rules in Microsoft Sentinel](detect-threats-built-in.md).
-To review – incidents are created from alerts by analytics rules, of which there are several types, as explained in the tutorial [Detect threats with built-in analytics rules in Microsoft Sentinel](detect-threats-built-in.md).
+The following table shows the incident creation and update events that cause an automation rule to run.
+
+| Trigger type | Events that cause the rule to run |
+| --------- | ------------ |
+| **When incident is created** | - A new incident is created by an analytics rule. <br>- An incident is ingested from Microsoft 365 Defender. <br>- A new incident is created manually. |
+| **When incident is updated** (Preview) | - An incident's status is changed (closed/reopened/triaged). <br>- An incident's owner is assigned or changed. <br>- An incident's severity is raised or lowered. <br>- Alerts are added to an incident. <br>- Comments, tags, or tactics are added to an incident. |
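+
+Each of these events is recorded as a row in the *SecurityIncident* table in your Log Analytics workspace, so you can get a feel for how often each trigger type would fire. The following query is only a sketch: treating rows whose **CreatedTime** equals their **LastModifiedTime** as creation events is an approximation, not a documented contract.
+
+```kusto
+// Sketch: estimate how many incident creation vs. update events occur
+// per day. The CreatedTime == LastModifiedTime heuristic for spotting
+// creation rows is an assumption, not a documented contract.
+SecurityIncident
+| extend EventKind = iff(CreatedTime == LastModifiedTime, "Created", "Updated")
+| summarize Events = count() by EventKind, bin(TimeGenerated, 1d)
+```
+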
### Conditions
-Complex sets of conditions can be defined to govern when actions (see below) should run. These conditions are typically based on the states or values of attributes of incidents and their entities, and they can include `AND`/`OR`/`NOT`/`CONTAINS` operators.
+Complex sets of conditions can be defined to govern when actions (see below) should run. These conditions include the event that triggers the rule (incident created or updated), the states or values of the incident's properties and [entity properties](entities-reference.md), and also the analytics rule or rules that generated the incident.
+
+When an automation rule is triggered, it checks the triggering incident against the conditions defined in the rule. The property-based conditions are evaluated according to **the current state** of the property at the moment the evaluation occurs, or according to **changes in the state** of the property (see below for details). Since a single incident creation or update event could trigger several automation rules, the **order** in which they run (see below) makes a difference in determining the outcome of the conditions' evaluation. The **actions** defined in the rule will run only if all the conditions are satisfied.
+
+#### Incident create trigger
+
+For rules defined using the trigger **When an incident is created**, you can define conditions that check the **current state** of the values of a given list of incident properties, using one or more of the following operators:
+
+An incident property's value
+- **equals** or **does not equal** the value defined in the condition.
+- **contains** or **does not contain** the value defined in the condition.
+- **starts with** or **does not start with** the value defined in the condition.
+- **ends with** or **does not end with** the value defined in the condition.
+
+The **current state** in this context refers to the moment the condition is evaluated - that is, the moment the automation rule runs. If more than one automation rule is defined to run in response to the creation of this incident, then changes made to the incident by an earlier-run automation rule are considered the current state for later-run rules.
+
+#### Incident update trigger
+
+The conditions evaluated in rules defined using the trigger **When an incident is updated** include all of those listed for the incident creation trigger. But the update trigger includes more properties that can be evaluated.
+
+One of these properties is **Updated by**. This property lets you track the type of source that made the change in the incident. You can create a condition evaluating whether the incident was updated by one of the following:
+
+- an application
+- a user
+- an alert grouping (that added alerts to the incident)
+- a playbook
+- an automation rule
+- Microsoft 365 Defender
+
+Using this condition, for example, you can instruct this automation rule to run on any change made to an incident, except if it was made by another automation rule.
+
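+If you're not sure which of these sources have been updating your incidents, the **ModifiedBy** column of the *SecurityIncident* table records the source of each change. A minimal sketch (excluding rows where **CreatedTime** equals **LastModifiedTime** is an approximate way to keep only update events):
+
+```kusto
+// Sketch: see which sources have been updating incidents recently.
+// The CreatedTime != LastModifiedTime filter approximates "update
+// events only" and is an assumption, not a documented contract.
+SecurityIncident
+| where TimeGenerated > ago(14d)
+| where CreatedTime != LastModifiedTime
+| summarize Updates = count() by ModifiedBy
+| order by Updates desc
+```
+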
+In addition, the update trigger uses other operators that check **state changes** in the values of incident properties as well as their current state. A **state change** condition is satisfied if:
+
+An incident property's value was
+- **changed** (regardless of the actual value before or after).
+- **changed from** the value defined in the condition.
+- **changed to** the value defined in the condition.
+- **added** to (this applies to properties with a list of values).
+
+> [!NOTE]
+> - An automation rule based on the update trigger can run on an incident that was updated by another automation rule based on the incident creation trigger.
+>
+> - Also, if an incident is updated by an automation rule that ran on the incident's creation, the incident can be evaluated by *both* a subsequent *incident-creation* automation rule *and* an *incident-update* automation rule, both of which will run if the incident satisfies the rules' conditions.
+>
+> - If an incident triggers both create-trigger and update-trigger automation rules, the create-trigger rules will run first, according to their **[Order](#order)** numbers, and then the update-trigger rules will run, according to *their* **Order** numbers.
+
### Actions
@@ -64,7 +134,7 @@ For example, if "First Automation Rule" changed an incident's severity from Medi
### Incident-triggered automation
-Until now, only alerts could trigger an automated response, through the use of playbooks. With automation rules, incidents can now trigger automated response chains, which can include new incident-triggered playbooks ([special permissions are required](#permissions-for-automation-rules-to-run-playbooks)), when an incident is created.
+Before automation rules existed, only alerts could trigger an automated response, through the use of playbooks. With automation rules, incidents can now trigger automated response chains, which can include new incident-triggered playbooks ([special permissions are required](#permissions-for-automation-rules-to-run-playbooks)), when an incident is created.
### Trigger playbooks for Microsoft providers
@@ -72,13 +142,13 @@ Automation rules provide a way to automate the handling of Microsoft security al
Microsoft security alerts include the following:
-- Microsoft Defender for Cloud Apps
+- Microsoft Defender for Cloud Apps (formerly Microsoft Cloud App Security)
- Azure AD Identity Protection
-- Microsoft Defender for Cloud
+- Microsoft Defender for Cloud (formerly Azure Defender or Azure Security Center)
- Defender for IoT (formerly Azure Security Center for IoT)
-- Microsoft Defender for Office 365 (formerly Office 365 ATP)
-- Microsoft Defender for Endpoint (formerly MDATP)
-- Microsoft Defender for Identity (formerly Azure ATP)
+- Microsoft Defender for Office 365
+- Microsoft Defender for Endpoint
+- Microsoft Defender for Identity
### Multiple sequenced playbooks/actions in a single rule
@@ -104,6 +174,22 @@ You can add expiration dates for your automation rules. There may be cases other
You can automatically add free-text tags to incidents to group or classify them according to any criteria of your choosing.
+## Use cases added by update trigger
+
+Now that changes made to incidents can trigger automation rules, more scenarios are open to automation.
+
+### Extend automation when incident evolves
+
+You can use the update trigger to apply many of the above use cases to incidents as their investigation progresses and analysts add alerts, comments, and tags. You can also control how alerts are grouped into incidents.
+
+### Update orchestration and notification
+
+Notify your various teams and other personnel when changes are made to incidents, so they won't miss any critical updates. Escalate incidents by assigning them to new owners and informing the new owners of their assignments. Control when and how incidents are reopened.
+
+### Maintain synchronization with external systems
+
+If you've used playbooks to create tickets in external systems when incidents are created, you can use an update-trigger automation rule to call a playbook that will update those tickets.
+
## Automation rules execution
Automation rules are run sequentially, according to the order you determine. Each automation rule is executed after the previous one has finished its run. Within an automation rule, all actions are run sequentially in the order in which they are defined.
@@ -115,7 +201,6 @@ Playbook actions within an automation rule may be treated differently under some
| Less than a second | Immediately after playbook is completed |
| Less than two minutes | Up to two minutes after playbook began running, but no more than 10 seconds after the playbook is completed |
| More than two minutes | Two minutes after playbook began running, regardless of whether or not it was completed |
-|
### Permissions for automation rules to run playbooks
@@ -149,42 +234,33 @@ In the specific case of a Managed Security Service Provider (MSSP), where a serv
## Creating and managing automation rules
-You can create and manage automation rules from different points in the Microsoft Sentinel experience, depending on your particular need and use case.
+You can [create and manage automation rules](create-manage-use-automation-rules.md) from different points in the Microsoft Sentinel experience, depending on your particular need and use case.
- **Automation blade**
- Automation rules can be centrally managed in the new **Automation** blade (which replaces the **Playbooks** blade), under the **Automation rules** tab. (You can also now manage playbooks in this blade, under the **Playbooks** tab.) From there, you can create new automation rules and edit the existing ones. You can also drag automation rules to change the order of execution, and enable or disable them.
+ Automation rules can be centrally managed in the **Automation** blade, under the **Automation rules** tab. From there, you can create new automation rules and edit the existing ones. You can also drag automation rules to change the order of execution, and enable or disable them.
In the **Automation** blade, you see all the rules that are defined on the workspace, along with their status (Enabled/Disabled) and which analytics rules they are applied to.
- When you need an automation rule that will apply to many analytics rules, create it directly in the **Automation** blade. From the top menu, click **Create** and **Add new rule**, which opens the **Create new automation rule** panel. From here you have complete flexibility in configuring the rule: you can apply it to any analytics rules (including future ones) and define the widest range of conditions and actions.
+ When you need an automation rule that will apply to many analytics rules, create it directly in the **Automation** blade.
- **Analytics rule wizard**
- In the **Automated response** tab of the analytics rule wizard, you can see, manage, and create automation rules that apply to the particular analytics rule being created or edited in the wizard.
-
- When you click **Create** and one of the rule types (**Scheduled query rule** or **Microsoft incident creation rule**) from the top menu in the **Analytics** blade, or if you select an existing analytics rule and click **Edit**, you'll open the rule wizard. When you select the **Automated response** tab, you will see a section called **Incident automation**, under which the automation rules that currently apply to this rule will be displayed. You can select an existing automation rule to edit, or click **Add new** to create a new one.
+ In the **Automated response** tab of the analytics rule wizard, under **Incident automation**, you can view, edit, and create automation rules that apply to the particular analytics rule being created or edited in the wizard.
- You'll notice that when you create the automation rule from here, the **Create new automation rule** panel shows the **analytics rule** condition as unavailable, because this rule is already set to apply only to the analytics rule you're editing in the wizard. All the other configuration options are still available to you.
+ You'll notice that when you create an automation rule from here, the **Create new automation rule** panel shows the **analytics rule** condition as unavailable, because this rule is already set to apply only to the analytics rule you're editing in the wizard. All the other configuration options are still available to you.
- **Incidents blade**
- You can also create an automation rule from the **Incidents** blade, in order to respond to a single, recurring incident. This is useful when creating a [suppression rule](#incident-suppression) for automatically closing "noisy" incidents. Select an incident from the queue and click **Create automation rule** from the top menu.
-
- You'll notice that the **Create new automation rule** panel has populated all the fields with values from the incident. It names the rule the same name as the incident, applies it to the analytics rule that generated the incident, and uses all the available entities in the incident as conditions of the rule. It also suggests a suppression (closing) action by default, and suggests an expiration date for the rule. You can add or remove conditions and actions, and change the expiration date, as you wish.
-
-## Auditing automation rule activity
+ You can also create an automation rule from the **Incidents** blade, in order to respond to a single, recurring incident. This is useful when creating a [suppression rule](#incident-suppression) for [automatically closing "noisy" incidents](false-positives.md).
-You may be interested in knowing what happened to a given incident, and what a certain automation rule may or may not have done to it. You have a full record of incident chronicles available to you in the *SecurityIncident* table in the **Logs** blade. Use the following query to see all your automation rule activity:
+ You'll notice that when you create an automation rule from here, the **Create new automation rule** panel has populated all the fields with values from the incident. It names the rule the same name as the incident, applies it to the analytics rule that generated the incident, and uses all the available entities in the incident as conditions of the rule. It also suggests a suppression (closing) action by default, and suggests an expiration date for the rule. You can add or remove conditions and actions, and change the expiration date, as you wish.
-```kusto
-SecurityIncident
-| where ModifiedBy contains "Automation"
-```
## Next steps
-In this document, you learned how to use automation rules to manage your Microsoft Sentinel incidents queue and implement some basic incident-handling automation.
+In this document, you learned how automation rules can help you manage your Microsoft Sentinel incidents queue and implement some basic incident-handling automation.
+- [Create and use Microsoft Sentinel automation rules to manage incidents](create-manage-use-automation-rules.md).
- To learn more about advanced automation options, see [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md).
-- For help in implementing automation rules and playbooks, see [Tutorial: Use playbooks to automate threat responses in Microsoft Sentinel](tutorial-respond-threats-playbook.md).
+- For help in implementing playbooks, see [Tutorial: Use playbooks to automate threat responses in Microsoft Sentinel](tutorial-respond-threats-playbook.md).
diff --git a/articles/sentinel/automation.md b/articles/sentinel/automation.md
index 78bf811335fcd..b9df503923929 100644
--- a/articles/sentinel/automation.md
+++ b/articles/sentinel/automation.md
@@ -26,7 +26,7 @@ Microsoft Sentinel, in addition to being a Security Information and Event Manage
## Automation rules
-Automation rules (now generally available!) allow users to centrally manage the automation of incident handling. Besides letting you assign playbooks to incidents (not just to alerts as before), automation rules also allow you to automate responses for multiple analytics rules at once, automatically tag, assign, or close incidents without the need for playbooks, and control the order of actions that are executed. Automation rules will streamline automation use in Microsoft Sentinel and will enable you to simplify complex workflows for your incident orchestration processes.
+Automation rules (now generally available!) allow users to centrally manage the automation of incident handling. Besides letting you assign playbooks to incidents (not just to alerts as before), automation rules also allow you to automate responses for multiple analytics rules at once, automatically tag, assign, or close incidents without the need for playbooks, and control the order of actions that are executed. Automation rules also allow you to apply automations when an incident is **updated** (now in **Preview**), as well as when it's created. This new capability will further streamline automation use in Microsoft Sentinel and will enable you to simplify complex workflows for your incident orchestration processes.
Learn more with this [complete explanation of automation rules](automate-incident-handling-with-automation-rules.md).
@@ -44,4 +44,5 @@ In this document, you learned how Microsoft Sentinel uses automation to help you
- To learn about automation of incident handling, see [Automate incident handling in Microsoft Sentinel](automate-incident-handling-with-automation-rules.md).
- To learn more about advanced automation options, see [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md).
-- For help in implementing automation rules and playbooks, see [Tutorial: Use playbooks to automate threat responses in Microsoft Sentinel](tutorial-respond-threats-playbook.md).
+- To get started creating automation rules, see [Create and use Microsoft Sentinel automation rules to manage incidents](create-manage-use-automation-rules.md).
+- For help in implementing advanced automation with playbooks, see [Tutorial: Use playbooks to automate threat responses in Microsoft Sentinel](tutorial-respond-threats-playbook.md).
diff --git a/articles/sentinel/ci-cd.md b/articles/sentinel/ci-cd.md
index 3e393e34b0f72..74a6ef0d68d0e 100644
--- a/articles/sentinel/ci-cd.md
+++ b/articles/sentinel/ci-cd.md
@@ -125,9 +125,9 @@ After the deployment is complete:
- The content stored in your repository is displayed in your Microsoft Sentinel workspace, in the relevant Microsoft Sentinel page.
-- The connection details on the **Repositories** page are updated with the link to the connection's deployment logs. For example:
+- The connection details on the **Repositories** page are updated with the link to the connection's deployment logs and the status and time of the last deployment. For example:
- :::image type="content" source="media/ci-cd/deployment-logs-link.png" alt-text="Screenshot of a GitHub repository connection's deployment logs.":::
+ :::image type="content" source="media/ci-cd/deployment-logs-status.png" alt-text="Screenshot of a GitHub repository connection's deployment logs.":::
### Improve deployment performance with smart deployments
diff --git a/articles/sentinel/create-manage-use-automation-rules.md b/articles/sentinel/create-manage-use-automation-rules.md
new file mode 100644
index 0000000000000..ddf629365ea98
--- /dev/null
+++ b/articles/sentinel/create-manage-use-automation-rules.md
@@ -0,0 +1,158 @@
+---
+title: Create and use Microsoft Sentinel automation rules to manage incidents | Microsoft Docs
+description: This article explains how to create and use automation rules in Microsoft Sentinel to manage and handle incidents, in order to maximize your SOC's efficiency and effectiveness in response to security threats.
+author: yelevin
+ms.topic: how-to
+ms.date: 05/23/2022
+ms.author: yelevin
+---
+
+# Create and use Microsoft Sentinel automation rules to manage incidents
+
+[!INCLUDE [Banner for top of topics](./includes/banner.md)]
+
+This article explains how to create and use automation rules in Microsoft Sentinel to manage and handle incidents, in order to maximize your SOC's efficiency and effectiveness in response to security threats.
+
+In this article, you'll learn how to define the triggers and conditions that determine when your automation rule runs, the various actions you can have the rule perform, and the rule's other settings and features.
+
+## Design your automation rule
+
+### Determine the scope
+
+The first step in designing and defining your automation rule is figuring out which incidents you want it to apply to. This determination will directly impact how you create the rule.
+
+You also want to determine your use case. What are you trying to accomplish with this automation? Consider the following options:
+
+- Suppress noisy incidents (see [this article on handling false positives](false-positives.md#add-exceptions-by-using-automation-rules) instead).
+- Triage new incidents by changing their status from New to Active and assigning an owner.
+- Tag incidents to classify them.
+- Escalate an incident by assigning a new owner.
+- Close resolved incidents, specifying a reason and adding comments.
+- Analyze the incident's contents (alerts, entities, and other properties) and take further action by calling a playbook.
+
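+To help figure out which incidents your rule should apply to, it can be useful to see which incident titles recur most often in your workspace. For example, the following query sketch, which assumes the standard *SecurityIncident* table, surfaces candidates for suppression or triage rules:
+
+```kusto
+// Sketch: list the ten most frequently recurring incident titles
+// over the last 30 days - likely candidates for automation rules.
+SecurityIncident
+| where TimeGenerated > ago(30d)
+| summarize arg_max(TimeGenerated, *) by IncidentNumber  // latest row per incident
+| summarize IncidentCount = count() by Title
+| top 10 by IncidentCount
+```
+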
+### Determine the trigger
+
+Do you want this automation to be activated when new incidents are created? Or any time an incident gets updated?
+
+Automation rules are triggered **when an incident is created or updated** (the update trigger is now in **Preview**). Recall that incidents are created from alerts by analytics rules, of which there are several types, as explained in [Detect threats with built-in analytics rules in Microsoft Sentinel](detect-threats-built-in.md).
+
+The following table shows the incident creation and update events that cause an automation rule to run.
+
+| Trigger type | Events that cause the rule to run |
+| --------- | ------------ |
+| **When incident is created** | - A new incident is created by an analytics rule. <br>- An incident is ingested from Microsoft 365 Defender. <br>- A new incident is created manually. |
+| **When incident is updated** (Preview) | - An incident's status is changed (closed/reopened/triaged). <br>- An incident's owner is assigned or changed. <br>- An incident's severity is raised or lowered. <br>- Alerts are added to an incident. <br>- Comments, tags, or tactics are added to an incident. |
+
+## Create your automation rule
+
+Most of the following instructions apply to all use cases for which you'll create automation rules.
+
+- For the use case of suppressing noisy incidents, see [this article on handling false positives](false-positives.md#add-exceptions-by-using-automation-rules).
+- For creating an automation rule that will apply to a single specific analytics rule, see [this article on configuring automated response in analytics rules](detect-threats-custom.md#set-automated-responses-and-create-the-rule).
+
+1. From the **Automation** blade in the Microsoft Sentinel navigation menu, select **Create** from the top menu and choose **Automation rule**.
+
+ :::image type="content" source="./media/create-manage-use-automation-rules/add-rule-automation.png" alt-text="Screenshot of creating a new automation rule in the Automation blade." lightbox="./media/create-manage-use-automation-rules/add-rule-automation.png":::
+
+1. The **Create new automation rule** panel opens. Enter a name for your rule.
+
+ :::image type="content" source="media/create-manage-use-automation-rules/create-automation-rule.png" alt-text="Screenshot of Create new automation rule wizard.":::
+
+1. If you want the automation rule to take effect only on certain analytics rules, specify which ones by modifying the **If Analytics rule name** condition.
+
+### Choose your trigger
+
+From the **Trigger** drop-down, select **When incident is created** or **When incident is updated (Preview)** according to what you decided when designing your rule.
+
+:::image type="content" source="media/create-manage-use-automation-rules/select-trigger.png" alt-text="Screenshot of selecting the incident create or incident update trigger.":::
+
+### Add conditions
+
+Add any other conditions you want this automation rule's activation to depend on. Select **+ Add condition** and choose conditions from the drop-down list. The list of conditions is populated by incident property and [entity property](entities-reference.md) fields.
+
+1. Select a property from the first drop-down box on the left. You can begin typing any part of a property name in the search box to dynamically filter the list, so you can find what you're looking for quickly.
+ :::image type="content" source="media/create-manage-use-automation-rules/filter-list.png" alt-text="Screenshot of typing in a search box to filter the list of choices.":::
+
+1. Select an operator from the next drop-down box to the right.
+ :::image type="content" source="media/create-manage-use-automation-rules/select-operator.png" alt-text="Screenshot of selecting a condition operator for automation rules.":::
+
+ The list of operators you can choose from varies according to the selected trigger and property. Here's a summary of what's available:
+
+ #### Conditions available with the create trigger
+
+ | Property | Operator set |
+ | -------- | -------- |
+ | - Title <br>- Description <br>- Tag <br>- All listed entity properties | - Equals/Does not equal <br>- Contains/Does not contain <br>- Starts with/Does not start with <br>- Ends with/Does not end with |
+ | - Severity <br>- Status <br>- Incident provider | - Equals/Does not equal |
+ | - Tactics <br>- Alert product names | - Contains/Does not contain |
+
+ #### Conditions available with the update trigger
+
+ | Property | Operator set |
+ | -------- | -------- |
+ | - Title <br>- Description <br>- Tag <br>- All listed entity properties | - Equals/Does not equal <br>- Contains/Does not contain <br>- Starts with/Does not start with <br>- Ends with/Does not end with |
+ | - Tag (in addition to above) <br>- Alerts <br>- Comments | - Added |
+ | - Severity <br>- Status | - Equals/Does not equal <br>- Changed <br>- Changed from <br>- Changed to |
+ | - Owner | - Changed |
+ | - Incident provider <br>- Updated by | - Equals/Does not equal |
+ | - Tactics | - Contains/Does not contain <br>- Added |
+ | - Alert product names | - Contains/Does not contain |
+
+1. Enter a value in the text box on the right. Depending on the property you chose, this might instead be a drop-down list from which you select one or more values. You might also be able to add several values by selecting the icon to the right of the text box (highlighted by the red arrow below).
+
+ :::image type="content" source="media/create-manage-use-automation-rules/add-values-to-condition.png" alt-text="Screenshot of adding values to your condition in automation rules.":::
+
+### Add actions
+
+Choose the actions you want this automation rule to take. Available actions include **Assign owner**, **Change status**, **Change severity**, **Add tags**, and **Run playbook**. You can add as many actions as you like.
+
+:::image type="content" source="media/create-manage-use-automation-rules/select-action.png" alt-text="Screenshot of list of actions to select in automation rule.":::
+
+If you add a **Run playbook** action, you will be prompted to choose from the drop-down list of available playbooks.
+
+- Only playbooks that start with the **incident trigger** can be run from automation rules, so only they will appear in the list.
+
+- Microsoft Sentinel must be granted explicit permissions in order to run playbooks based on the incident trigger. If a playbook appears "grayed out" in the drop-down list, it means Microsoft Sentinel doesn't have permissions to that playbook's resource group. Click the **Manage playbook permissions** link to assign permissions.
+
+ In the **Manage permissions** panel that opens up, mark the check boxes of the resource groups containing the playbooks you want to run, and click **Apply**.
+ :::image type="content" source="./media/tutorial-respond-threats-playbook/manage-permissions.png" alt-text="Manage permissions":::
+
+ You yourself must have **Owner** permissions on any resource group to which you want to grant Microsoft Sentinel permissions, and you must have the **Logic App Contributor** role on any resource group containing playbooks you want to run.
+
+- If you don't yet have a playbook that will take the action you have in mind, [create a new playbook](tutorial-respond-threats-playbook.md). You will have to exit the automation rule creation process and restart it after you have created your playbook.
+
+### Finish creating your rule
+
+1. Set an **expiration date** for your automation rule if you want it to have one.
+
+1. Enter a number under **Order** to determine where in the sequence of automation rules this rule will run.
+
+1. Click **Apply**. You're done!
+
+## Audit automation rule activity
+
+Find out what automation rules may have done to a given incident. A full record of each incident's history is available to you in the *SecurityIncident* table in the **Logs** blade. Use the following query to see all your automation rule activity:
+
+```kusto
+SecurityIncident
+| where ModifiedBy contains "Automation"
+```
+
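+To trace what happened to one specific incident, narrow the same table down to a single incident number. For example (the incident number shown is a hypothetical placeholder):
+
+```kusto
+// Sketch: trace every recorded modification of a single incident.
+// Replace 1234 with the incident number you're investigating.
+SecurityIncident
+| where IncidentNumber == 1234
+| sort by TimeGenerated asc
+| project TimeGenerated, ModifiedBy, Status, Severity, Owner
+```
+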
+## Automation rules execution
+
+Automation rules are run sequentially, according to the order you determine. Each automation rule is executed after the previous one has finished its run. Within an automation rule, all actions are run sequentially in the order in which they are defined.
+
+Playbook actions within an automation rule may be treated differently under some circumstances, according to the following criteria:
+
+| Playbook run time | Automation rule advances to the next action... |
+| ----------------- | --------------------------------------------------- |
+| Less than a second | Immediately after playbook is completed |
+| Less than two minutes | Up to two minutes after playbook began running, but no more than 10 seconds after the playbook is completed |
+| More than two minutes | Two minutes after playbook began running, regardless of whether or not it was completed |
+
+## Next steps
+
+In this document, you learned how to use automation rules to manage your Microsoft Sentinel incidents queue and implement some basic incident-handling automation.
+
+- To learn more about advanced automation options, see [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md).
+- For help in implementing automation rules and playbooks, see [Tutorial: Use playbooks to automate threat responses in Microsoft Sentinel](tutorial-respond-threats-playbook.md).
diff --git a/articles/sentinel/deploy-side-by-side.md b/articles/sentinel/deploy-side-by-side.md
new file mode 100644
index 0000000000000..1f8a911047a44
--- /dev/null
+++ b/articles/sentinel/deploy-side-by-side.md
@@ -0,0 +1,109 @@
+---
+title: Deploy Microsoft Sentinel side-by-side to an existing SIEM.
+description: Learn how to deploy Microsoft Sentinel side-by-side to an existing SIEM.
+author: limwainstein
+ms.topic: conceptual
+ms.date: 05/30/2022
+ms.author: lwainstein
+---
+
+# Deploy Microsoft Sentinel side-by-side to an existing SIEM
+
+Your security operations center (SOC) team uses centralized security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solutions to protect your increasingly decentralized digital estate.
+
+This article describes how to deploy Microsoft Sentinel in a side-by-side configuration together with your existing SIEM.
+
+## Select a side-by-side approach and method
+
+Use a side-by-side architecture either as a short-term, transitional phase that leads to a completely cloud-hosted SIEM, or as a medium- to long-term operational model, depending on the SIEM needs of your organization.
+
+For example, while the recommendation is to use a side-by-side architecture just long enough to complete a migration to Microsoft Sentinel, your organization might want to stay with your side-by-side configuration for longer, such as if you aren't ready to move away from your legacy SIEM. Typically, organizations who use a long-term, side-by-side configuration use Microsoft Sentinel to analyze only their cloud data.
+
+Consider the pros and cons for each approach when deciding which one to use.
+
+> [!NOTE]
+> Many organizations avoid running multiple on-premises analytics solutions because of cost and complexity.
+>
+> Microsoft Sentinel provides [pay-as-you-go pricing](billing.md) and flexible infrastructure, giving SOC teams time to adapt to the change. Deploy and test your content at a pace that works best for your organization, and learn about how to [fully migrate to Microsoft Sentinel](migration.md).
+
+### Short-term approach
+
+|**Pros** |**Cons** |
+|---------|---------|
+|• Gives SOC staff time to adapt to new processes as you deploy workloads and analytics. <br>• Gains deep correlation across all data sources for hunting scenarios. <br>• Eliminates having to do analytics between SIEMs, create forwarding rules, and close investigations in two places. <br>• Enables your SOC team to quickly downgrade legacy SIEM solutions, eliminating infrastructure and licensing costs. |• Can require a steep learning curve for SOC staff. |
+
+### Medium- to long-term approach
+
+|**Pros** |**Cons** |
+|---------|---------|
+|• Lets you use key Microsoft Sentinel benefits, like AI, ML, and investigation capabilities, without moving completely away from your legacy SIEM. <br>• Saves money compared to your legacy SIEM, by analyzing cloud or Microsoft data in Microsoft Sentinel. |• Increases complexity by separating analytics across different databases. <br>• Splits case management and investigations for multi-environment incidents. <br>• Incurs greater staff and infrastructure costs. <br>• Requires SOC staff to be knowledgeable about two different SIEM solutions. |
+
+### Send alerts from a legacy SIEM to Microsoft Sentinel (Recommended)
+
+Send alerts, or indicators of anomalous activity, from your legacy SIEM to Microsoft Sentinel.
+
+- Ingest and analyze cloud data in Microsoft Sentinel.
+- Use your legacy SIEM to analyze on-premises data and generate alerts.
+- Forward the alerts from your on-premises SIEM into Microsoft Sentinel to establish a single interface.
+
+For example, forward alerts using [Logstash](connect-logstash.md), [APIs](/rest/api/securityinsights/), or [Syslog](connect-syslog.md), and store them in [JSON](https://techcommunity.microsoft.com/t5/azure-sentinel/tip-easily-use-json-fields-in-sentinel/ba-p/768747) format in your Microsoft Sentinel [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md).
+
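+Once the forwarded alerts are ingested, you can parse their JSON payload with a KQL query. The following is only a sketch: the `LegacySIEM_Alerts_CL` table name, the **RawData** column, and the JSON field names are hypothetical placeholders for whatever your forwarding pipeline actually produces.
+
+```kusto
+// Sketch: parse forwarded legacy SIEM alerts stored as JSON.
+// LegacySIEM_Alerts_CL, RawData, and the field names below are
+// hypothetical; substitute what your ingestion pipeline creates.
+LegacySIEM_Alerts_CL
+| extend Alert = parse_json(RawData)
+| project TimeGenerated,
+          AlertName = tostring(Alert.name),
+          AlertSeverity = tostring(Alert.severity),
+          SourceHost = tostring(Alert.source_host)
+```
+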
+By sending alerts from your legacy SIEM to Microsoft Sentinel, your team can cross-correlate and investigate those alerts in Microsoft Sentinel. The team can still access the legacy SIEM for deeper investigation if needed. Meanwhile, you can continue deploying data sources over an extended transition period.
+
+This recommended, side-by-side deployment method provides you with full value from Microsoft Sentinel and the ability to deploy data sources at the pace that's right for your organization. This approach avoids duplicating costs for data storage and ingestion while you move your data sources over.
+
+For more information, see:
+
+- [Migrate QRadar offenses to Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/migrating-qradar-offenses-to-azure-sentinel/ba-p/2102043)
+- [Export data from Splunk to Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/how-to-export-data-from-splunk-to-azure-sentinel/ba-p/1891237).
+
+If you want to fully migrate to Microsoft Sentinel, review the full [migration guide](migration.md).
+
+### Send alerts and enriched incidents from Microsoft Sentinel to a legacy SIEM
+
+Analyze some data in Microsoft Sentinel, such as cloud data, and then send the generated alerts to a legacy SIEM. Use the *legacy* SIEM as your single interface to do cross-correlation with the alerts that Microsoft Sentinel generated. You can still use Microsoft Sentinel for deeper investigation of the Microsoft Sentinel-generated alerts.
+
+This configuration is cost-effective, as you can move your cloud data analysis to Microsoft Sentinel without duplicating costs or paying for data twice. You still have the freedom to migrate at your own pace. As you continue to shift data sources and detections over to Microsoft Sentinel, it becomes easier to migrate to Microsoft Sentinel as your primary interface. However, simply forwarding enriched incidents to a legacy SIEM limits the value you get from Microsoft Sentinel's investigation, hunting, and automation capabilities.
+
+For more information, see:
+
+- [Send enriched Microsoft Sentinel alerts to your legacy SIEM](https://techcommunity.microsoft.com/t5/azure-sentinel/sending-enriched-azure-sentinel-alerts-to-3rd-party-siem-and/ba-p/1456976)
+- [Send enriched Microsoft Sentinel alerts to IBM QRadar](https://techcommunity.microsoft.com/t5/azure-sentinel/azure-sentinel-side-by-side-with-qradar/ba-p/1488333)
+- [Ingest Microsoft Sentinel alerts into Splunk](https://techcommunity.microsoft.com/t5/azure-sentinel/azure-sentinel-side-by-side-with-splunk/ba-p/1211266)
+
+### Other methods
+
+The following table describes side-by-side configurations that are *not* recommended, with details as to why:
+
+|Method |Description |
+|---------|---------|
+|**Send Microsoft Sentinel logs to your legacy SIEM** | With this method, you'll continue to experience the cost and scale challenges of your on-premises SIEM. <br><br>You'll pay for data ingestion in Microsoft Sentinel, along with storage costs in your legacy SIEM, and you can't take advantage of Microsoft Sentinel's SIEM and SOAR detections, analytics, User Entity Behavior Analytics (UEBA), AI, or investigation and automation tools. |
+|**Send logs from a legacy SIEM to Microsoft Sentinel** | While this method provides you with the full functionality of Microsoft Sentinel, your organization still pays for two different data ingestion sources. Besides adding architectural complexity, this model can result in higher costs. |
+|**Use Microsoft Sentinel and your legacy SIEM as two fully separate solutions** | You could use Microsoft Sentinel to analyze some data sources, like your cloud data, and continue to use your on-premises SIEM for other sources. This setup allows for clear boundaries for when to use each solution, and avoids duplication of costs. <br><br>However, cross-correlation becomes difficult, and you can't fully diagnose attacks that cross both sets of data sources. In today's landscape, where threats often move laterally across an organization, such visibility gaps can pose significant security risks. |
+
+## Use automation to streamline processes
+
+Use automated workflows to group and prioritize alerts into a common incident, and modify its priority.
+
+For more information, see:
+
+- [Security Orchestration, Automation, and Response (SOAR) in Microsoft Sentinel](automation.md).
+- [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md)
+- [Automate incident handling in Microsoft Sentinel with automation rules](automate-incident-handling-with-automation-rules.md)
+
+## Next steps
+
+Explore Microsoft Sentinel resources to expand your skills and get the most out of Microsoft Sentinel.
+
+Also consider increasing your threat protection by using Microsoft Sentinel alongside [Microsoft 365 Defender](./microsoft-365-defender-sentinel-integration.md) and [Microsoft Defender for Cloud](../security-center/azure-defender.md) for [integrated threat protection](https://www.microsoft.com/security/business/threat-protection). Benefit from the breadth of visibility that Microsoft Sentinel delivers, while diving deeper into detailed threat analysis.
+
+For more information, see:
+
+- [Rule migration best practices](https://techcommunity.microsoft.com/t5/azure-sentinel/best-practices-for-migrating-detection-rules-from-arcsight/ba-p/2216417)
+- [Webinar: Best Practices for Converting Detection Rules](https://www.youtube.com/watch?v=njXK1h9lfR4)
+- [Security Orchestration, Automation, and Response (SOAR) in Microsoft Sentinel](automation.md)
+- [Manage your SOC better with incident metrics](manage-soc-with-incident-metrics.md)
+- [Microsoft Sentinel learning path](/learn/paths/security-ops-sentinel/)
+- [SC-200 Microsoft Security Operations Analyst certification](/learn/certifications/exams/sc-200)
+- [Microsoft Sentinel Ninja training](https://techcommunity.microsoft.com/t5/azure-sentinel/become-an-azure-sentinel-ninja-the-complete-level-400-training/ba-p/1246310)
+- [Investigate an attack on a hybrid environment with Microsoft Sentinel](https://mslearn.cloudguides.com/guides/Investigate%20an%20attack%20on%20a%20hybrid%20environment%20with%20Azure%20Sentinel)
diff --git a/articles/sentinel/detect-threats-custom.md b/articles/sentinel/detect-threats-custom.md
index c173940a35016..d18f8a0aabb7e 100644
--- a/articles/sentinel/detect-threats-custom.md
+++ b/articles/sentinel/detect-threats-custom.md
@@ -84,9 +84,6 @@ In the **Set rule logic** tab, you can either write a query directly in the **Ru
### Alert enrichment
-> [!IMPORTANT]
-> The alert enrichment features are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
- Use the **Entity mapping** configuration section to map parameters from your query results to Microsoft Sentinel-recognized entities. Entities enrich the rules' output (alerts and incidents) with essential information that serves as the building blocks of any investigative processes and remedial actions that follow. They are also the criteria by which you can group alerts together into incidents in the **Incident settings** tab.
Learn more about [entities in Microsoft Sentinel](entities.md).
@@ -144,9 +141,6 @@ If you see that your query would trigger too many or too frequent alerts, you ca
### Event grouping and rule suppression
-> [!IMPORTANT]
-> Event grouping is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
- Under **Event grouping**, choose one of two ways to handle the grouping of **events** into **alerts**:
- **Group all events into a single alert** (the default setting). The rule generates a single alert every time it runs, as long as the query returns more results than the specified **alert threshold** above. The alert includes a summary of all the events returned in the results.
@@ -181,9 +175,6 @@ If you see that your query would trigger too many or too frequent alerts, you ca
In the **Incident Settings** tab, you can choose whether and how Microsoft Sentinel turns alerts into actionable incidents. If this tab is left alone, Microsoft Sentinel will create a single, separate incident from each and every alert. You can choose to have no incidents created, or to group several alerts into a single incident, by changing the settings in this tab.
-> [!IMPORTANT]
-> The incident settings tab is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
For example:
:::image type="content" source="media/tutorial-detect-threats-custom/incident-settings-tab.png" alt-text="Define the incident creation and alert grouping settings":::
@@ -219,9 +210,15 @@ In the **Alert grouping** section, if you want a single incident to be generated
## Set automated responses and create the rule
1. In the **Automated responses** tab, you can set automation based on the alert or alerts generated by this analytics rule, or based on the incident created by the alerts.
+
- For alert-based automation, select from the drop-down list under **Alert automation** any playbooks you want to run automatically when an alert is generated.
- - For incident-based automation, select or create an automation rule under **Incident automation (preview)**. You can call playbooks (those based on the **incident trigger**) from these automation rules, as well as automate triage, assignment, and closing.
+
+ - For incident-based automation, the grid displayed under **Incident automation** shows the automation rules that already apply to this analytics rule, because it meets the conditions defined in those rules. You can edit any of these rules by selecting the ellipsis at the end of its row, or you can [create a new automation rule](create-manage-use-automation-rules.md).
+
+ You can call playbooks (those based on the **incident trigger**) from these automation rules, as well as automate triage, assignment, and closing.
+
- For more information and instructions on creating playbooks and automation rules, see [Automate threat responses](tutorial-respond-threats-playbook.md#automate-threat-responses).
+
- For more information about when to use the **alert trigger** or the **incident trigger**, see [Use triggers and actions in Microsoft Sentinel playbooks](playbook-triggers-actions.md#microsoft-sentinel-triggers-summary).
:::image type="content" source="media/tutorial-detect-threats-custom/automated-response-tab.png" alt-text="Define the automated response settings":::
diff --git a/articles/sentinel/false-positives.md b/articles/sentinel/false-positives.md
index 16a594d1e1914..0ac5c350b36b5 100644
--- a/articles/sentinel/false-positives.md
+++ b/articles/sentinel/false-positives.md
@@ -46,23 +46,27 @@ To add an automation rule to handle a false positive:
1. In Microsoft Sentinel, under **Incidents**, select the incident you want to create an exception for.
1. Select **Create automation rule**.
1. In the **Create new automation rule** sidebar, optionally modify the new rule name to identify the exception, rather than just the alert rule name.
-1. Under **Conditions**, optionally add more **Analytic rule name**s to apply the exception to.
+1. Under **Conditions**, optionally add more **Analytics rule name**s to apply the exception to.
+ Select the drop-down box containing the analytics rule name and select more analytics rules from the list.
1. The sidebar presents the specific entities in the current incident that might have caused the false positive. Keep the automatic suggestions, or modify them to fine-tune the exception. For example, you could change a condition on an IP address to apply to an entire subnet.
:::image type="content" source="media/false-positives/create-rule.png" alt-text="Screenshot showing how to create an automation rule for an incident in Microsoft Sentinel.":::
-1. After you define the trigger, you can continue to define what the rule does:
+1. After you're satisfied with the conditions, you can continue to define what the rule does:
:::image type="content" source="media/false-positives/apply-rule.png" alt-text="Screenshot showing how to finish creating and applying an automation rule in Microsoft Sentinel.":::
- The rule is already configured to close an incident that meets the exception criteria.
+ - You can keep the specified closing reason as is, or you can change it if another reason is more appropriate.
- You can add a comment to the automatically closed incident that explains the exception. For example, you could specify that the incident originated from known administrative activity.
- By default, the rule is set to expire automatically after 24 hours. This expiration might be what you want, and reduces the chance of false negative errors. If you want a longer exception, set **Rule expiration** to a later time.
+1. You can add more actions if you want. For example, you can add a tag to the incident, or run a playbook to send an email or a notification, or to synchronize with an external system.
+
1. Select **Apply** to activate the exception.
> [!TIP]
-> You can also create an automation rule from scratch, without starting from an incident. Select **Automation** from the Microsoft Sentinel left navigation menu, and then select **Create** > **Add new rule**.
+> You can also create an automation rule from scratch, without starting from an incident. Select **Automation** from the Microsoft Sentinel left navigation menu, and then select **Create** > **Add new rule**. [Learn more about automation rules](automate-incident-handling-with-automation-rules.md).
## Add exceptions by modifying analytics rules
diff --git a/articles/sentinel/media/ci-cd/deployment-logs-link.png b/articles/sentinel/media/ci-cd/deployment-logs-link.png
deleted file mode 100644
index 0dc9702d5baa6..0000000000000
Binary files a/articles/sentinel/media/ci-cd/deployment-logs-link.png and /dev/null differ
diff --git a/articles/sentinel/media/ci-cd/deployment-logs-status.png b/articles/sentinel/media/ci-cd/deployment-logs-status.png
new file mode 100644
index 0000000000000..9aaddc467691e
Binary files /dev/null and b/articles/sentinel/media/ci-cd/deployment-logs-status.png differ
diff --git a/articles/sentinel/media/create-manage-use-automation-rules/add-rule-automation.png b/articles/sentinel/media/create-manage-use-automation-rules/add-rule-automation.png
new file mode 100644
index 0000000000000..024c876612d89
Binary files /dev/null and b/articles/sentinel/media/create-manage-use-automation-rules/add-rule-automation.png differ
diff --git a/articles/sentinel/media/create-manage-use-automation-rules/add-tag.png b/articles/sentinel/media/create-manage-use-automation-rules/add-tag.png
new file mode 100644
index 0000000000000..8adf0c2bc854e
Binary files /dev/null and b/articles/sentinel/media/create-manage-use-automation-rules/add-tag.png differ
diff --git a/articles/sentinel/media/create-manage-use-automation-rules/add-values-to-condition.png b/articles/sentinel/media/create-manage-use-automation-rules/add-values-to-condition.png
new file mode 100644
index 0000000000000..0147d792520fc
Binary files /dev/null and b/articles/sentinel/media/create-manage-use-automation-rules/add-values-to-condition.png differ
diff --git a/articles/sentinel/media/create-manage-use-automation-rules/assign-owner.png b/articles/sentinel/media/create-manage-use-automation-rules/assign-owner.png
new file mode 100644
index 0000000000000..07af2adca0a69
Binary files /dev/null and b/articles/sentinel/media/create-manage-use-automation-rules/assign-owner.png differ
diff --git a/articles/sentinel/media/create-manage-use-automation-rules/condition-properties.png b/articles/sentinel/media/create-manage-use-automation-rules/condition-properties.png
new file mode 100644
index 0000000000000..36c99936a8417
Binary files /dev/null and b/articles/sentinel/media/create-manage-use-automation-rules/condition-properties.png differ
diff --git a/articles/sentinel/media/create-manage-use-automation-rules/create-automation-rule-on-incident.png b/articles/sentinel/media/create-manage-use-automation-rules/create-automation-rule-on-incident.png
new file mode 100644
index 0000000000000..9128ed3e30256
Binary files /dev/null and b/articles/sentinel/media/create-manage-use-automation-rules/create-automation-rule-on-incident.png differ
diff --git a/articles/sentinel/media/create-manage-use-automation-rules/create-automation-rule.png b/articles/sentinel/media/create-manage-use-automation-rules/create-automation-rule.png
new file mode 100644
index 0000000000000..290aa3d1bc80a
Binary files /dev/null and b/articles/sentinel/media/create-manage-use-automation-rules/create-automation-rule.png differ
diff --git a/articles/sentinel/media/create-manage-use-automation-rules/filter-list.png b/articles/sentinel/media/create-manage-use-automation-rules/filter-list.png
new file mode 100644
index 0000000000000..19614462b39e4
Binary files /dev/null and b/articles/sentinel/media/create-manage-use-automation-rules/filter-list.png differ
diff --git a/articles/sentinel/media/create-manage-use-automation-rules/incident-automation-rule-populated.png b/articles/sentinel/media/create-manage-use-automation-rules/incident-automation-rule-populated.png
new file mode 100644
index 0000000000000..2d2b42834ab34
Binary files /dev/null and b/articles/sentinel/media/create-manage-use-automation-rules/incident-automation-rule-populated.png differ
diff --git a/articles/sentinel/media/create-manage-use-automation-rules/select-action.png b/articles/sentinel/media/create-manage-use-automation-rules/select-action.png
new file mode 100644
index 0000000000000..b9711a2fe909a
Binary files /dev/null and b/articles/sentinel/media/create-manage-use-automation-rules/select-action.png differ
diff --git a/articles/sentinel/media/create-manage-use-automation-rules/select-create-automation-rule.png b/articles/sentinel/media/create-manage-use-automation-rules/select-create-automation-rule.png
new file mode 100644
index 0000000000000..5847eb6152ced
Binary files /dev/null and b/articles/sentinel/media/create-manage-use-automation-rules/select-create-automation-rule.png differ
diff --git a/articles/sentinel/media/create-manage-use-automation-rules/select-operator.png b/articles/sentinel/media/create-manage-use-automation-rules/select-operator.png
new file mode 100644
index 0000000000000..0667019d186de
Binary files /dev/null and b/articles/sentinel/media/create-manage-use-automation-rules/select-operator.png differ
diff --git a/articles/sentinel/media/create-manage-use-automation-rules/select-trigger.png b/articles/sentinel/media/create-manage-use-automation-rules/select-trigger.png
new file mode 100644
index 0000000000000..2bc971f6e7652
Binary files /dev/null and b/articles/sentinel/media/create-manage-use-automation-rules/select-trigger.png differ
diff --git a/articles/sentinel/media/migration-arcsight-automation/arcsight-sentinel-soar-workflow.png b/articles/sentinel/media/migration-arcsight-automation/arcsight-sentinel-soar-workflow.png
new file mode 100644
index 0000000000000..b4cd67e3e5865
Binary files /dev/null and b/articles/sentinel/media/migration-arcsight-automation/arcsight-sentinel-soar-workflow.png differ
diff --git a/articles/sentinel/media/migration-arcsight-detection-rules/compare-rule-terminology.png b/articles/sentinel/media/migration-arcsight-detection-rules/compare-rule-terminology.png
new file mode 100644
index 0000000000000..d5f4fbd4bd6f7
Binary files /dev/null and b/articles/sentinel/media/migration-arcsight-detection-rules/compare-rule-terminology.png differ
diff --git a/articles/sentinel/media/migration-arcsight-detection-rules/rule-1-sample.png b/articles/sentinel/media/migration-arcsight-detection-rules/rule-1-sample.png
new file mode 100644
index 0000000000000..9e7617c4e3ce1
Binary files /dev/null and b/articles/sentinel/media/migration-arcsight-detection-rules/rule-1-sample.png differ
diff --git a/articles/sentinel/media/migration-arcsight-detection-rules/rule-2-sample.png b/articles/sentinel/media/migration-arcsight-detection-rules/rule-2-sample.png
new file mode 100644
index 0000000000000..bd8b01afe6cb8
Binary files /dev/null and b/articles/sentinel/media/migration-arcsight-detection-rules/rule-2-sample.png differ
diff --git a/articles/sentinel/media/migration-arcsight-detection-rules/rule-3-sample-1.png b/articles/sentinel/media/migration-arcsight-detection-rules/rule-3-sample-1.png
new file mode 100644
index 0000000000000..dabc0fce98397
Binary files /dev/null and b/articles/sentinel/media/migration-arcsight-detection-rules/rule-3-sample-1.png differ
diff --git a/articles/sentinel/media/migration-arcsight-detection-rules/rule-3-sample-2.png b/articles/sentinel/media/migration-arcsight-detection-rules/rule-3-sample-2.png
new file mode 100644
index 0000000000000..c191a5c6ccdfc
Binary files /dev/null and b/articles/sentinel/media/migration-arcsight-detection-rules/rule-3-sample-2.png differ
diff --git a/articles/sentinel/media/migration-arcsight-detection-rules/rule-4-sample.png b/articles/sentinel/media/migration-arcsight-detection-rules/rule-4-sample.png
new file mode 100644
index 0000000000000..29dddd511643b
Binary files /dev/null and b/articles/sentinel/media/migration-arcsight-detection-rules/rule-4-sample.png differ
diff --git a/articles/sentinel/media/migration-arcsight-detection-rules/rule-5-sample.png b/articles/sentinel/media/migration-arcsight-detection-rules/rule-5-sample.png
new file mode 100644
index 0000000000000..63c1bf4008f30
Binary files /dev/null and b/articles/sentinel/media/migration-arcsight-detection-rules/rule-5-sample.png differ
diff --git a/articles/sentinel/media/migration-arcsight-detection-rules/rule-6-sample.png b/articles/sentinel/media/migration-arcsight-detection-rules/rule-6-sample.png
new file mode 100644
index 0000000000000..87e0e16a3839e
Binary files /dev/null and b/articles/sentinel/media/migration-arcsight-detection-rules/rule-6-sample.png differ
diff --git a/articles/sentinel/media/migration-arcsight-detection-rules/rule-7-sample.png b/articles/sentinel/media/migration-arcsight-detection-rules/rule-7-sample.png
new file mode 100644
index 0000000000000..6a3a790a44101
Binary files /dev/null and b/articles/sentinel/media/migration-arcsight-detection-rules/rule-7-sample.png differ
diff --git a/articles/sentinel/media/migration-export-ingest/export-data.png b/articles/sentinel/media/migration-export-ingest/export-data.png
new file mode 100644
index 0000000000000..6b581ed961b36
Binary files /dev/null and b/articles/sentinel/media/migration-export-ingest/export-data.png differ
diff --git a/articles/sentinel/media/migration-overview/migration-phases.png b/articles/sentinel/media/migration-overview/migration-phases.png
new file mode 100644
index 0000000000000..2d9fb39eff96a
Binary files /dev/null and b/articles/sentinel/media/migration-overview/migration-phases.png differ
diff --git a/articles/sentinel/media/migration-qradar-automation/qradar-sentinel-soar-workflow.png b/articles/sentinel/media/migration-qradar-automation/qradar-sentinel-soar-workflow.png
new file mode 100644
index 0000000000000..72531a23f1c88
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-automation/qradar-sentinel-soar-workflow.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/compare-rule-terminology.png b/articles/sentinel/media/migration-qradar-detection-rules/compare-rule-terminology.png
new file mode 100644
index 0000000000000..d5f4fbd4bd6f7
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/compare-rule-terminology.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-1-sample-aql.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-1-sample-aql.png
new file mode 100644
index 0000000000000..7f28336ea9215
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-1-sample-aql.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-1-sample-equals.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-1-sample-equals.png
new file mode 100644
index 0000000000000..84646f7b1fa19
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-1-sample-equals.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-1-sample.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-1-sample.png
new file mode 100644
index 0000000000000..c0df6cc6d75ca
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-1-sample.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-1-syntax.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-1-syntax.png
new file mode 100644
index 0000000000000..48d93ab446709
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-1-syntax.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-2-sample-after-before-at.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-2-sample-after-before-at.png
new file mode 100644
index 0000000000000..99ddbe8d2d985
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-2-sample-after-before-at.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-2-sample-protocol.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-2-sample-protocol.png
new file mode 100644
index 0000000000000..ad02d16960ceb
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-2-sample-protocol.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-2-sample-selected-day-week.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-2-sample-selected-day-week.png
new file mode 100644
index 0000000000000..028957d2a444f
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-2-sample-selected-day-week.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-2-sample-selected-day.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-2-sample-selected-day.png
new file mode 100644
index 0000000000000..0a7fa07cdf394
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-2-sample-selected-day.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-2-syntax.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-2-syntax.png
new file mode 100644
index 0000000000000..601d341c377f8
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-2-syntax.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-3-sample-payload.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-3-sample-payload.png
new file mode 100644
index 0000000000000..05d0febceeefc
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-3-sample-payload.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-3-sample-protocol.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-3-sample-protocol.png
new file mode 100644
index 0000000000000..5872d203b1aa4
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-3-sample-protocol.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-3-syntax.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-3-syntax.png
new file mode 100644
index 0000000000000..b49ce7776b4ac
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-3-syntax.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-4-sample-event-property.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-4-sample-event-property.png
new file mode 100644
index 0000000000000..b2edec02c9012
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-4-sample-event-property.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-4-syntax.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-4-syntax.png
new file mode 100644
index 0000000000000..525460a8c6619
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-4-syntax.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-5-sample-1.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-5-sample-1.png
new file mode 100644
index 0000000000000..29a0229eb4ed7
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-5-sample-1.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-5-sample-2.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-5-sample-2.png
new file mode 100644
index 0000000000000..efa0741ffbf9b
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-5-sample-2.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-5-sample-3.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-5-sample-3.png
new file mode 100644
index 0000000000000..0e82e348aa72d
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-5-sample-3.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-5-syntax.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-5-syntax.png
new file mode 100644
index 0000000000000..0e9bf6e15f745
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-5-syntax.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-6-sample-1.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-6-sample-1.png
new file mode 100644
index 0000000000000..62b02382384d9
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-6-sample-1.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-6-syntax.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-6-syntax.png
new file mode 100644
index 0000000000000..2fc93d0efa851
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-6-syntax.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-7-sample-1-port.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-7-sample-1-port.png
new file mode 100644
index 0000000000000..5bfb884388ca2
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-7-sample-1-port.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-7-sample-2-ip.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-7-sample-2-ip.png
new file mode 100644
index 0000000000000..a9c8e83365142
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-7-sample-2-ip.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-7-syntax.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-7-syntax.png
new file mode 100644
index 0000000000000..829db722e5f8f
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-7-syntax.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-8-sample-1.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-8-sample-1.png
new file mode 100644
index 0000000000000..730b18cfbb14c
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-8-sample-1.png differ
diff --git a/articles/sentinel/media/migration-qradar-detection-rules/rule-8-syntax.png b/articles/sentinel/media/migration-qradar-detection-rules/rule-8-syntax.png
new file mode 100644
index 0000000000000..73b80e31d3ab2
Binary files /dev/null and b/articles/sentinel/media/migration-qradar-detection-rules/rule-8-syntax.png differ
diff --git a/articles/sentinel/media/migration-soc-processes/analyst-workflow-assign-incidents.png b/articles/sentinel/media/migration-soc-processes/analyst-workflow-assign-incidents.png
new file mode 100644
index 0000000000000..ad580eb86effa
Binary files /dev/null and b/articles/sentinel/media/migration-soc-processes/analyst-workflow-assign-incidents.png differ
diff --git a/articles/sentinel/media/migration-soc-processes/analyst-workflow-incident-details.png b/articles/sentinel/media/migration-soc-processes/analyst-workflow-incident-details.png
new file mode 100644
index 0000000000000..53168df5d069a
Binary files /dev/null and b/articles/sentinel/media/migration-soc-processes/analyst-workflow-incident-details.png differ
diff --git a/articles/sentinel/media/migration-soc-processes/analyst-workflow-incident-triage-notebook.png b/articles/sentinel/media/migration-soc-processes/analyst-workflow-incident-triage-notebook.png
new file mode 100644
index 0000000000000..b052501c11abb
Binary files /dev/null and b/articles/sentinel/media/migration-soc-processes/analyst-workflow-incident-triage-notebook.png differ
diff --git a/articles/sentinel/media/migration-soc-processes/analyst-workflow-incidents.png b/articles/sentinel/media/migration-soc-processes/analyst-workflow-incidents.png
new file mode 100644
index 0000000000000..6a096cabcbf7c
Binary files /dev/null and b/articles/sentinel/media/migration-soc-processes/analyst-workflow-incidents.png differ
diff --git a/articles/sentinel/media/migration-soc-processes/analyst-workflow-investigation-graph.png b/articles/sentinel/media/migration-soc-processes/analyst-workflow-investigation-graph.png
new file mode 100644
index 0000000000000..c0e5c00769542
Binary files /dev/null and b/articles/sentinel/media/migration-soc-processes/analyst-workflow-investigation-graph.png differ
diff --git a/articles/sentinel/media/migration-soc-processes/analyst-workflow-investigation-workbooks.png b/articles/sentinel/media/migration-soc-processes/analyst-workflow-investigation-workbooks.png
new file mode 100644
index 0000000000000..137ce385986c9
Binary files /dev/null and b/articles/sentinel/media/migration-soc-processes/analyst-workflow-investigation-workbooks.png differ
diff --git a/articles/sentinel/media/migration-soc-processes/analyst-workflow-playbooks.png b/articles/sentinel/media/migration-soc-processes/analyst-workflow-playbooks.png
new file mode 100644
index 0000000000000..1b9e0942d8fbb
Binary files /dev/null and b/articles/sentinel/media/migration-soc-processes/analyst-workflow-playbooks.png differ
diff --git a/articles/sentinel/media/migration-splunk-automation/splunk-sentinel-soar-workflow-new.png b/articles/sentinel/media/migration-splunk-automation/splunk-sentinel-soar-workflow-new.png
new file mode 100644
index 0000000000000..1aa4e81e2d4ea
Binary files /dev/null and b/articles/sentinel/media/migration-splunk-automation/splunk-sentinel-soar-workflow-new.png differ
diff --git a/articles/sentinel/media/migration-splunk-automation/splunk-sentinel-soar-workflow.png b/articles/sentinel/media/migration-splunk-automation/splunk-sentinel-soar-workflow.png
new file mode 100644
index 0000000000000..73f0434ab5587
Binary files /dev/null and b/articles/sentinel/media/migration-splunk-automation/splunk-sentinel-soar-workflow.png differ
diff --git a/articles/sentinel/media/migration-track/migration-track-analytics-monitor.png b/articles/sentinel/media/migration-track/migration-track-analytics-monitor.png
new file mode 100644
index 0000000000000..8c90895bf7a39
Binary files /dev/null and b/articles/sentinel/media/migration-track/migration-track-analytics-monitor.png differ
diff --git a/articles/sentinel/media/migration-track/migration-track-analytics.png b/articles/sentinel/media/migration-track/migration-track-analytics.png
new file mode 100644
index 0000000000000..54de47a6765c7
Binary files /dev/null and b/articles/sentinel/media/migration-track/migration-track-analytics.png differ
diff --git a/articles/sentinel/media/migration-track/migration-track-automation.png b/articles/sentinel/media/migration-track/migration-track-automation.png
new file mode 100644
index 0000000000000..0ce20f2357129
Binary files /dev/null and b/articles/sentinel/media/migration-track/migration-track-automation.png differ
diff --git a/articles/sentinel/media/migration-track/migration-track-azure-deployment.png b/articles/sentinel/media/migration-track/migration-track-azure-deployment.png
new file mode 100644
index 0000000000000..264f0a99edf6d
Binary files /dev/null and b/articles/sentinel/media/migration-track/migration-track-azure-deployment.png differ
diff --git a/articles/sentinel/media/migration-track/migration-track-configure-data-connectors.png b/articles/sentinel/media/migration-track/migration-track-configure-data-connectors.png
new file mode 100644
index 0000000000000..863de5da3693f
Binary files /dev/null and b/articles/sentinel/media/migration-track/migration-track-configure-data-connectors.png differ
diff --git a/articles/sentinel/media/migration-track/migration-track-customize.png b/articles/sentinel/media/migration-track/migration-track-customize.png
new file mode 100644
index 0000000000000..1c65320d2c330
Binary files /dev/null and b/articles/sentinel/media/migration-track/migration-track-customize.png differ
diff --git a/articles/sentinel/media/migration-track/migration-track-data-connectors.png b/articles/sentinel/media/migration-track/migration-track-data-connectors.png
new file mode 100644
index 0000000000000..5e2e50d78c817
Binary files /dev/null and b/articles/sentinel/media/migration-track/migration-track-data-connectors.png differ
diff --git a/articles/sentinel/media/migration-track/migration-track-data-management.png b/articles/sentinel/media/migration-track/migration-track-data-management.png
new file mode 100644
index 0000000000000..91ceccc2d3a9c
Binary files /dev/null and b/articles/sentinel/media/migration-track/migration-track-data-management.png differ
diff --git a/articles/sentinel/media/migration-track/migration-track-mitre.png b/articles/sentinel/media/migration-track/migration-track-mitre.png
new file mode 100644
index 0000000000000..4a0f5bdead911
Binary files /dev/null and b/articles/sentinel/media/migration-track/migration-track-mitre.png differ
diff --git a/articles/sentinel/media/migration-track/migration-track-tips.png b/articles/sentinel/media/migration-track/migration-track-tips.png
new file mode 100644
index 0000000000000..c847d511ae6ba
Binary files /dev/null and b/articles/sentinel/media/migration-track/migration-track-tips.png differ
diff --git a/articles/sentinel/media/migration-track/migration-track-ueba.png b/articles/sentinel/media/migration-track/migration-track-ueba.png
new file mode 100644
index 0000000000000..32e7ee6613cbf
Binary files /dev/null and b/articles/sentinel/media/migration-track/migration-track-ueba.png differ
diff --git a/articles/sentinel/media/migration-track/migration-track-update-watchlist.png b/articles/sentinel/media/migration-track/migration-track-update-watchlist.png
new file mode 100644
index 0000000000000..71e9ffbf567da
Binary files /dev/null and b/articles/sentinel/media/migration-track/migration-track-update-watchlist.png differ
diff --git a/articles/sentinel/media/migration-track/migration-track-update.png b/articles/sentinel/media/migration-track/migration-track-update.png
new file mode 100644
index 0000000000000..821cc15f27240
Binary files /dev/null and b/articles/sentinel/media/migration-track/migration-track-update.png differ
diff --git a/articles/sentinel/media/migration-track/migration-track-view-workbooks.png b/articles/sentinel/media/migration-track/migration-track-view-workbooks.png
new file mode 100644
index 0000000000000..a7bb0ceb2dcff
Binary files /dev/null and b/articles/sentinel/media/migration-track/migration-track-view-workbooks.png differ
diff --git a/articles/sentinel/media/migration-track/migration-track-workbook.png b/articles/sentinel/media/migration-track/migration-track-workbook.png
new file mode 100644
index 0000000000000..9106bb16b8309
Binary files /dev/null and b/articles/sentinel/media/migration-track/migration-track-workbook.png differ
diff --git a/articles/sentinel/media/tutorial-detect-threats-custom/automated-response-tab-old.png b/articles/sentinel/media/tutorial-detect-threats-custom/automated-response-tab-old.png
deleted file mode 100644
index be4b1b5300d65..0000000000000
Binary files a/articles/sentinel/media/tutorial-detect-threats-custom/automated-response-tab-old.png and /dev/null differ
diff --git a/articles/sentinel/media/tutorial-detect-threats-custom/automated-response-tab.png b/articles/sentinel/media/tutorial-detect-threats-custom/automated-response-tab.png
index 28f2190c906da..86b2d340b3e40 100644
Binary files a/articles/sentinel/media/tutorial-detect-threats-custom/automated-response-tab.png and b/articles/sentinel/media/tutorial-detect-threats-custom/automated-response-tab.png differ
diff --git a/articles/sentinel/media/tutorial-respond-threats-playbook/create-automation-rule.png b/articles/sentinel/media/tutorial-respond-threats-playbook/create-automation-rule.png
index d03bb37220a19..290aa3d1bc80a 100644
Binary files a/articles/sentinel/media/tutorial-respond-threats-playbook/create-automation-rule.png and b/articles/sentinel/media/tutorial-respond-threats-playbook/create-automation-rule.png differ
diff --git a/articles/sentinel/migration-arcsight-automation.md b/articles/sentinel/migration-arcsight-automation.md
new file mode 100644
index 0000000000000..358b197d88ac5
--- /dev/null
+++ b/articles/sentinel/migration-arcsight-automation.md
@@ -0,0 +1,85 @@
+---
+title: Migrate ArcSight SOAR automation to Microsoft Sentinel | Microsoft Docs
+description: Learn how to identify SOAR use cases, and how to migrate your ArcSight SOAR automation to Microsoft Sentinel.
+author: limwainstein
+ms.author: lwainstein
+ms.topic: how-to
+ms.date: 05/03/2022
+---
+
+# Migrate ArcSight SOAR automation to Microsoft Sentinel
+
+Microsoft Sentinel provides Security Orchestration, Automation, and Response (SOAR) capabilities with [automation rules](automate-incident-handling-with-automation-rules.md) and [playbooks](tutorial-respond-threats-playbook.md). Automation rules automate incident handling and response, and playbooks run predetermined sequences of actions to respond to and remediate threats. This article discusses how to identify SOAR use cases, and how to migrate your ArcSight SOAR automation to Microsoft Sentinel.
+
+Automation rules simplify complex workflows for your incident orchestration processes, and allow you to centrally manage your incident handling automation.
+
+With automation rules, you can:
+- Perform simple automation tasks without necessarily using playbooks. For example, you can assign or tag incidents, change their status, and close them.
+- Automate responses for multiple analytics rules at once.
+- Control the order of actions that are executed.
+- Run playbooks for those cases where more complex automation tasks are necessary.
+
+## Identify SOAR use cases
+
+Here’s what you need to think about when migrating SOAR use cases from ArcSight.
+- **Use case quality**. Choose good use cases for automation. Use cases should be based on procedures that are clearly defined, have minimal variation, and have a low false-positive rate. Automation works best with such efficient, well-defined use cases.
+- **Manual intervention**. Automated responses can have wide-ranging effects. High-impact automations should include human input to confirm high-impact actions before they're taken.
+- **Binary criteria**. To increase response success, decision points within an automated workflow should be as limited as possible, with binary criteria. Binary criteria reduce the need for human intervention and enhance outcome predictability.
+- **Accurate alerts or data**. Response actions depend on the accuracy of signals such as alerts. Alerts and enrichment sources should be reliable. Microsoft Sentinel resources such as watchlists and reliable threat intelligence can enhance reliability.
+- **Analyst role**. Automate where possible, but reserve more complex tasks for analysts, and give them the opportunity to provide input into workflows that require validation. In short, response automation should augment and extend analyst capabilities.
+
+## Migrate SOAR workflow
+
+This section shows how key SOAR concepts in ArcSight translate to Microsoft Sentinel components, and provides general guidelines for how to migrate each step or component in the SOAR workflow.
+
+:::image type="content" source="media/migration-arcsight-automation/arcsight-sentinel-soar-workflow.png" alt-text="Diagram displaying the ArcSight and Microsoft Sentinel SOAR workflows." border="false":::
+
+|Step (in diagram) |ArcSight |Microsoft Sentinel |
+|---------|---------|---------|
+|1 |Ingest events into Enterprise Security Manager (ESM) and trigger correlation events. |Ingest events into the Log Analytics workspace. |
+|2 |Automatically filter alerts for case creation. |Use [analytics rules](detect-threats-built-in.md#use-built-in-analytics-rules) to trigger alerts. Enrich alerts using the [custom details feature](surface-custom-details-in-alerts.md) to create dynamic incident names. |
+|3 |Classify cases. |Use [automation rules](automate-incident-handling-with-automation-rules.md). With automation rules, Microsoft Sentinel treats incidents according to the analytics rule that triggered the incident, and the incident properties that match defined criteria. |
+|4 |Consolidate cases. |You can consolidate several alerts into a single incident according to properties such as matching entities, alert details, or creation timeframe, using the alert grouping feature. |
+|5 |Dispatch cases. |Assign incidents to specific analysts using [an integration](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/automate-incident-assignment-with-shifts-for-teams/ba-p/2297549) between Microsoft Teams, Azure Logic Apps, and Microsoft Sentinel automation rules. |
+
+## Map SOAR components
+
+Review which Microsoft Sentinel or Azure Logic Apps features map to the main ArcSight SOAR components.
+
+|ArcSight |Microsoft Sentinel/Azure Logic Apps |
+|---------|---------|
+|Trigger |[Trigger](../logic-apps/logic-apps-overview.md) |
+|Automation bit |[Azure Function connector](../logic-apps/logic-apps-azure-functions.md) |
+|Action |[Action](../logic-apps/logic-apps-overview.md) |
+|Scheduled playbooks |Playbooks initiated by the [recurrence trigger](../connectors/connectors-native-recurrence.md) |
+|Workflow playbooks |Playbooks automatically initiated by Microsoft Sentinel [alert or incident triggers](playbook-triggers-actions.md) |
+|Marketplace |• [Automation > Templates tab](use-playbook-templates.md) • [Content hub catalog](sentinel-solutions-catalog.md) • [GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Block-OnPremADUser) |
+
+## Operationalize playbooks and automation rules in Microsoft Sentinel
+
+Most of the playbooks that you use with Microsoft Sentinel are available in the [Automation > Templates tab](use-playbook-templates.md), the [Content hub catalog](sentinel-solutions-catalog.md), or [GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Block-OnPremADUser). In some cases, however, you might need to create playbooks from scratch or from existing templates.
+
+You typically build your custom logic app by using the Azure Logic Apps designer. Logic app code is based on [Azure Resource Manager (ARM) templates](../azure-resource-manager/templates/overview.md), which facilitate development, deployment, and portability of Azure Logic Apps across multiple environments. To convert your custom playbook into a portable ARM template, you can use the [ARM template generator](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/export-microsoft-sentinel-playbooks-or-azure-logic-apps-with/ba-p/3275898).
+
+Use these resources for cases where you need to build your own playbooks either from scratch or from existing templates.
+- [Automate incident handling in Microsoft Sentinel](automate-incident-handling-with-automation-rules.md)
+- [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md)
+- [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](tutorial-respond-threats-playbook.md)
+- [How to use Microsoft Sentinel for Incident Response, Orchestration and Automation](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/how-to-use-azure-sentinel-for-incident-response-orchestration/ba-p/2242397)
+- [Adaptive Cards to enhance incident response in Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/using-microsoft-teams-adaptive-cards-to-enhance-incident/ba-p/3330941)
+
+## SOAR post migration best practices
+
+Here are best practices you should take into account after your SOAR migration:
+
+- After you migrate your playbooks, test the playbooks extensively to ensure that the migrated actions work as expected.
+- Periodically review your automations to explore ways to simplify or enhance your SOAR. Microsoft Sentinel constantly adds new connectors and actions that can increase the effectiveness of your current response implementations.
+- Monitor the performance of your playbooks using the [Playbooks health monitoring workbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/what-s-new-monitoring-your-logic-apps-playbooks-in-azure/ba-p/1873211).
+- Use managed identities and service principals: Authenticate against various Azure services within your Logic Apps, store the secrets in Azure Key Vault, and obscure the output of the flow execution. We also recommend that you [monitor the activities of these service principals](https://techcommunity.microsoft.com/t5/azure-sentinel/non-interactive-logins-minimizing-the-blind-spot/ba-p/2287932).
+
+## Next steps
+
+In this article, you learned how to map your SOAR automation from ArcSight to Microsoft Sentinel.
+
+> [!div class="nextstepaction"]
+> [Export your historical data](migration-arcsight-historical-data.md)
\ No newline at end of file
diff --git a/articles/sentinel/migration-arcsight-detection-rules.md b/articles/sentinel/migration-arcsight-detection-rules.md
new file mode 100644
index 0000000000000..74ea163a5ce6b
--- /dev/null
+++ b/articles/sentinel/migration-arcsight-detection-rules.md
@@ -0,0 +1,359 @@
+---
+title: Migrate ArcSight detection rules to Microsoft Sentinel | Microsoft Docs
+description: Identify, compare, and migrate your ArcSight detection rules to Microsoft Sentinel built-in rules.
+author: limwainstein
+ms.author: lwainstein
+ms.topic: how-to
+ms.date: 05/03/2022
+ms.custom: ignite-fall-2021
+---
+
+# Migrate ArcSight detection rules to Microsoft Sentinel
+
+This article describes how to identify, compare, and migrate your ArcSight detection rules to Microsoft Sentinel analytics rules.
+
+## Identify and migrate rules
+
+Microsoft Sentinel uses machine learning analytics to create high-fidelity and actionable incidents, and some of your existing detections may be redundant in Microsoft Sentinel. Therefore, don't migrate all of your detection and analytics rules blindly. Review these considerations as you identify your existing detection rules.
+
+- Make sure to select use cases that justify rule migration, considering business priority and efficiency.
+- Check that you [understand Microsoft Sentinel rule types](detect-threats-built-in.md#view-built-in-detections).
+- Check that you understand the [rule terminology](#compare-rule-terminology).
+- Review any rules that haven't triggered any alerts in the past 6-12 months, and determine whether they're still relevant.
+- Eliminate low-level threats or alerts that you routinely ignore.
+- Use existing functionality, and check whether Microsoft Sentinel’s [built-in analytics rules](https://github.com/Azure/Azure-Sentinel/tree/master/Detections) might address your current use cases. Because Microsoft Sentinel uses machine learning analytics to produce high-fidelity and actionable incidents, it’s likely that some of your existing detections won’t be required anymore.
+- Confirm connected data sources and review your data connection methods. Revisit data collection conversations to ensure data depth and breadth across the use cases you plan to detect.
+- Explore community resources such as the [SOC Prime Threat Detection Marketplace](https://my.socprime.com/tdm/) to check whether your rules are available.
+- Consider whether an online query converter such as Uncoder.io might work for your rules.
+- If rules aren’t available or can’t be converted, they need to be created manually, using a KQL query. Review the [rules mapping](#map-and-compare-rule-samples) to create new queries.
+
+Learn more about [best practices for migrating detection rules](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/best-practices-for-migrating-detection-rules-from-arcsight/ba-p/2216417).
+
+**To migrate your analytics rules to Microsoft Sentinel**:
+
+1. Verify that you have a testing system in place for each rule you want to migrate.
+
+ 1. **Prepare a validation process** for your migrated rules, including full test scenarios and scripts.
+
+ 1. **Ensure that your team has useful resources** to test your migrated rules.
+
+ 1. **Confirm that you have any required data sources connected,** and review your data connection methods.
+
+1. Verify whether your detections are available as built-in templates in Microsoft Sentinel:
+
+ - **If the built-in rules are sufficient**, use built-in rule templates to create rules for your own workspace.
+
+ In Microsoft Sentinel, go to the **Configuration > Analytics > Rule templates** tab, and create and update each relevant analytics rule.
+
+ For more information, see [Detect threats out-of-the-box](detect-threats-built-in.md).
+
+ - **If you have detections that aren't covered by Microsoft Sentinel's built-in rules**, try an online query converter, such as [Uncoder.io](https://uncoder.io/) to convert your queries to KQL.
+
+ Identify the trigger condition and rule action, and then construct and review your KQL query.
+
+ - **If neither the built-in rules nor an online rule converter is sufficient**, you'll need to create the rule manually. In such cases, use the following steps to start creating your rule:
+
+ 1. **Identify the data sources you want to use in your rule**. You'll want to create a mapping table between data sources and data tables in Microsoft Sentinel to identify the tables you want to query.
+
+ 1. **Identify any attributes, fields, or entities** in your data that you want to use in your rules.
+
+ 1. **Identify your rule criteria and logic**. At this stage, you may want to use rule templates as samples for how to construct your KQL queries.
+
+ Consider filters, correlation rules, active lists, reference sets, watchlists, detection anomalies, aggregations, and so on. You might use references provided by your legacy SIEM to understand [how to best map your query syntax](#map-and-compare-rule-samples).
+
+ 1. **Identify the trigger condition and rule action, and then construct and review your KQL query**. When reviewing your query, consider KQL optimization guidance resources.
+
+1. Test the rule with each of your relevant use cases. If it doesn't provide expected results, you may want to review the KQL and test it again.
+
+1. When you're satisfied, you can consider the rule migrated. Create a playbook for your rule action as needed. For more information, see [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md).
+
+Learn more about analytics rules.
+
+- [**Create custom analytics rules to detect threats**](detect-threats-custom.md). Use [alert grouping](detect-threats-custom.md#alert-grouping) to reduce alert fatigue by grouping alerts that occur within a given timeframe.
+- [**Map data fields to entities in Microsoft Sentinel**](map-data-fields-to-entities.md) to enable SOC engineers to define entities as part of the evidence to track during an investigation. Entity mapping also makes it possible for SOC analysts to take advantage of an intuitive [investigation graph](investigate-cases.md#use-the-investigation-graph-to-deep-dive) that can help reduce time and effort.
+- [**Investigate incidents with UEBA data**](investigate-with-ueba.md), as an example of how to use evidence to surface events, alerts, and any bookmarks associated with a particular incident in the incident preview pane.
+- [**Kusto Query Language (KQL)**](/azure/data-explorer/kusto/query/), which you can use to send read-only requests to your [Log Analytics](../azure-monitor/logs/log-analytics-tutorial.md) database to process data and return results. KQL is also used across other Microsoft services, such as [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender) and [Application Insights](../azure-monitor/app/app-insights-overview.md).
+
+## Compare rule terminology
+
+This table helps you to clarify the concept of a rule in Microsoft Sentinel compared to ArcSight.
+
+| |ArcSight |Microsoft Sentinel |
+|---------|---------|---------|
+|**Rule type** |• Filter rule • Join rule • Active list rule • And more |• Scheduled query • Fusion • Microsoft Security • Machine Learning (ML) Behavior Analytics |
+|**Criteria** |Define in rule conditions |Define in KQL |
+|**Trigger condition** |• Define in action • Define in aggregation (for event aggregation) |Threshold: Number of query results |
+|**Action** |• Set event field • Send notification • Create new case • Add to active list • And more |• Create alert or incident • Integrates with Logic Apps |
+
+## Map and compare rule samples
+
+Use these samples to compare and map rules from ArcSight to Microsoft Sentinel in various scenarios.
+
+|Rule |Description |Sample detection rule (ArcSight) |Sample KQL query |Resources |
+|---------|---------|---------|---------|---------|
+|Filter (`AND`) |A sample rule with `AND` conditions. The event must match all conditions. |[Filter (AND) example](#filter-and-example-arcsight) |[Filter (AND) example](#filter-and-example-kql) |String filter: • [String operators](/azure/data-explorer/kusto/query/datatypes-string-operators#operators-on-strings)<br>Parsing: • [parse](/azure/data-explorer/kusto/query/parseoperator) • [extract](/azure/data-explorer/kusto/query/extractfunction) • [parse_json](/azure/data-explorer/kusto/query/parsejsonfunction) • [parse_csv](/azure/data-explorer/kusto/query/parseoperator) • [parse_path](/azure/data-explorer/kusto/query/parsepathfunction) • [parse_url](/azure/data-explorer/kusto/query/parseurlfunction) |
+|Filter (`OR`) |A sample rule with `OR` conditions. The event can match any of the conditions. |[Filter (OR) example](#filter-or-example-arcsight) |[Filter (OR) example](#filter-or-example-kql) |• [String operators](/azure/data-explorer/kusto/query/datatypes-string-operators#operators-on-strings) • [in](/azure/data-explorer/kusto/query/inoperator) |
+|Nested filter |A sample rule with nested filtering conditions. The rule includes the `MatchesFilter` statement, which also includes filtering conditions. |[Nested filter example](#nested-filter-example-arcsight) |[Nested filter example](#nested-filter-example-kql) |• [Sample KQL function](https://techcommunity.microsoft.com/t5/azure-sentinel/using-kql-functions-to-speed-up-analysis-in-azure-sentinel/ba-p/712381) • [Sample parameter function](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/enriching-windows-security-events-with-parameterized-function/ba-p/1712564) • [join](/azure/data-explorer/kusto/query/joinoperator?pivots=azuredataexplorer) • [where](/azure/data-explorer/kusto/query/whereoperator) |
+|Active list (lookup) |A sample lookup rule that uses the `InActiveList` statement. |[Active list (lookup) example](#active-list-lookup-example-arcsight) |[Active list (lookup) example](#active-list-lookup-example-kql) |• A watchlist is the equivalent of the active list feature. Learn more about [watchlists](watchlists.md). • [Other ways to implement lookups](https://techcommunity.microsoft.com/t5/azure-sentinel/implementing-lookups-in-azure-sentinel/ba-p/1091306) |
+|Correlation (matching) |A sample rule that defines a condition against a set of base events, using the `Matching Event` statement. |[Correlation (matching) example](#correlation-matching-example-arcsight) |[Correlation (matching) example](#correlation-matching-example-kql) |join operator: • [join](/azure/data-explorer/kusto/query/joinoperator?pivots=azuredataexplorer) • [join with time window](/azure/data-explorer/kusto/query/join-timewindow) • [shuffle](/azure/data-explorer/kusto/query/shufflequery) • [Broadcast](/azure/data-explorer/kusto/query/broadcastjoin) • [Union](/azure/data-explorer/kusto/query/unionoperator?pivots=azuredataexplorer)<br>Aggregation: • [make_set](/azure/data-explorer/kusto/query/makeset-aggfunction) • [make_list](/azure/data-explorer/kusto/query/makelist-aggfunction) • [make_bag](/azure/data-explorer/kusto/query/make-bag-aggfunction) • [pack](/azure/data-explorer/kusto/query/packfunction) |
+|Correlation (time window) |A sample rule that defines a condition against a set of base events, using the `Matching Event` statement, and uses the `Wait time` filter condition. |[Correlation (time window) example](#correlation-time-window-example-arcsight) |[Correlation (time window) example](#correlation-time-window-example-kql) |• [join](/azure/data-explorer/kusto/query/joinoperator?pivots=azuredataexplorer) • [Microsoft Sentinel rules and join statement](https://techcommunity.microsoft.com/t5/azure-sentinel/azure-sentinel-correlation-rules-the-join-kql-operator/ba-p/1041500) |
+
+### Filter (AND) example: ArcSight
+
+Here's a sample filter rule with `AND` conditions in ArcSight.
+
+:::image type="content" source="media/migration-arcsight-detection-rules/rule-1-sample.png" alt-text="Diagram illustrating a sample filter rule." lightbox="media/migration-arcsight-detection-rules/rule-1-sample.png":::
+
+### Filter (AND) example: KQL
+
+Here's the filter rule with `AND` conditions in KQL.
+
+```kusto
+SecurityEvent
+| where EventID == 4728
+| where SubjectUserName =~ "AutoMatedService"
+| where isnotempty(SubjectDomainName)
+```
+This rule assumes that the Microsoft Monitoring Agent (MMA) or the Azure Monitor Agent (AMA) collects the Windows security events. Therefore, the rule uses the Microsoft Sentinel SecurityEvent table.
+
+Consider these best practices:
+- To optimize your queries, avoid case-insensitive operators such as `=~` when possible.
+- Use the case-sensitive `==` operator if the value's casing is known and consistent.
+- Order the filters by starting with the `where` statement that filters out the most data.
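+
+For example, here's a minimal sketch (an illustration, not part of the original rule) that applies these practices to the same detection. It assumes the account name's casing is consistent, so the case-sensitive `==` operator is safe:
+
+```kusto
+SecurityEvent
+// Most selective filter first, so later operators process less data
+| where EventID == 4728
+// == is case-sensitive and cheaper than =~
+| where SubjectUserName == "AutoMatedService"
+| where isnotempty(SubjectDomainName)
+```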
+
+### Filter (OR) example: ArcSight
+
+Here's a sample filter rule with `OR` conditions in ArcSight.
+
+:::image type="content" source="media/migration-arcsight-detection-rules/rule-2-sample.png" alt-text="Diagram illustrating a sample filter rule (or).":::
+
+### Filter (OR) example: KQL
+
+Here are a few ways to write the filter rule with `OR` conditions in KQL.
+
+As a first option, use the `in` statement:
+
+```kusto
+SecurityEvent
+| where SubjectUserName in
+ ("Adm1","ServiceAccount1","AutomationServices")
+```
+As a second option, use the `or` statement:
+
+```kusto
+SecurityEvent
+| where SubjectUserName == "Adm1" or
+SubjectUserName == "ServiceAccount1" or
+SubjectUserName == "AutomationServices"
+```
+While both options are identical in performance, we recommend the first option, which is easier to read.
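+
+If the account names' casing varies across events, one alternative sketch (an illustration, not from the original rule set) is the case-insensitive `in~` operator, which trades some performance for a case-insensitive list match:
+
+```kusto
+SecurityEvent
+// in~ matches values in the list regardless of case
+| where SubjectUserName in~
+    ("Adm1","ServiceAccount1","AutomationServices")
+```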
+
+### Nested filter example: ArcSight
+
+Here's a sample nested filter rule in ArcSight.
+
+:::image type="content" source="media/migration-arcsight-detection-rules/rule-3-sample-1.png" alt-text="Diagram illustrating a sample nested filter rule.":::
+
+Here's a rule for the `/All Filters/Soc Filters/Exclude Valid Users` filter.
+
+:::image type="content" source="media/migration-arcsight-detection-rules/rule-3-sample-2.png" alt-text="Diagram illustrating an Exclude Valid Users filter.":::
+
+### Nested filter example: KQL
+
+Here are a few ways to write the nested filter rule in KQL.
+
+As a first option, use a direct filter with a `where` statement:
+
+```kusto
+SecurityEvent
+| where EventID == 4728
+| where isnotempty(SubjectDomainName) or
+isnotempty(TargetDomainName)
+| where SubjectUserName !~ "AutoMatedService"
+```
+As a second option, use a KQL function:
+
+1. Save the following query as a KQL function with the `ExcludeValidUsers` alias.
+
+ ```kusto
+ SecurityEvent
+ | where EventID == 4728
+ | where isnotempty(SubjectDomainName)
+ | where SubjectUserName =~ "AutoMatedService"
+ | project SubjectUserName
+ ```
+
+1. Use the following query to exclude the users returned by the `ExcludeValidUsers` function.
+
+ ```kusto
+ SecurityEvent
+ | where EventID == 4728
+ | where isnotempty(SubjectDomainName) or
+ isnotempty(TargetDomainName)
+ | where SubjectUserName !in (ExcludeValidUsers)
+ ```
+
+As a third option, use a parameter function:
+
+1. Create a parameter function with `ExcludeValidUsers` as the name and alias.
+2. Define the parameters of the function. For example:
+
+ ```kusto
+   Tbl: (TimeGenerated:datetime, Computer:string,
+ EventID:string, SubjectDomainName:string,
+ TargetDomainName:string, SubjectUserName:string)
+ ```
+
+1. Add the following query as the parameter function body:
+
+ ```kusto
+ Tbl
+ | where SubjectUserName !~ "AutoMatedService"
+ ```
+
+1. Run the following query to invoke the parameter function:
+
+ ```kusto
+ let Events = (
+ SecurityEvent
+ | where EventID == 4728
+ );
+ ExcludeValidUsers(Events)
+ ```
+
+As a fourth option, use the `join` function:
+
+```kusto
+let events = (
+SecurityEvent
+| where EventID == 4728
+| where isnotempty(SubjectDomainName)
+or isnotempty(TargetDomainName)
+);
+let ExcludeValidUsers = (
+SecurityEvent
+| where EventID == 4728
+| where isnotempty(SubjectDomainName)
+| where SubjectUserName =~ "AutoMatedService"
+);
+events
+| join kind=leftanti ExcludeValidUsers on
+$left.SubjectUserName == $right.SubjectUserName
+```
+
+Considerations:
+- We recommend the direct filter with a `where` statement (first option) due to its simplicity. For optimal performance, avoid using `join` (fourth option).
+- To optimize your queries, avoid the case-insensitive `=~` and `!~` operators when possible. Use the `==` and `!=` operators when you know the exact casing of the value.
+
+### Active list (lookup) example: ArcSight
+
+Here's an active list (lookup) rule in ArcSight.
+
+:::image type="content" source="media/migration-arcsight-detection-rules/rule-4-sample.png" alt-text="Diagram illustrating a sample active list rule (lookup).":::
+
+### Active list (lookup) example: KQL
+
+This rule assumes that the Cyber-Ark Exception Accounts watchlist exists in Microsoft Sentinel with an Account field.
+
+```kusto
+let Activelist=(
+_GetWatchlist('Cyber-Ark Exception Accounts')
+| project Account );
+CommonSecurityLog
+| where DestinationUserName in (Activelist)
+| where DeviceVendor == "Cyber-Ark"
+| where DeviceAction == "Get File Request"
+| where isnotempty(DeviceCustomNumber1)
+| project DeviceAction, DestinationUserName,
+TimeGenerated,SourceHostName,
+SourceUserName, DeviceEventClassID
+```
+
+Order the filters so that the `where` statement that filters out the most data comes first.
+
+### Correlation (matching) example: ArcSight
+
+Here's a sample ArcSight rule that defines a condition against a set of base events, using the `Matching Event` statement.
+
+:::image type="content" source="media/migration-arcsight-detection-rules/rule-5-sample.png" alt-text="Diagram illustrating a sample correlation rule (matching).":::
+
+### Correlation (matching) example: KQL
+
+```kusto
+let event1 =(
+SecurityEvent
+| where EventID == 4728
+);
+let event2 =(
+SecurityEvent
+| where EventID == 4729
+);
+event1
+| join kind=inner event2
+on $left.TargetUserName==$right.TargetUserName
+```
+
+Best practices:
+- To optimize your query, ensure that the smaller table is on the left side of the `join`.
+- If the left table is relatively small (up to 100,000 records), add `hint.strategy=broadcast` for better performance.
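+
+For example, here's a sketch of the previous join with the broadcast hint applied, assuming `event1` is the smaller table:
+
+```kusto
+let event1 =(
+SecurityEvent
+| where EventID == 4728
+);
+let event2 =(
+SecurityEvent
+| where EventID == 4729
+);
+event1
+// Broadcast strategy suits a left table of up to roughly 100,000 records
+| join kind=inner hint.strategy=broadcast event2
+on $left.TargetUserName==$right.TargetUserName
+```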
+
+### Correlation (time window) example: ArcSight
+
+Here's a sample ArcSight rule that defines a condition against a set of base events, using the `Matching Event` statement, and uses the `Wait time` filter condition.
+
+:::image type="content" source="media/migration-arcsight-detection-rules/rule-6-sample.png" alt-text="Diagram illustrating a sample correlation rule (time window).":::
+
+### Correlation (time window) example: KQL
+
+```kusto
+let waittime = 10m;
+let lookback = 1d;
+let event1 = (
+SecurityEvent
+| where TimeGenerated > ago(waittime+lookback)
+| where EventID == 4728
+| project event1_time = TimeGenerated,
+event1_ID = EventID, event1_Activity= Activity,
+event1_Host = Computer, TargetUserName,
+event1_UPN=UserPrincipalName,
+AccountUsedToAdd = SubjectUserName
+);
+let event2 = (
+SecurityEvent
+| where TimeGenerated > ago(waittime)
+| where EventID == 4729
+| project event2_time = TimeGenerated,
+event2_ID = EventID, event2_Activity= Activity,
+event2_Host= Computer, TargetUserName,
+event2_UPN=UserPrincipalName,
+ AccountUsedToRemove = SubjectUserName
+);
+event1
+| join kind=inner event2 on TargetUserName
+| where event2_time - event1_time < lookback
+| where tolong(event2_time - event1_time) >= 0
+| project delta_time = event2_time - event1_time,
+ event1_time, event2_time,
+ event1_ID,event2_ID,event1_Activity,
+ event2_Activity, TargetUserName, AccountUsedToAdd,
+ AccountUsedToRemove,event1_Host,event2_Host,
+ event1_UPN,event2_UPN
+```
+
+### Aggregation example: ArcSight
+
+Here's a sample ArcSight rule with aggregation settings: three matches within 10 minutes.
+
+:::image type="content" source="media/migration-arcsight-detection-rules/rule-7-sample.png" alt-text="Diagram illustrating a sample aggregation rule.":::
+
+### Aggregation example: KQL
+
+```kusto
+SecurityEvent
+| summarize Count = count() by SubjectUserName,
+SubjectDomainName, bin(TimeGenerated, 10m)
+| where Count >= 3
+```
+
+## Next steps
+
+In this article, you learned how to map your detection rules from ArcSight to Microsoft Sentinel.
+
+> [!div class="nextstepaction"]
+> [Migrate your SOAR automation](migration-arcsight-automation.md)
diff --git a/articles/sentinel/migration-arcsight-historical-data.md b/articles/sentinel/migration-arcsight-historical-data.md
new file mode 100644
index 0000000000000..dd621effc198f
--- /dev/null
+++ b/articles/sentinel/migration-arcsight-historical-data.md
@@ -0,0 +1,49 @@
+---
+title: "Microsoft Sentinel migration: Export ArcSight data to target platform | Microsoft Docs"
+description: Learn how to export your historical data from ArcSight.
+author: limwainstein
+ms.author: lwainstein
+ms.topic: how-to
+ms.date: 05/03/2022
+---
+
+# Export historical data from ArcSight
+
+This article describes how to export your historical data from ArcSight. After you complete the steps in this article, you can [select a target platform](migration-ingestion-target-platform.md) to host the exported data, and then [select an ingestion tool](migration-ingestion-tool.md) to migrate the data.
+
+:::image type="content" source="media/migration-export-ingest/export-data.png" alt-text="Diagram illustrating steps involved in export and ingestion." border="false":::
+
+You can export data from ArcSight in several ways. Your selection of an export method depends on the data volumes and the deployed ArcSight environment. You can export the logs to a local folder on the ArcSight server or to another server accessible by ArcSight.
+
+To export the data, use one of the following methods:
+- [ArcSight Event Data Transfer tool](#arcsight-event-data-transfer-tool): Use this option for large volumes of data, on the order of terabytes (TB).
+- [lacat utility](#lacat-utility): Use this option for data volumes smaller than 1 TB.
+
+## ArcSight Event Data Transfer tool
+
+Use the Event Data Transfer tool to export data from ArcSight Enterprise Security Manager (ESM) version 7.x. To export data from ArcSight Logger, use the [lacat utility](#lacat-utility).
+
+The Event Data Transfer tool retrieves event data from ESM, which allows you to combine analysis with unstructured data, in addition to the CEF data. The Event Data Transfer tool exports ESM events in three formats: CEF, CSV, and key-value pairs.
+
+To export data using the Event Data Transfer tool:
+
+1. [Install and configure the Event Transfer Tool](https://www.microfocus.com/documentation/arcsight/arcsight-esm-7.6/ESM_AdminGuide/#ESM_AdminGuide/EventDataTransfer/EventDataTransfer.htm).
+1. Configure the logs export to use a CSV format. For example, this command exports data recorded between 15:45 and 16:45 on May 4, 2016 to a CSV file:
+
+ ```
+ arcsight event_transfer -dtype File -dpath <***path***> -format csv -start "05/04/2016 15:45:00" -end "05/04/2016 16:45:00"
+ ```
+
+## lacat utility
+
+Use the lacat utility to export data from ArcSight Logger. lacat exports CEF records from a Logger archive file and prints the records to `stdout`. You can redirect the output to a file, or pipe it to tools such as `grep` or `awk` for further manipulation.
+
+To export data with the lacat utility:
+
+1. [Download the lacat utility](https://github.com/hpsec/lacat).
+1. Follow the examples in the lacat repository on how to run the script.
+
+## Next steps
+
+- [Select a target Azure platform to host the exported historical data](migration-ingestion-target-platform.md)
+- [Select a data ingestion tool](migration-ingestion-tool.md)
+- [Ingest historical data into your target platform](migration-export-ingest.md)
\ No newline at end of file
diff --git a/articles/sentinel/migration-convert-dashboards.md b/articles/sentinel/migration-convert-dashboards.md
new file mode 100644
index 0000000000000..78836617236ac
--- /dev/null
+++ b/articles/sentinel/migration-convert-dashboards.md
@@ -0,0 +1,96 @@
+---
+title: "Convert dashboards to Azure Monitor Workbooks | Microsoft Docs"
+description: Learn how to review, plan, and migrate your current dashboards to Azure Workbooks.
+author: limwainstein
+ms.author: lwainstein
+ms.topic: how-to
+ms.date: 05/03/2022
+ms.custom: ignite-fall-2021
+---
+
+# Convert dashboards to Azure Workbooks
+
+Convert the dashboards in your existing SIEM to [Microsoft Sentinel workbooks](monitor-your-data.md#use-built-in-workbooks). Workbooks are the Microsoft Sentinel adaptation of Azure Monitor workbooks, and provide versatility in creating custom dashboards.
+
+This article describes how to review, plan, and convert the dashboards in your current SIEM to Azure Monitor workbooks.
+
+## Review dashboards in your current SIEM
+
+Review these considerations when designing your migration.
+
+- **Discover dashboards**. Gather information about your dashboards, including design, parameters, data sources, and other details. Identify the purpose or usage of each dashboard.
+- **Select**. Don’t migrate all dashboards without consideration. Focus on dashboards that are critical and used regularly.
+- **Consider permissions**. Consider who the target users for workbooks are. Microsoft Sentinel uses Azure Workbooks, and [access is controlled](../azure-monitor/visualize/workbooks-access-control.md) using Azure role-based access control (RBAC). To create dashboards for audiences outside Azure, for example business executives without Azure access, use a reporting tool such as Power BI.
+
+## Prepare for the dashboard conversion
+
+After reviewing your dashboards, do the following to prepare for your dashboard migration:
+
+- Review all of the visualizations in each dashboard. The dashboards in your current SIEM might contain several charts or panels. It's crucial to review the content of your short-listed dashboards to eliminate any unwanted visualizations or data.
+- Capture the dashboard design and interactivity.
+- Identify any design elements that are important to your users. For example, the layout of the dashboard, the arrangement of the charts or even the font size or color of the graphs.
+- Capture any interactivity such as drilldown, filtering, and others that you need to carry over to Azure Monitor Workbooks. We'll also discuss parameters and user inputs in the next step.
+- Identify required parameters or user inputs. In most cases, you need to define parameters for users to perform search, filtering, or scoping the results (for example, date range, account name and others). Hence, it's crucial to capture the details around parameters. Here are some of the key points to help you with collecting the parameter requirements:
+ - The type of parameter for users to perform selection or input. For example, date range, text, or others.
+ - How the parameters are represented, such as drop-down, text box, or others.
+ - The expected value format, for example, time, string, integer, or others.
+ - Other properties, such as the default value, allow multi-select, conditional visibility, or others.
+
+## Convert dashboards
+
+Perform the following tasks in Azure Workbooks and Microsoft Sentinel to convert your dashboards.
+
+#### 1. Identify data sources
+
+Azure Monitor workbooks are [compatible with a large number of data sources](../azure-monitor/visualize/workbooks-data-sources.md). In most cases, use the Azure Monitor Logs data source and use Kusto Query Language (KQL) queries to visualize the underlying logs in your Microsoft Sentinel workspace.
+
+#### 2. Construct or review KQL queries
+
+In this step, you mainly work with KQL to visualize your data. You can construct and test your queries in the Microsoft Sentinel Logs page before converting them to Azure Monitor workbooks. Before finalizing your KQL queries, always review and tune the queries to improve query performance. Optimized queries:
+- Run faster and reduce the overall duration of the query execution.
+- Have a smaller chance of being throttled or rejected.
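+
+For example, here's a minimal sketch of a query that could back a workbook time chart, assuming Windows security events flow into the `SecurityEvent` table:
+
+```kusto
+SecurityEvent
+| where TimeGenerated > ago(7d)
+// Hourly event volume; the render hint drives the chart visualization
+| summarize EventCount = count() by bin(TimeGenerated, 1h)
+| render timechart
+```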
+
+Learn how to optimize KQL queries:
+- [KQL query best practices](/azure/data-explorer/kusto/query/best-practices)
+- [Optimize queries in Azure Monitor Logs](../azure-monitor/logs/query-optimization.md)
+- [Optimizing KQL performance (webinar)](https://youtu.be/jN1Cz0JcLYU)
+
+#### 3. Create or update the workbook
+
+[Create](tutorial-monitor-your-data.md#create-new-workbook) a workbook, update the workbook, or clone an existing workbook so that you don’t have to start from scratch. Also, specify how the data or visualizations will be represented, arranged, and [grouped](../azure-monitor/visualize/workbooks-groups.md). There are two common designs:
+
+- Vertical workbook
+- Tabbed workbook
+
+#### 4. Create or update workbook parameters or user inputs
+
+By the time you arrive at this stage, you should have [identified the required parameters](#prepare-for-the-dashboard-conversion). With parameters, you can collect input from the consumers and reference the input in other parts of the workbook. This input is typically used to scope the result set and set the correct visualization, and it allows you to build interactive reports and experiences.
+
+Workbooks allow you to control how your parameter controls are presented to consumers. For example, you can select whether the controls are presented as a text box or a drop-down list, and whether they allow a single selection or multiple selections. You can also select which values to use, from text, JSON, KQL, or Azure Resource Graph, and more.
+
+Review the [supported workbook parameters](../azure-monitor/visualize/workbooks-parameters.md). You can reference these parameter values in other parts of workbooks either via bindings or value expansions.
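+
+For example, a drop-down parameter is often populated from a short KQL query. Here's a sketch that lists the computers seen during the last day, assuming the `SecurityEvent` table is populated:
+
+```kusto
+SecurityEvent
+| where TimeGenerated > ago(1d)
+// Distinct computer names feed the drop-down's selectable values
+| distinct Computer
+| order by Computer asc
+```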
+
+#### 5. Create or update visualizations
+
+Workbooks provide a rich set of capabilities for visualizing your data. Review these detailed examples of each visualization type.
+
+- [Text](../azure-monitor/visualize/workbooks-text-visualizations.md)
+- [Charts](../azure-monitor/visualize/workbooks-chart-visualizations.md)
+- [Grids](../azure-monitor/visualize/workbooks-grid-visualizations.md)
+- [Tiles](../azure-monitor/visualize/workbooks-tile-visualizations.md)
+- [Trees](../azure-monitor/visualize/workbooks-tree-visualizations.md)
+- [Graphs](../azure-monitor/visualize/workbooks-graph-visualizations.md)
+- [Map](../azure-monitor/visualize/workbooks-map-visualizations.md)
+- [Honey comb](../azure-monitor/visualize/workbooks-honey-comb.md)
+- [Composite bar](../azure-monitor/visualize/workbooks-composite-bar.md)
+
+#### 6. Preview and save the workbook
+
+After you save your workbook, specify any parameters and validate the results. You can also try the [auto refresh](tutorial-monitor-your-data.md#refresh-your-workbook-data) feature, or print the workbook and [save it as a PDF](monitor-your-data.md#print-a-workbook-or-save-as-pdf).
+
+## Next steps
+
+In this article, you learned how to convert your dashboards to Azure workbooks.
+
+> [!div class="nextstepaction"]
+> [Update SOC processes](migration-security-operations-center-processes.md)
\ No newline at end of file
diff --git a/articles/sentinel/migration-export-ingest.md b/articles/sentinel/migration-export-ingest.md
new file mode 100644
index 0000000000000..06b01c16960f8
--- /dev/null
+++ b/articles/sentinel/migration-export-ingest.md
@@ -0,0 +1,72 @@
+---
+title: "Microsoft Sentinel migration: Ingest data into target platform | Microsoft Docs"
+description: Learn how to ingest historical data into your selected target platform.
+author: limwainstein
+ms.author: lwainstein
+ms.topic: how-to
+ms.date: 05/03/2022
+---
+
+# Ingest historical data into your target platform
+
+In previous articles, you [selected a target platform](migration-ingestion-target-platform.md) for your historical data. You also selected [a tool to transfer your data](migration-ingestion-tool.md) and stored the historical data in a staging location. You can now start to ingest the data into the target platform.
+
+This article describes how to ingest your historical data into your selected target platform.
+
+## Export data from the legacy SIEM
+
+In general, SIEMs can export or dump data to a file in your local file system, so you can use this method to extract the historical data. It’s also important to set up a staging location for your exported files. The ingestion tool that you use can then copy the files from the staging location to the target platform.
+
+This diagram shows the high-level export and ingestion process.
+
+:::image type="content" source="media/migration-export-ingest/export-data.png" alt-text="Diagram illustrating steps involved in export and ingestion." lightbox="media/migration-export-ingest/export-data.png" border="false":::
+
+To export data from your current SIEM, see one of the following sections:
+- [Export data from ArcSight](migration-arcsight-historical-data.md)
+- [Export data from Splunk](migration-splunk-historical-data.md)
+- [Export data from QRadar](migration-qradar-historical-data.md)
+
+## Ingest to Azure Data Explorer
+
+To ingest your historical data into Azure Data Explorer (ADX) (option 1 in the [diagram above](#export-data-from-the-legacy-siem)):
+
+1. [Install and configure LightIngest](/azure/data-explorer/lightingest) on the system where logs are exported, or install LightIngest on another system that has access to the exported logs. LightIngest supports Windows only.
+1. If you don't have an existing ADX cluster, create a new cluster and copy the connection string. Learn how to [set up ADX](/azure/data-explorer/create-cluster-database-portal).
+1. In ADX, create tables and define a schema for the CSV format (or JSON for QRadar). Learn how to create a table and define a schema [with sample data](/azure/data-explorer/ingest-sample-data) or [without sample data](/azure/data-explorer/one-click-table).
+1. [Run LightIngest](/azure/data-explorer/lightingest#run-lightingest) with the folder path that includes the exported logs as the source, and the ADX connection string as the output. When you run LightIngest, provide the target ADX table name, set the pattern argument to `*.csv`, and set the format to `csv` (or `json` for QRadar), as in the sketch below.
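+
+Here's a hypothetical invocation under those settings. The cluster URI, database, table, and source path are placeholders, and you should verify the exact argument names against the LightIngest documentation for your version:
+
+```
+LightIngest.exe "https://ingest-<cluster>.<region>.kusto.windows.net;Fed=True" -database:<database> -table:<table> -source:"C:\ExportedLogs" -pattern:"*.csv" -format:csv
+```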
+
+## Ingest data to Microsoft Sentinel Basic Logs
+
+To ingest your historical data into Microsoft Sentinel Basic Logs (option 2 in the [diagram above](#export-data-from-the-legacy-siem)):
+
+1. If you don't have an existing Log Analytics workspace, create a new workspace and [install Microsoft Sentinel](quickstart-onboard.md#enable-microsoft-sentinel-).
+1. [Create an App registration to authenticate against the API](../azure-monitor/logs/tutorial-custom-logs.md#configure-application).
+1. [Create a data collection endpoint](../azure-monitor/logs/tutorial-custom-logs.md#create-data-collection-endpoint). This endpoint acts as the API endpoint that accepts the data.
+1. [Create a custom log table](../azure-monitor/logs/tutorial-custom-logs.md#add-custom-log-table) to store the data, and provide a data sample. In this step, you can also define a transformation before the data is ingested.
+1. [Collect information from the data collection rule](../azure-monitor/logs/tutorial-custom-logs.md#collect-information-from-dcr) and assign permissions to the rule.
+1. [Change the table from Analytics to Basic Logs](../azure-monitor/logs/basic-logs-configure.md).
+1. Run the [Custom Log Ingestion script](https://github.com/Azure/Azure-Sentinel/tree/master/Tools/CustomLogsIngestion-DCE-DCR). The script asks for the following details:
+ - Path to the log files to ingest
+ - Azure AD tenant ID
+ - Application ID
+ - Application secret
+ - DCE endpoint
+ - DCR immutable ID
+ - Data stream name from the DCR
+
+ The script returns the number of events that have been sent to the workspace.
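+
+After ingestion, you can sanity-check the results with a quick query against the destination table; `LegacySiemData_CL` here is a hypothetical custom table name:
+
+```kusto
+LegacySiemData_CL
+// Confirm the row count and time range of the ingested historical data
+| summarize IngestedRows = count(),
+    OldestRecord = min(TimeGenerated),
+    NewestRecord = max(TimeGenerated)
+```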
+
+## Ingest to Azure Blob Storage
+
+To ingest your historical data into Azure Blob Storage (option 3 in the [diagram above](#export-data-from-the-legacy-siem)):
+
+1. [Install and configure AzCopy](../storage/common/storage-use-azcopy-v10.md) on the system to which you exported the logs. Alternatively, install AzCopy on another system that has access to the exported logs.
+1. [Create an Azure Blob Storage account](../storage/common/storage-account-create.md) and copy the authorized [Azure Active Directory](../storage/common/storage-use-azcopy-v10.md#option-1-use-azure-active-directory) credentials or [Shared Access Signature](../storage/common/storage-use-azcopy-v10.md#option-2-use-a-sas-token) token.
+1. [Run AzCopy](../storage/common/storage-use-azcopy-v10.md#run-azcopy) with the folder path that includes the exported logs as the source, and the Azure Blob Storage connection string as the output.
+
+## Next steps
+
+In this article, you learned how to ingest your data into the target platform.
+
+> [!div class="nextstepaction"]
+> [Convert your dashboards to workbooks](migration-convert-dashboards.md)
\ No newline at end of file
diff --git a/articles/sentinel/migration-ingestion-target-platform.md b/articles/sentinel/migration-ingestion-target-platform.md
new file mode 100644
index 0000000000000..6e8b360a185fb
--- /dev/null
+++ b/articles/sentinel/migration-ingestion-target-platform.md
@@ -0,0 +1,93 @@
+---
+title: "Microsoft Sentinel migration: Select a target Azure platform to host exported data | Microsoft Docs"
+description: Select a target Azure platform to host the exported historical data
+author: limwainstein
+ms.author: lwainstein
+ms.topic: how-to
+ms.date: 05/03/2022
+---
+
+# Select a target Azure platform to host the exported historical data
+
+One of the important decisions you make during your migration process is where to store your historical data. To make this decision, you need to understand and be able to compare the various target platforms.
+
+This article compares target platforms in terms of performance, cost, usability, and management overhead.
+
+> [!NOTE]
+> The considerations in this table only apply to historical log migration, and don't apply in other scenarios, such as long-term retention.
+
+| |[Basic Logs/Archive](../azure-monitor/logs/basic-logs-configure.md) |[Azure Data Explorer (ADX)](/azure/data-explorer/data-explorer-overview) |[Azure Blob Storage](../storage/blobs/storage-blobs-overview.md) |[ADX + Azure Blob Storage](../azure-monitor/logs/azure-data-explorer-query-storage.md) |
+|---------|---------|---------|---------|---------|
+|**Capabilities**: |• Apply most of the existing Azure Monitor Logs experiences at a lower cost.<br>• Basic Logs are retained for eight days, and are then automatically transferred to the archive (according to the original retention period).<br>• Use [search jobs](../azure-monitor/logs/search-jobs.md) to search across petabytes of data and find specific events.<br>• For deep investigations on a specific time range, [restore data from the archive](../azure-monitor/logs/restore.md). The data is then available in the hot cache for further analytics. |• Both ADX and Microsoft Sentinel use the Kusto Query Language (KQL), allowing you to query, aggregate, or correlate data in both platforms. For example, you can run a KQL query from Microsoft Sentinel to [join data stored in ADX with data stored in Log Analytics](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md).<br>• With ADX, you have substantial control over the cluster size and configuration. For example, you can create a larger cluster to achieve higher ingestion throughput, or create a smaller cluster to control your costs. |• Blob storage is optimized for storing massive amounts of unstructured data.<br>• Offers competitive costs.<br>• Suitable for scenarios where your organization doesn't prioritize accessibility or performance, such as when the organization must align with compliance or audit requirements. |• Data is stored in Blob Storage, which is low in cost.<br>• You use ADX to query the data in KQL, allowing you to easily access the data. [Learn how to query Azure Monitor data with ADX](../azure-monitor/logs/azure-data-explorer-query-storage.md). |
+|**Usability**: |**Great**<br>The archive and search options are simple to use and accessible from the Microsoft Sentinel portal. However, the data isn't immediately available for queries. You need to perform a search to retrieve the data, which might take some time, depending on the amount of data being scanned and returned. |**Good**<br>Fairly easy to use in the context of Microsoft Sentinel. For example, you can use an Azure workbook to visualize data spread across both Microsoft Sentinel and ADX. You can also query ADX data from the Microsoft Sentinel portal using the [ADX proxy](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md). |**Poor**<br>With historical data migrations, you might have to deal with millions of files, and exploring the data becomes a challenge. |**Fair**<br>While using the `externaldata` operator is very challenging with large numbers of blobs to reference, using external ADX tables eliminates this issue. The external table definition understands the blob storage folder structure, and allows you to transparently query the data contained in many different blobs and folders. |
+|**Management overhead**: |**Fully managed**<br>The search and archive options are fully managed and don't add management overhead. |**High**<br>ADX is external to Microsoft Sentinel, which requires monitoring and maintenance. |**Low**<br>While this platform requires little maintenance, selecting it adds monitoring and configuration tasks, such as setting up lifecycle management. |**Medium**<br>With this option, you maintain and monitor ADX and Azure Blob Storage, both of which are external components to Microsoft Sentinel. While ADX can be shut down at times, consider the extra management overhead with this option. |
+|**Performance**: |**Medium**<br>You typically interact with Basic Logs in the archive using [search jobs](../azure-monitor/logs/search-jobs.md), which are suitable when you want to maintain access to the data but don't need immediate access. |**High to low**<br>• The query performance of an ADX cluster depends on the number of nodes in the cluster, the cluster virtual machine SKU, data partitioning, and more.<br>• As you add nodes to the cluster, the performance improves, with added cost.<br>• If you use ADX, we recommend that you configure your cluster size to balance performance and cost. This configuration depends on your organization's needs, including how fast your migration needs to complete, how often the data is accessed, and the expected response time. |**Low**<br>Offers two performance tiers: Premium or Standard. Although both tiers are an option for long-term storage, Standard is more cost-efficient. Learn about [performance and scalability limits](../storage/common/scalability-targets-standard-account.md). |**Low**<br>Because the data resides in Blob Storage, the performance is limited by that platform. |
+|**Cost**: |**High**<br>The cost is composed of two components:<br>• **Ingestion cost**. Every GB of data ingested into Basic Logs is subject to Microsoft Sentinel and Azure Monitor Logs ingestion costs, which add up to approximately $1/GB. See the [pricing details](https://azure.microsoft.com/pricing/details/microsoft-sentinel/).<br>• **Archival cost**. The cost for data in the archive tier adds up to approximately $0.02/GB per month. See the [pricing details](https://azure.microsoft.com/pricing/details/monitor/).<br>In addition to these two cost components, if you need frequent access to the data, extra costs apply when you access data via search jobs. |**High to low**<br>• Because ADX is a cluster of virtual machines, you're charged based on compute, storage, and networking usage, plus an ADX markup (see the [pricing details](https://azure.microsoft.com/pricing/details/data-explorer/)). Therefore, the more nodes you add to your cluster and the more data you store, the higher the cost.<br>• ADX also offers autoscaling capabilities to adapt to the workload on demand. ADX can also benefit from reserved instance pricing. You can run your own cost calculations in the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/). |**Low**<br>With an optimal setup, Azure Blob Storage has the lowest costs. For greater efficiency and cost savings, use [Azure Storage lifecycle management](https://docs.microsoft.com/azure/storage/blobs/lifecycle-management-overview) to automatically move older blobs into cheaper storage tiers. |**Low**<br>ADX only acts as a proxy in this case, so the cluster can be small. In addition, you can shut down the cluster when you don't need access to the data, and start it only when data access is needed. |
+|**How to access data**: |[Search jobs](search-jobs.md) |Direct KQL queries |[externaldata](/azure/data-explorer/kusto/query/externaldata-operator) |Modified KQL queries |
+|**Scenario**: |**Occasional access**<br>Relevant in scenarios where you don’t need to run heavy analytics or trigger analytics rules, and you only need to access the data occasionally. |**Frequent access**<br>Relevant in scenarios where you need to access the data frequently, and need to control how the cluster is sized and configured. |**Compliance/audit**<br>• Optimal for storing massive amounts of unstructured data.<br>• Relevant in scenarios where you don't need quick access to the data or high performance, such as for compliance or audit purposes. |**Occasional access**<br>Relevant in scenarios where you want to benefit from the low cost of Azure Blob Storage, and maintain relatively quick access to the data. |
+|**Complexity**: |Very low |Medium |Low |High |
+|**Readiness**: |Public Preview |GA |GA |GA |
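+
+For example, here's a minimal sketch of accessing blob-stored data with the `externaldata` operator; the column list, storage account, container, file name, and SAS token are placeholders you'd replace with your own:
+
+```kusto
+externaldata (TimeGenerated: datetime, Computer: string, Activity: string)
+[
+    // Hypothetical blob URI; authorize with a SAS token or account key
+    "https://<account>.blob.core.windows.net/<container>/exported-logs.csv;<SAS-token>"
+]
+with (format="csv")
+| where TimeGenerated > ago(30d)
+| summarize count() by Computer
+```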
+
+## General considerations
+
+Now that you know more about the available target platforms, review these main factors to finalize your decision.
+
+- [How will your organization use the ingested logs?](#use-of-ingested-logs)
+- [How fast does the migration need to run?](#migration-speed)
+- [What is the amount of data to ingest?](#amount-of-data)
+- What are the estimated migration costs, during and after migration? See the [platform comparison](#select-a-target-azure-platform-to-host-the-exported-historical-data) to compare the costs.
+
+### Use of ingested logs
+
+Define how your organization will use the ingested logs to guide your selection of the ingestion platform.
+
+Consider these three general scenarios:
+
+- Your organization needs to keep the logs only for compliance or audit purposes. In this case, your organization will rarely access the data. Even if your organization accesses the data, high performance or ease of use aren't a priority.
+- Your organization needs to retain the logs so that your teams can access the logs easily and fairly quickly.
+- Your organization needs to retain the logs so that your teams can access the logs occasionally. Performance and ease of use are secondary.
+
+See the [platform comparison](#select-a-target-azure-platform-to-host-the-exported-historical-data) to understand which platform suits each of these scenarios.
+
+### Migration speed
+
+In some scenarios, you might need to meet a tight deadline, for example, your organization might need to urgently move from the previous SIEM due to a license expiration event.
+
+Review the components and factors that determine the speed of your migration.
+- [Data source](#data-source)
+- [Compute power](#compute-power)
+- [Target platform](#target-platform)
+
+#### Data source
+
+The data source is typically a local file system or cloud storage, for example, S3. A server's storage performance depends on multiple factors, such as disk technology (SSD vs HDD), the nature of the IO requests, and the size of each request.
+
+For example, Azure virtual machine performance ranges from 30 MB per second on smaller VM SKUs to 20 GB per second for some of the storage-optimized SKUs that use NVM Express (NVMe) disks. Learn how to [design your Azure VM for high storage performance](/azure/virtual-machines/premium-storage-performance). You can also apply most of these concepts to on-premises servers.
+
+#### Compute power
+
+In some cases, even if your disk can copy your data quickly, compute power is the bottleneck in the copy process. In these cases, you can choose one of these scaling options:
+
+- **Scale vertically**. Increase the power of a single server by adding CPUs or increasing the CPU speed.
+- **Scale horizontally**. You add more servers, which increases the parallelism of the copy process.
+
+#### Target platform
+
+Each of the target platforms discussed in this section has a different performance profile.
+
+- **Azure Monitor Basic logs**. By default, Basic logs can be pushed to Azure Monitor at a rate of approximately 1 GB per minute. This rate allows you to ingest approximately 1.5 TB per day or 43 TB per month.
+- **Azure Data Explorer**. Ingestion performance varies, depending on the size of the cluster you provision, and the batching settings you apply. [Learn about ingestion best practices](/azure/data-explorer/kusto/management/ingestion-faq), including performance and monitoring.
+- **Azure Blob Storage**. The performance of an Azure Blob Storage account can vary greatly depending on the number and size of the files, the job size, concurrency, and so on. [Learn how to optimize AzCopy performance with Azure Storage](../storage/common/storage-use-azcopy-optimize.md).
+
+### Amount of data
+
+The amount of data is the main factor that affects the duration of the migration process. You should therefore consider how to set up your environment depending on your data set.
+
+To determine the minimum duration of the migration and identify potential bottlenecks, consider the amount of data and the ingestion speed of the target platform. For example, say you select a target platform that can ingest 1 GB per second, and you have to migrate 100 TB. In this case, your migration takes a minimum of 100,000 GB divided by the 1 GB per second ingestion rate, which is 100,000 seconds. Divide the result by 3,600 to get approximately 28 hours. This calculation holds only if the rest of the components in the pipeline, such as the local disk, the network, and the virtual machines, can also sustain a speed of 1 GB per second.
+
+## Next steps
+
+In this article, you learned how to select a target Azure platform to host your exported historical data.
+
+> [!div class="nextstepaction"]
+> [Select a data ingestion tool](migration-ingestion-tool.md)
diff --git a/articles/sentinel/migration-ingestion-tool.md b/articles/sentinel/migration-ingestion-tool.md
new file mode 100644
index 0000000000000..52ed0afbe2955
--- /dev/null
+++ b/articles/sentinel/migration-ingestion-tool.md
@@ -0,0 +1,137 @@
+---
+title: "Microsoft Sentinel migration: Select a data ingestion tool | Microsoft Docs"
+description: Select a tool to transfer your historical data to the selected target platform.
+author: limwainstein
+ms.author: lwainstein
+ms.topic: how-to
+ms.date: 05/03/2022
+---
+
+# Select a data ingestion tool
+
+After you [select a target platform](migration-ingestion-target-platform.md) for your historical data, the next step is to select a tool to transfer your data.
+
+This article describes a set of different tools used to transfer your historical data to the selected target platform. This table lists the tools available for each target platform, and general tools to help you with the ingestion process.
+
+|Azure Monitor Basic Logs/Archive |Azure Data Explorer |Azure Blob Storage |General tools |
+|---------|---------|---------|---------|
+|• [Azure Monitor custom log ingestion tool](#azure-monitor-custom-log-ingestion-tool) • [Direct API](#direct-api) |• [LightIngest](#lightingest) • [Logstash](#logstash) |• [Azure Data Factory or Azure Synapse](#azure-data-factory-or-azure-synapse) • [AzCopy](#azcopy) |• [Azure Data Box](#azure-data-box) • [SIEM data migration accelerator](#siem-data-migration-accelerator) |
+
+## Azure Monitor Basic Logs/Archive
+
+To benefit from lower ingestion prices, before you ingest data into Azure Monitor Basic Logs or Archive, ensure that the table you're writing to is [configured as Basic Logs](../azure-monitor/logs/basic-logs-configure.md#check-table-configuration). Review the [Azure Monitor custom log ingestion tool](#azure-monitor-custom-log-ingestion-tool) and the [direct API](#direct-api) method for Azure Monitor Basic Logs.
+
+### Azure Monitor custom log ingestion tool
+
+The [custom log ingestion tool](https://github.com/Azure/Azure-Sentinel/tree/master/Tools/CustomLogsIngestion-DCE-DCR) is a PowerShell script that sends custom data to an Azure Monitor Logs workspace. You can point the script to the folder where all your log files reside, and the script pushes the data from those files to the workspace. The script accepts log files in CSV or JSON format.
+
+### Direct API
+
+With this option, you [ingest your custom logs into Azure Monitor Logs](../azure-monitor/logs/tutorial-custom-logs.md). You ingest the logs with a PowerShell script that uses a REST API. Alternatively, you can use any other programming language to perform the ingestion, and you can use other Azure services to abstract the compute layer, such as Azure Functions or Azure Logic Apps.
+
+## Azure Data Explorer
+
+You can [ingest data to Azure Data Explorer](/azure/data-explorer/ingest-data-overview) (ADX) in several ways.
+
+The ingestion methods that ADX accepts are based on different components:
+- SDKs for different languages, such as .NET, Go, Python, Java, NodeJS, and APIs.
+- Managed pipelines, such as Event Grid, Event Hubs, and Azure Data Factory.
+- Connectors or plugins, such as Logstash, Kafka, Power Automate, and Apache Spark.
+
+Review [LightIngest](#lightingest) and [Logstash](#logstash), two methods that are better tailored to the data migration use case.
+
+### LightIngest
+
+ADX has developed the [LightIngest utility](/azure/data-explorer/lightingest) specifically for the historical data migration use case. You can use LightIngest to copy data from a local file system or Azure Blob Storage to ADX.
+
+Here are a few main benefits and capabilities of LightIngest:
+
+- Because there's no time constraint on ingestion duration, LightIngest is most useful when you want to ingest large amounts of data.
+- LightIngest is useful when you want to query records according to the time they were created, and not the time they were ingested.
+- You don't need to deal with complex sizing for LightIngest, because the utility doesn't perform the actual copy. LightIngest informs ADX about the blobs that need to be copied, and ADX copies the data.
+
+If you choose LightIngest, review these tips and best practices.
+
+- To speed up your migration and reduce costs, increase the size of your ADX cluster to create more available nodes for ingestion. Decrease the size once the migration is over.
+- For more efficient queries after you ingest the data to ADX, ensure that the copied data uses the timestamp of the original events, not the timestamp from when the data is copied to ADX. You provide the timestamp to LightIngest as part of the file path or file name, using the [CreationTime property](/azure/data-explorer/lightingest#how-to-ingest-data-using-creationtime).
+- If your path or file names don't include a timestamp, you can still instruct ADX to organize the data using a [partitioning policy](/azure/data-explorer/kusto/management/partitioningpolicy).
+
+### Logstash
+
+[Logstash](https://www.elastic.co/products/logstash) is an open source, server-side data processing pipeline that ingests data from many sources simultaneously, transforms the data, and then sends the data to your favorite "stash". Learn how to [ingest data from Logstash to Azure Data Explorer](/azure/data-explorer/ingest-data-logstash). Logstash runs only on Windows machines.
+
+To optimize performance, [configure the Logstash tier size](https://www.elastic.co/guide/en/logstash/current/deploying-and-scaling.html) according to the events per second. We recommend that you use [LightIngest](#lightingest) wherever possible, because LightIngest relies on the ADX cluster computing to perform the copy.
+
+## Azure Blob Storage
+
+You can ingest data to Azure Blob Storage in several ways.
+- [Azure Data Factory or Azure Synapse](../data-factory/connector-azure-blob-storage.md)
+- [AzCopy](../storage/common/storage-use-azcopy-v10.md)
+- [Azure Storage Explorer](/architecture/data-science-process/move-data-to-azure-blob-using-azure-storage-explorer)
+- [Python](../storage/blobs/storage-quickstart-blobs-python.md)
+- [SSIS](/azure/architecture/data-science-process/move-data-to-azure-blob-using-ssis)
+
+Review the Azure Data Factory (ADF) and Azure Synapse methods, which are better tailored to the data migration use case.
+
+### Azure Data Factory or Azure Synapse
+
+To use the Copy activity in Azure Data Factory (ADF) or Synapse pipelines:
+1. Create and configure a self-hosted integration runtime. This component is responsible for copying the data from your on-premises host.
+1. Create linked services for the source data store ([file system](../data-factory/connector-file-system.md?tabs=data-factory#create-a-file-system-linked-service-using-ui)) and the sink data store ([Blob storage](../data-factory/connector-azure-blob-storage.md?tabs=data-factory#create-an-azure-blob-storage-linked-service-using-ui)).
+1. To copy the data, use the [Copy data tool](../data-factory/quickstart-create-data-factory-copy-data-tool.md). Alternatively, you can use methods such as PowerShell, the Azure portal, a .NET SDK, and so on.
+
+### AzCopy
+
+[AzCopy](../storage/common/storage-use-azcopy-v10.md) is a simple command-line utility that copies files to or from storage accounts. AzCopy is available for Windows, Linux, and macOS. Learn how to [copy on-premises data to Azure Blob storage with AzCopy](../storage/common/storage-use-azcopy-v10.md).
+
+You can also use these options to copy the data:
+- Learn how to [optimize the performance](../storage/common/storage-use-azcopy-optimize.md) of AzCopy.
+- Learn how to [configure AzCopy](../storage/common/storage-ref-azcopy-configuration-settings.md).
+- Learn how to use the [copy command](../storage/common/storage-ref-azcopy-copy.md).
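+
+For illustration, here's a sketch of an AzCopy command that uploads a local folder of exported logs; the local path, storage account, container, and SAS token are placeholders:
+
+```
+azcopy copy "C:\ExportedLogs" "https://<storage-account>.blob.core.windows.net/<container>?<SAS-token>" --recursive
+```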
+
+## Azure Data Box
+
+In a scenario where the source SIEM doesn't have good connectivity to Azure, ingesting the data using the tools reviewed in this section might be slow or even impossible. To address this scenario, you can use [Azure Data Box](../databox/data-box-overview.md) to copy the data locally from the customer's data center into an appliance, and then ship that appliance to an Azure data center. While Azure Data Box isn't a replacement for AzCopy or LightIngest, you can use this tool to accelerate the data transfer between the customer data center and Azure.
+
+Azure Data Box offers three different SKUs, depending on the amount of data to migrate:
+
+- [Data Box Disk](../databox/data-box-disk-overview.md)
+- [Data Box](../databox/data-box-overview.md)
+- [Data Box Heavy](../databox/data-box-heavy-overview.md)
+
+After you complete the migration, the data is available in a storage account under one of your Azure subscriptions. You can then use [AzCopy](#azcopy), [LightIngest](#lightingest), or [ADF](#azure-data-factory-or-azure-synapse) to ingest data from the storage account.
+
+## SIEM data migration accelerator
+
+In addition to selecting an ingestion tool, your team needs to invest time in setting up the foundation environment. To ease this process, you can use the [SIEM data migration accelerator](https://aka.ms/siemdatamigration), which automates the following tasks:
+
+- Deploys a Windows virtual machine that will be used to move the logs from the source to the target platform
+- Downloads and extracts the following tools into the virtual machine desktop:
+ - [LightIngest](#lightingest): Used to migrate data to ADX
+ - [Azure Monitor Custom log ingestion tool](#azure-monitor-custom-log-ingestion-tool): Used to migrate data to Log Analytics
+ - [AzCopy](#azcopy): Used to migrate data to Azure Blob Storage
+- Deploys the target platform that will host your historical logs:
+ - Azure Storage account (Azure Blob Storage)
+ - Azure Data Explorer cluster and database
+ - Azure Monitor Logs workspace (Basic Logs; enabled with Microsoft Sentinel)
+
+To use the SIEM data migration accelerator:
+
+1. From the [SIEM data migration accelerator page](https://aka.ms/siemdatamigration), select **Deploy to Azure** at the bottom of the page, and authenticate.
+1. Select **Basics**, select your resource group and location, and then select **Next**.
+1. Select **Migration VM**, and do the following:
+ - Type the virtual machine name, username and password.
+ - Select an existing vNet or create a new vNet for the virtual machine connection.
+ - Select the virtual machine size.
+1. Select **Target platform** and do one of the following:
+ - Skip this step.
+ - Provide the ADX cluster and database name, SKU, and number of nodes.
+ - For Azure Blob Storage accounts, select an existing account. If you don't have an account, provide a new account name, type, and redundancy.
+ - For Azure Monitor Logs, type the name of the new workspace.
+
+## Next steps
+
+In this article, you learned how to select a tool to ingest your data into the target platform.
+
+> [!div class="nextstepaction"]
+> [Ingest your data](migration-export-ingest.md)
\ No newline at end of file
diff --git a/articles/sentinel/migration-qradar-automation.md b/articles/sentinel/migration-qradar-automation.md
new file mode 100644
index 0000000000000..253bf06591dcf
--- /dev/null
+++ b/articles/sentinel/migration-qradar-automation.md
@@ -0,0 +1,85 @@
+---
+title: Migrate IBM Security QRadar SOAR automation to Microsoft Sentinel | Microsoft Docs
+description: Learn how to identify SOAR use cases, and how to migrate your QRadar SOAR automation to Microsoft Sentinel.
+author: limwainstein
+ms.author: lwainstein
+ms.topic: how-to
+ms.date: 05/03/2022
+---
+
+# Migrate IBM Security QRadar SOAR automation to Microsoft Sentinel
+
+Microsoft Sentinel provides Security Orchestration, Automation, and Response (SOAR) capabilities with [automation rules](automate-incident-handling-with-automation-rules.md) and [playbooks](tutorial-respond-threats-playbook.md). Automation rules automate incident handling and response, and playbooks run predetermined sequences of actions to respond to and remediate threats. This article discusses how to identify SOAR use cases, and how to migrate your IBM Security QRadar SOAR automation to Microsoft Sentinel.
+
+Automation rules simplify complex workflows for your incident orchestration processes, and allow you to centrally manage your incident handling automation.
+
+With automation rules, you can:
+- Perform simple automation tasks without necessarily using playbooks. For example, you can assign incidents, tag them, change their status, and close them.
+- Automate responses for multiple analytics rules at once.
+- Control the order of actions that are executed.
+- Run playbooks for those cases where more complex automation tasks are necessary.
+
+## Identify SOAR use cases
+
+Here’s what you need to think about when migrating SOAR use cases from IBM Security QRadar SOAR.
+- **Use case quality**. Choose good use cases for automation. Use cases should be based on procedures that are clearly defined, with minimal variation and a low false-positive rate. Prioritize use cases where automation yields clear efficiency gains.
+- **Manual intervention**. Automated responses can have wide-ranging effects. High-impact automations should include human input to confirm high-impact actions before they're taken.
+- **Binary criteria**. To increase response success, decision points within an automated workflow should be as limited as possible, with binary criteria. Binary criteria reduce the need for human intervention and enhance outcome predictability.
+- **Accurate alerts or data**. Response actions depend on the accuracy of signals such as alerts. Alerts and enrichment sources should be reliable. Microsoft Sentinel resources such as watchlists and reliable threat intelligence can enhance reliability.
+- **Analyst role**. Automate wherever possible, but reserve more complex tasks for analysts, and give them the opportunity to provide input into workflows that require validation. In short, response automation should augment and extend analyst capabilities.
+
+## Migrate SOAR workflow
+
+This section shows how key SOAR concepts in IBM Security QRadar SOAR translate to Microsoft Sentinel components. The section also provides general guidelines for how to migrate each step or component in the SOAR workflow.
+
+:::image type="content" source="media/migration-qradar-automation/qradar-sentinel-soar-workflow.png" alt-text="Diagram displaying the QRadar and Microsoft Sentinel SOAR workflows." lightbox="media/migration-qradar-automation/qradar-sentinel-soar-workflow.png" border="false":::
+
+|Step (in diagram) |IBM Security QRadar SOAR |Microsoft Sentinel |
+|---------|---------|---------|
+|1 |Define rules and conditions. |Define automation rules. |
+|2 |Execute ordered activities. |Execute automation rules containing multiple playbooks. |
+|3 |Execute selected workflows. |Execute other playbooks according to tags applied by playbooks that were executed previously. |
+|4 |Post data to message destinations. |Execute code snippets using inline actions in Logic Apps. |
+
+## Map SOAR components
+
+Review which Microsoft Sentinel or Azure Logic Apps features map to the main QRadar SOAR components.
+
+|QRadar |Microsoft Sentinel/Azure Logic Apps |
+|---------|---------|
+|Rules |[Analytics rules](detect-threats-built-in.md#use-built-in-analytics-rules) attached to playbooks or automation rules |
+|Gateway |[Condition control](../logic-apps/logic-apps-control-flow-conditional-statement.md) |
+|Scripts |[Inline code](../logic-apps/logic-apps-add-run-inline-code.md) |
+|Custom action processors |[Custom API calls](../logic-apps/logic-apps-create-api-app.md) in Azure Logic Apps or third party connectors |
+|Functions |[Azure Function connector](../logic-apps/logic-apps-azure-functions.md) |
+|Message destinations |[Azure Logic Apps with Azure Service Bus](../connectors/connectors-create-api-servicebus.md) |
+|IBM X-Force Exchange |• [Automation > Templates tab](use-playbook-templates.md) • [Content hub catalog](sentinel-solutions-catalog.md) • [GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Block-OnPremADUser) |
+
+## Operationalize playbooks and automation rules in Microsoft Sentinel
+
+Most of the playbooks that you use with Microsoft Sentinel are available in either the [Automation > Templates tab](use-playbook-templates.md), the [Content hub catalog](sentinel-solutions-catalog.md), or [GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Block-OnPremADUser). In some cases, however, you might need to create playbooks from scratch or from existing templates.
+
+You typically build your custom logic app using the Azure Logic Apps Designer feature. The Logic Apps code is based on [Azure Resource Manager (ARM) templates](../azure-resource-manager/templates/overview.md), which facilitate development, deployment, and portability of Azure Logic Apps across multiple environments. To convert your custom playbook into a portable ARM template, you can use the [ARM template generator](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/export-microsoft-sentinel-playbooks-or-azure-logic-apps-with/ba-p/3275898).
+
+Use these resources for cases where you need to build your own playbooks either from scratch or from existing templates.
+- [Automate incident handling in Microsoft Sentinel](automate-incident-handling-with-automation-rules.md)
+- [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md)
+- [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](tutorial-respond-threats-playbook.md)
+- [How to use Microsoft Sentinel for Incident Response, Orchestration and Automation](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/how-to-use-azure-sentinel-for-incident-response-orchestration/ba-p/2242397)
+- [Adaptive Cards to enhance incident response in Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/using-microsoft-teams-adaptive-cards-to-enhance-incident/ba-p/3330941)
+
+## SOAR post migration best practices
+
+Here are best practices you should take into account after your SOAR migration:
+
+- After you migrate your playbooks, test the playbooks extensively to ensure that the migrated actions work as expected.
+- Periodically review your automations to explore ways to further simplify or enhance your SOAR. Microsoft Sentinel constantly adds new connectors and actions that can help you to further simplify or increase the effectiveness of your current response implementations.
+- Monitor the performance of your playbooks using the [Playbooks health monitoring workbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/what-s-new-monitoring-your-logic-apps-playbooks-in-azure/ba-p/1873211).
+- Use managed identities and service principals: Authenticate against various Azure services within your Logic Apps, store the secrets in Azure Key Vault, and obscure the output of the flow execution. We also recommend that you [monitor the activities of these service principals](https://techcommunity.microsoft.com/t5/azure-sentinel/non-interactive-logins-minimizing-the-blind-spot/ba-p/2287932).
+
+## Next steps
+
+In this article, you learned how to map your SOAR automation from IBM Security QRadar SOAR to Microsoft Sentinel.
+
+> [!div class="nextstepaction"]
+> [Export your historical data](migration-qradar-historical-data.md)
\ No newline at end of file
diff --git a/articles/sentinel/migration-qradar-detection-rules.md b/articles/sentinel/migration-qradar-detection-rules.md
new file mode 100644
index 0000000000000..887524e57e1e7
--- /dev/null
+++ b/articles/sentinel/migration-qradar-detection-rules.md
@@ -0,0 +1,428 @@
+---
+title: Migrate QRadar detection rules to Microsoft Sentinel | Microsoft Docs
+description: Identify, compare, and migrate your QRadar detection rules to Microsoft Sentinel built-in rules.
+author: limwainstein
+ms.author: lwainstein
+ms.topic: how-to
+ms.date: 05/03/2022
+---
+
+# Migrate QRadar detection rules to Microsoft Sentinel
+
+This article describes how to identify, compare, and migrate your QRadar detection rules to Microsoft Sentinel built-in rules.
+
+## Identify and migrate rules
+
+Microsoft Sentinel uses machine learning analytics to create high-fidelity and actionable incidents, and some of your existing detections may be redundant in Microsoft Sentinel. Therefore, don't migrate all of your detection and analytics rules blindly. Review these considerations as you identify your existing detection rules.
+
+- Make sure to select use cases that justify rule migration, considering business priority and efficiency.
+- Check that you [understand Microsoft Sentinel rule types](detect-threats-built-in.md#view-built-in-detections).
+- Check that you understand the [rule terminology](#compare-rule-terminology).
+- Review any rules that haven't triggered any alerts in the past 6-12 months, and determine whether they're still relevant.
+- Eliminate low-level threats or alerts that you routinely ignore.
+- Use existing functionality, and check whether Microsoft Sentinel’s [built-in analytics rules](https://github.com/Azure/Azure-Sentinel/tree/master/Detections) might address your current use cases. Because Microsoft Sentinel uses machine learning analytics to produce high-fidelity and actionable incidents, it’s likely that some of your existing detections won’t be required anymore.
+- Confirm connected data sources and review your data connection methods. Revisit data collection conversations to ensure data depth and breadth across the use cases you plan to detect.
+- Explore community resources such as the [SOC Prime Threat Detection Marketplace](https://my.socprime.com/tdm/) to check whether your rules are available.
+- Consider whether an online query converter such as Uncoder.io might work for your rules.
+- If rules aren’t available or can’t be converted, they need to be created manually, using a KQL query. Review the [rules mapping](#map-and-compare-rule-samples) to create new queries.
+
+Learn more about [best practices for migrating detection rules](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/best-practices-for-migrating-detection-rules-from-arcsight/ba-p/2216417).
+
+**To migrate your analytics rules to Microsoft Sentinel**:
+
+1. Verify that you have a testing system in place for each rule you want to migrate.
+
+ 1. **Prepare a validation process** for your migrated rules, including full test scenarios and scripts.
+
+ 1. **Ensure that your team has useful resources** to test your migrated rules.
+
+ 1. **Confirm that you have any required data sources connected,** and review your data connection methods.
+
+1. Verify whether your detections are available as built-in templates in Microsoft Sentinel:
+
+ - **If the built-in rules are sufficient**, use built-in rule templates to create rules for your own workspace.
+
+ In Microsoft Sentinel, go to the **Configuration > Analytics > Rule templates** tab, and create and update each relevant analytics rule.
+
+ For more information, see [Detect threats out-of-the-box](detect-threats-built-in.md).
+
+ - **If you have detections that aren't covered by Microsoft Sentinel's built-in rules**, try an online query converter, such as [Uncoder.io](https://uncoder.io/) to convert your queries to KQL.
+
+ Identify the trigger condition and rule action, and then construct and review your KQL query.
+
+ - **If neither the built-in rules nor an online rule converter is sufficient**, you'll need to create the rule manually. In such cases, use the following steps to start creating your rule:
+
+ 1. **Identify the data sources you want to use in your rule**. You'll want to create a mapping table between data sources and data tables in Microsoft Sentinel to identify the tables you want to query.
+
+ 1. **Identify any attributes, fields, or entities** in your data that you want to use in your rules.
+
+ 1. **Identify your rule criteria and logic**. At this stage, you may want to use rule templates as samples for how to construct your KQL queries.
+
+ Consider filters, correlation rules, active lists, reference sets, watchlists, detection anomalies, aggregations, and so on. You might use references provided by your legacy SIEM to understand [how to best map your query syntax](#map-and-compare-rule-samples).
+
+ 1. **Identify the trigger condition and rule action, and then construct and review your KQL query**. When reviewing your query, consider KQL optimization guidance resources.
+
+1. Test the rule with each of your relevant use cases. If it doesn't provide expected results, you may want to review the KQL and test it again.
+
+1. When you're satisfied, you can consider the rule migrated. Create a playbook for your rule action as needed. For more information, see [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md).
+
+Learn more about analytics rules.
+
+- [**Create custom analytics rules to detect threats**](detect-threats-custom.md). Use [alert grouping](detect-threats-custom.md#alert-grouping) to reduce alert fatigue by grouping alerts that occur within a given timeframe.
+- [**Map data fields to entities in Microsoft Sentinel**](map-data-fields-to-entities.md) to enable SOC engineers to define entities as part of the evidence to track during an investigation. Entity mapping also makes it possible for SOC analysts to take advantage of an intuitive [investigation graph](investigate-cases.md#use-the-investigation-graph-to-deep-dive) that can help reduce time and effort.
+- [**Investigate incidents with UEBA data**](investigate-with-ueba.md), as an example of how to use evidence to surface events, alerts, and any bookmarks associated with a particular incident in the incident preview pane.
+- [**Kusto Query Language (KQL)**](/azure/data-explorer/kusto/query/), which you can use to send read-only requests to your [Log Analytics](../azure-monitor/logs/log-analytics-tutorial.md) database to process data and return results. KQL is also used across other Microsoft services, such as [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender) and [Application Insights](../azure-monitor/app/app-insights-overview.md).
+
+## Compare rule terminology
+
+This table helps you to clarify the concept of a rule in Microsoft Sentinel compared to QRadar.
+
+| |QRadar |Microsoft Sentinel |
+|---------|---------|---------|
+|**Rule type** |• Events • Flow • Common • Offense • Anomaly detection rules |• Scheduled query • Fusion • Microsoft Security • Machine Learning (ML) Behavior Analytics |
+|**Criteria** |Define in test condition |Define in KQL |
+|**Trigger condition** |Define in rule |Threshold: Number of query results |
+|**Action** |• Create offense • Dispatch new event • Add to reference set or data • And more |• Create alert or incident • Integrates with Logic Apps |
+
+## Map and compare rule samples
+
+Use these samples to compare and map rules from QRadar to Microsoft Sentinel in various scenarios.
+
+|Rule |Syntax |Sample detection rule (QRadar) |Sample KQL query |Resources |
+|---------|---------|---------|---------|---------|
+|Common property tests |[QRadar syntax](#common-property-tests-syntax) |• [Regular expression example](#common-property-tests-regular-expression-example-qradar) • [AQL filter query example](#common-property-tests-aql-filter-query-example-qradar) • [equals/not equals example](#common-property-tests-equalsnot-equals-example-qradar) |• [Regular expression example](#common-property-tests-regular-expression-example-kql) • [AQL filter query example](#common-property-tests-aql-filter-query-example-kql) • [equals/not equals example](#common-property-tests-equalsnot-equals-example-kql) |• Regular expression: [matches regex](/azure/data-explorer/kusto/query/re2) • AQL filter query: [string operators](/azure/data-explorer/kusto/query/datatypes-string-operators#operators-on-strings) • equals/not equals: [String operators](/azure/data-explorer/kusto/query/datatypes-string-operators#operators-on-strings) |
+|Date/time tests |[QRadar syntax](#datetime-tests-syntax) |• [Selected day of the month example](#datetime-tests-selected-day-of-the-month-example-qradar) • [Selected day of the week example](#datetime-tests-selected-day-of-the-week-example-qradar) • [after/before/at example](#datetime-tests-afterbeforeat-example-qradar) |• [Selected day of the month example](#datetime-tests-selected-day-of-the-month-example-kql) • [Selected day of the week example](#datetime-tests-selected-day-of-the-week-example-kql) • [after/before/at example](#datetime-tests-afterbeforeat-example-kql) |• [Date and time operators](/azure/data-explorer/kusto/query/samples?pivots=azuremonitor#date-and-time-operations) • Selected day of the month: [dayofmonth()](/azure/data-explorer/kusto/query/dayofmonthfunction) • Selected day of the week: [dayofweek()](/azure/data-explorer/kusto/query/dayofweekfunction) • after/before/at: [format_datetime()](/azure/data-explorer/kusto/query/format-datetimefunction) |
+|Event property tests |[QRadar syntax](#event-property-tests-syntax) |• [IP protocol example](#event-property-tests-ip-protocol-example-qradar) • [Event Payload string example](#event-property-tests-event-payload-string-example-qradar) |• [IP protocol example](#event-property-tests-ip-protocol-example-kql) • [Event Payload string example](#event-property-tests-event-payload-string-example-kql) |• IP protocol: [String operators](/azure/data-explorer/kusto/query/datatypes-string-operators#operators-on-strings) • Event Payload string: [has](/azure/data-explorer/kusto/query/datatypes-string-operators) |
+|Functions: counters |[QRadar syntax](#functions-counters-syntax) |[Event property and time example](#counters-event-property-and-time-example-qradar) |[Event property and time example](#counters-event-property-and-time-example-kql) |[summarize](/azure/data-explorer/kusto/query/summarizeoperator) |
+|Functions: negative conditions |[QRadar syntax](#functions-negative-conditions-syntax) |[Negative conditions example](#negative-conditions-example-qradar) |[Negative conditions example](#negative-conditions-example-kql) |• [join()](/azure/data-explorer/kusto/query/joinoperator?pivots=azuredataexplorer) • [String operators](/azure/data-explorer/kusto/query/datatypes-string-operators#operators-on-strings) • [Numerical operators](/azure/data-explorer/kusto/query/numoperators) |
+|Functions: simple |[QRadar syntax](#functions-simple-conditions-syntax) |[Simple conditions example](#simple-conditions-example-qradar) |[Simple conditions example](#simple-conditions-example-kql) |[or](/azure/data-explorer/kusto/query/logicaloperators) |
+|IP/port tests |[QRadar syntax](#ipport-tests-syntax) |• [Source port example](#ipport-tests-source-port-example-qradar) • [Source IP example](#ipport-tests-source-ip-example-qradar) |• [Source port example](#ipport-tests-source-port-example-kql) • [Source IP example](#ipport-tests-source-ip-example-kql) | |
+|Log source tests |[QRadar syntax](#log-source-tests-syntax) |[Log source example](#log-source-example-qradar) |[Log source example](#log-source-example-kql) | |
+
+### Common property tests syntax
+
+Here's the QRadar syntax for a common property tests rule.
+
+:::image type="content" source="media/migration-qradar-detection-rules/rule-1-syntax.png" alt-text="Diagram illustrating a common property test rule syntax.":::
+
+### Common property tests: Regular expression example (QRadar)
+
+Here's the syntax for a sample QRadar common property tests rule that uses a regular expression:
+
+```
+when any of match
+```
+Here's the sample rule in QRadar.
+
+:::image type="content" source="media/migration-qradar-detection-rules/rule-1-sample.png" alt-text="Diagram illustrating a common property test rule that uses a regular expression.":::
+
+### Common property tests: Regular expression example (KQL)
+
+Here's the common property tests rule with a regular expression in KQL.
+
+```kusto
+CommonSecurityLog
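+// \d{1,5} matches any 1- to 5-digit sequence, so this flags every populated numeric port field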
+| where tostring(SourcePort) matches regex @"\d{1,5}" or tostring(DestinationPort) matches regex @"\d{1,5}"
+```
+### Common property tests: AQL filter query example (QRadar)
+
+Here's the syntax for a sample QRadar common property tests rule that uses an AQL filter query.
+
+```
+when the event matches AQL filter query
+```
+Here's the sample rule in QRadar.
+
+:::image type="content" source="media/migration-qradar-detection-rules/rule-1-sample-aql.png" alt-text="Diagram illustrating a common property test rule that uses an A Q L filter query.":::
+
+### Common property tests: AQL filter query example (KQL)
+
+Here's the common property tests rule with an AQL filter query in KQL.
+
+```kusto
+CommonSecurityLog
+| where SourceIP == '10.1.1.10'
+```
+### Common property tests: equals/not equals example (QRadar)
+
+Here's the syntax for a sample QRadar common property tests rule that uses the `equals` or `not equals` operator.
+
+```
+and when
+```
+Here's the sample rule in QRadar.
+
+:::image type="content" source="media/migration-qradar-detection-rules/rule-1-sample-equals.png" alt-text="Diagram illustrating a common property test rule that uses equals/not equals.":::
+
+### Common property tests: equals/not equals example (KQL)
+
+Here's the common property tests rule with the `equals` or `not equals` operator in KQL.
+
+```kusto
+CommonSecurityLog
+| where SourceIP == DestinationIP
+```
+### Date/time tests syntax
+
+Here's the QRadar syntax for a date/time tests rule.
+
+:::image type="content" source="media/migration-qradar-detection-rules/rule-2-syntax.png" alt-text="Diagram illustrating a date/time tests rule syntax.":::
+
+### Date/time tests: Selected day of the month example (QRadar)
+
+Here's the syntax for a sample QRadar date/time tests rule that uses a selected day of the month.
+
+```
+and when the event(s) occur the day of the month
+```
+Here's the sample rule in QRadar.
+
+:::image type="content" source="media/migration-qradar-detection-rules/rule-2-sample-selected-day.png" alt-text="Diagram illustrating a date/time tests rule that uses a selected day.":::
+
+### Date/time tests: Selected day of the month example (KQL)
+
+Here's the date/time tests rule with a selected day of the month in KQL.
+
+```kusto
+SecurityEvent
+ | where dayofmonth(TimeGenerated) < 4
+```
+### Date/time tests: Selected day of the week example (QRadar)
+
+Here's the syntax for a sample QRadar date/time tests rule that uses a selected day of the week:
+
+```
+and when the event(s) occur on any of
+```
+Here's the sample rule in QRadar.
+
+:::image type="content" source="media/migration-qradar-detection-rules/rule-2-sample-selected-day-week.png" alt-text="Diagram illustrating a date/time tests rule that uses a selected day of the week.":::
+
+### Date/time tests: Selected day of the week example (KQL)
+
+Here's the date/time tests rule with a selected day of the week in KQL.
+
+```kusto
+SecurityEvent
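+ // dayofweek() returns a timespan from Sunday (0d), so 3d .. 5d covers Wednesday through Friday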
+ | where dayofweek(TimeGenerated) between (3d .. 5d)
+```
+### Date/time tests: after/before/at example (QRadar)
+
+Here's the syntax for a sample QRadar date/time tests rule that uses the `after`, `before`, or `at` operator.
+
+```
+and when the event(s) occur
+```
+Here's the sample rule in QRadar.
+
+:::image type="content" source="media/migration-qradar-detection-rules/rule-2-sample-after-before-at.png" alt-text="Diagram illustrating a date/time tests rule that uses the after/before/at operator.":::
+
+### Date/time tests: after/before/at example (KQL)
+
+Here's the date/time tests rule that uses the `after`, `before`, or `at` operator in KQL.
+
+```kusto
+SecurityEvent
+| where format_datetime(TimeGenerated,'HH:mm')=="23:55"
+```
+`TimeGenerated` is in UTC/GMT.
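+
+If your QRadar rule evaluates times in a local time zone, convert `TimeGenerated` before comparing. Here's a minimal sketch, assuming the `US/Eastern` time zone as an illustrative value:
+
+```kusto
+SecurityEvent
+// 'US/Eastern' is an illustrative time zone; substitute your own
+| extend LocalTime = datetime_utc_to_local(TimeGenerated, 'US/Eastern')
+| where format_datetime(LocalTime, 'HH:mm') == "23:55"
+```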
+
+### Event property tests syntax
+
+Here's the QRadar syntax for an event property tests rule.
+
+:::image type="content" source="media/migration-qradar-detection-rules/rule-3-syntax.png" alt-text="Diagram illustrating an event property tests rule syntax.":::
+
+### Event property tests: IP protocol example (QRadar)
+
+Here's the syntax for a sample QRadar event property tests rule that uses an IP protocol.
+
+```
+and when the IP protocol is one of the following
+```
+Here's the sample rule in QRadar.
+
+:::image type="content" source="media/migration-qradar-detection-rules/rule-3-sample-protocol.png" alt-text="Diagram illustrating an event property tests rule that uses an I P protocol.":::
+
+### Event property tests: IP protocol example (KQL)
+
+```kusto
+CommonSecurityLog
+| where Protocol in ("UDP","ICMP")
+```
+### Event property tests: Event Payload string example (QRadar)
+
+Here's the syntax for a sample QRadar event property tests rule that uses an `Event Payload` string value.
+
+```
+and when the Event Payload contains
+```
+Here's the sample rule in QRadar.
+
+:::image type="content" source="media/migration-qradar-detection-rules/rule-3-sample-payload.png" alt-text="Diagram illustrating an event property tests rule that uses an Event Payload string.":::
+
+### Event property tests: Event Payload string example (KQL)
+
+```kusto
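+// Preferred: filter a known table on a specific column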
+CommonSecurityLog
+| where DeviceVendor has "Palo Alto"
+
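+// Alternative: full-text search across all tables (slower)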
+search "Palo Alto"
+```
+To optimize performance, avoid using the `search` command if you already know the table name.
+
+### Functions: counters syntax
+
+Here's the QRadar syntax for a functions rule that uses counters.
+
+:::image type="content" source="media/migration-qradar-detection-rules/rule-4-syntax.png" alt-text="Diagram illustrating the syntax of a functions rule that uses counters.":::
+
+### Counters: Event property and time example (QRadar)
+
+Here's the syntax for a sample QRadar functions rule that uses a defined number of event properties in a defined number of minutes.
+
+```
+and when at least events are seen with the same in
+```
+Here's the sample rule in QRadar.
+
+:::image type="content" source="media/migration-qradar-detection-rules/rule-4-sample-event-property.png" alt-text="Diagram illustrating a functions rule that uses event properties.":::
+
+### Counters: Event property and time example (KQL)
+
+```kusto
+CommonSecurityLog
+| summarize Count = count() by SourceIP, DestinationIP
+| where Count >= 5
+```
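+The QRadar rule counts events within a defined number of minutes, which the query above doesn't enforce. Here's a minimal sketch that adds a time window, assuming a 5-minute window:
+
+```kusto
+CommonSecurityLog
+// assumes a 5-minute window; adjust bin() to match the original rule
+| summarize Count = count() by SourceIP, DestinationIP, bin(TimeGenerated, 5m)
+| where Count >= 5
+```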
+### Functions: negative conditions syntax
+
+Here's the QRadar syntax for a functions rule that uses negative conditions.
+
+:::image type="content" source="media/migration-qradar-detection-rules/rule-5-syntax.png" alt-text="Diagram illustrating the syntax of a functions rule that uses negative conditions.":::
+
+### Negative conditions example (QRadar)
+
+Here's the syntax for a sample QRadar functions rule that uses negative conditions.
+
+```
+and when none of match in after match with the same
+```
+Here are two defined rules in QRadar. The negative conditions will be based on these rules.
+
+:::image type="content" source="media/migration-qradar-detection-rules/rule-5-sample-1.png" alt-text="Diagram illustrating an event property tests rule to be used for a negative conditions rule.":::
+
+:::image type="content" source="media/migration-qradar-detection-rules/rule-5-sample-2.png" alt-text="Diagram illustrating a common property tests rule to be used for a negative conditions rule.":::
+
+Here's a sample of the negative conditions rule based on the rules above.
+
+:::image type="content" source="media/migration-qradar-detection-rules/rule-5-sample-3.png" alt-text="Diagram illustrating a functions rule with negative conditions.":::
+
+### Negative conditions example (KQL)
+
+```kusto
+let spanoftime = 10m;
+let Test2 = (
+CommonSecurityLog
+| where Protocol !in ("UDP","ICMP")
+| where TimeGenerated > ago(spanoftime)
+);
+let Test6 = (
+CommonSecurityLog
+| where SourceIP == DestinationIP
+);
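+// rightanti keeps the rows of Test6 that have no match in Test2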
+Test2
+| join kind=rightanti Test6 on $left.SourceIP == $right.SourceIP, $left.Protocol == $right.Protocol
+```
+### Functions: simple conditions syntax
+
+Here's the QRadar syntax for a functions rule that uses simple conditions.
+
+:::image type="content" source="media/migration-qradar-detection-rules/rule-6-syntax.png" alt-text="Diagram illustrating the syntax of a functions rule that uses simple conditions.":::
+
+### Simple conditions example (QRadar)
+
+Here's the syntax for a sample QRadar functions rule that uses simple conditions.
+
+```
+and when an event matches of the following
+```
+Here's the sample rule in QRadar.
+
+:::image type="content" source="media/migration-qradar-detection-rules/rule-6-sample-1.png" alt-text="Diagram illustrating a functions rule with simple conditions.":::
+
+### Simple conditions example (KQL)
+
+```kusto
+CommonSecurityLog
+| where Protocol !in ("UDP","ICMP") or SourceIP == DestinationIP
+```
+### IP/port tests syntax
+
+Here's the QRadar syntax for an IP/port tests rule.
+
+:::image type="content" source="media/migration-qradar-detection-rules/rule-7-syntax.png" alt-text="Diagram illustrating the syntax of an IP/port tests rule.":::
+
+### IP/port tests: Source port example (QRadar)
+
+Here's the syntax for a sample QRadar rule specifying a source port.
+
+```
+and when the source port is one of the following
+```
+Here's the sample rule in QRadar.
+
+:::image type="content" source="media/migration-qradar-detection-rules/rule-7-sample-1-port.png" alt-text="Diagram illustrating a rule that specifies a source port.":::
+
+### IP/port tests: Source port example (KQL)
+
+```kusto
+CommonSecurityLog
+| where SourcePort == 20
+```
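+The QRadar test accepts a list of source ports. Here's a minimal sketch matching more than one port, assuming illustrative ports 20 and 21:
+
+```kusto
+CommonSecurityLog
+// ports 20 and 21 are illustrative values
+| where SourcePort in (20, 21)
+```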
+### IP/port tests: Source IP example (QRadar)
+
+Here's the syntax for a sample QRadar rule specifying a source IP.
+
+```
+and when the source IP is one of the following
+```
+Here's the sample rule in QRadar.
+
+:::image type="content" source="media/migration-qradar-detection-rules/rule-7-sample-2-ip.png" alt-text="Diagram illustrating a rule that specifies a source IP address.":::
+
+### IP/port tests: Source IP example (KQL)
+
+```kusto
+CommonSecurityLog
+| where SourceIP in ("10.1.1.1","10.2.2.2")
+```
+### Log source tests syntax
+
+Here's the QRadar syntax for a log source tests rule.
+
+:::image type="content" source="media/migration-qradar-detection-rules/rule-8-syntax.png" alt-text="Diagram illustrating the syntax of a log source tests rule.":::
+
+### Log source example (QRadar)
+
+Here's the syntax for a sample QRadar rule specifying log sources.
+
+```
+and when the event(s) were detected by one or more of these
+```
+Here's the sample rule in QRadar.
+
+:::image type="content" source="media/migration-qradar-detection-rules/rule-8-sample-1.png" alt-text="Diagram illustrating a rule that specifies log sources.":::
+
+### Log source example (KQL)
+
+```kusto
+OfficeActivity
+| where OfficeWorkload == "Exchange"
+```
+## Next steps
+
+In this article, you learned how to map and migrate your detection rules from QRadar to Microsoft Sentinel.
+
+> [!div class="nextstepaction"]
+> [Migrate your SOAR automation](migration-qradar-automation.md)
\ No newline at end of file
diff --git a/articles/sentinel/migration-qradar-historical-data.md b/articles/sentinel/migration-qradar-historical-data.md
new file mode 100644
index 0000000000000..d1e6b9356fb29
--- /dev/null
+++ b/articles/sentinel/migration-qradar-historical-data.md
@@ -0,0 +1,47 @@
+---
+title: "Microsoft Sentinel migration: Export QRadar data to target platform | Microsoft Docs"
+description: Learn how to export your historical data from QRadar.
+author: limwainstein
+ms.author: lwainstein
+ms.topic: how-to
+ms.date: 05/03/2022
+ms.custom: ignite-fall-2021
+---
+
+# Export historical data from QRadar
+
+This article describes how to export your historical data from QRadar. After you complete the steps in this article, you can [select a target platform](migration-ingestion-target-platform.md) to host the exported data, and then [select an ingestion tool](migration-ingestion-tool.md) to migrate the data.
+
+:::image type="content" source="media/migration-export-ingest/export-data.png" alt-text="Diagram illustrating steps involved in export and ingestion." lightbox="media/migration-export-ingest/export-data.png" border="false":::
+
+Follow the steps in these sections to export your historical data using the [QRadar forwarding destination](https://www.ibm.com/docs/en/qsip/7.5?topic=administration-forward-data-other-systems) feature.
+
+## Configure QRadar forwarding destination
+
+Configure the QRadar forwarding destination, including your profile, rules, and destination address:
+
+1. [Configure a forwarding profile](https://www.ibm.com/docs/en/qsip/7.5?topic=systems-configuring-forwarding-profiles).
+1. [Add a forwarding destination](https://www.ibm.com/docs/en/qsip/7.5?topic=systems-adding-forwarding-destinations):
+ 1. Set the **Event Format** to **JSON**.
+ 2. Set the **Destination Address** to a server that has syslog running on TCP port 5141 and stores the ingested logs to a local folder path.
+ 3. Select the forwarding profile created in step 1.
+ 4. Enable the forwarding destination configuration.
+
+## Configure routing rules
+
+Configure routing rules:
+
+1. [Configure routing rules to forward data](https://www.ibm.com/docs/en/qsip/7.5?topic=systems-configuring-routing-rules-forward-data).
+1. Set the **Mode** to **Offline**.
+1. Select the relevant **Forwarding Event Processor**.
+1. Set the **Data Source** to **Events**.
+1. Select **Add Filter** to add filter criteria for data that needs to be exported. For example, use the **Log Source Time** field to set a timestamp range.
+1. Select **Forward** and select the forwarding destination created when you [configured the QRadar forwarding destination](#configure-qradar-forwarding-destination) in step 2.
+1. [Enable the routing rule configuration](https://www.ibm.com/docs/en/qsip/7.5?topic=systems-viewing-managing-routing-rules).
+1. Repeat steps 1-7 for each event processor from which you need to export data.
+
+## Next steps
+
+- [Select a target Azure platform to host the exported historical data](migration-ingestion-target-platform.md)
+- [Select a data ingestion tool](migration-ingestion-tool.md)
+- [Ingest historical data into your target platform](migration-export-ingest.md)
\ No newline at end of file
diff --git a/articles/sentinel/migration-security-operations-center-processes.md b/articles/sentinel/migration-security-operations-center-processes.md
new file mode 100644
index 0000000000000..35d45543121bc
--- /dev/null
+++ b/articles/sentinel/migration-security-operations-center-processes.md
@@ -0,0 +1,133 @@
+---
+title: "Microsoft Sentinel migration: Update SOC and analyst processes | Microsoft Docs"
+description: Learn how to update your SOC and analyst processes as part of your migration to Microsoft Sentinel.
+author: limwainstein
+ms.author: lwainstein
+ms.topic: how-to
+ms.date: 05/03/2022
+---
+
+# Update SOC processes
+
+A security operations center (SOC) is a centralized function within an organization that integrates people, processes, and technology. A SOC implements the organization's overall cybersecurity framework. The SOC coordinates the organization's efforts to monitor, alert, prevent, detect, analyze, and respond to cybersecurity incidents. SOC teams, led by a SOC manager, may include incident responders, SOC analysts at levels 1, 2, and 3, threat hunters, and incident response managers.
+
+SOC teams use telemetry from across the organization's IT infrastructure, including networks, devices, applications, behaviors, appliances, and information stores. The teams then correlate and analyze the data to determine how to manage it and which actions to take.
+
+To successfully migrate to Microsoft Sentinel, you need to update not only the technology that the SOC uses, but also the SOC tasks and processes. This article describes how to update your SOC and analyst processes as part of your migration to Microsoft Sentinel.
+
+## Update analyst workflow
+
+Microsoft Sentinel offers a range of tools that map to a typical analyst workflow, from incident assignment to closure. Analysts can flexibly use some or all of the available tools to triage and investigate incidents. As your organization migrates to Microsoft Sentinel, your analysts need to adapt to these new toolsets, features, and workflows.
+
+### Incidents in Microsoft Sentinel
+
+In Microsoft Sentinel, an incident is a collection of alerts that Microsoft Sentinel determines have sufficient fidelity to trigger the incident. Hence, with Microsoft Sentinel, the analyst triages incidents in the **Incidents** page first, and then proceeds to analyze alerts, if a deeper dive is needed. [Compare your SIEM's incident terminology and management areas](#compare-siem-concepts) with Microsoft Sentinel.
+
+### Analyst workflow stages
+
+This table describes the key stages in the analyst workflow, and highlights the specific tools relevant to each activity in the workflow.
+
+|Assign |Triage |Investigate |Respond |
+|---------|---------|---------|---------|
+|**[Assign incidents](#assign)**: • Manually, in the **Incidents** page • Automatically, using playbooks or automation rules |**[Triage incidents](#triage)** using: • The incident details in the **Incident** page • Entity information in the **Incident page**, under the **Entities** tab • Jupyter Notebooks |**[Investigate incidents](#investigate)** using: • The investigation graph • Microsoft Sentinel Workbooks • The Log Analytics query window |**[Respond to incidents](#respond)** using: • Playbooks and automation rules • Microsoft Teams War Room |
+
+The next sections map both the terminology and analyst workflow to specific Microsoft Sentinel features.
+
+#### Assign
+
+Use the Microsoft Sentinel **Incidents** page to assign incidents. The **Incidents** page includes an incident preview, and a detailed view for single incidents.
+
+:::image type="content" source="media/migration-soc-processes/analyst-workflow-incidents.png" alt-text="Screenshot of Microsoft Sentinel Incidents page." lightbox="media/migration-soc-processes/analyst-workflow-incidents.png":::
+
+To assign an incident:
+- **Manually**. Set the **Owner** field to the relevant user name.
+- **Automatically**. [Use a custom solution based on Microsoft Teams and Logic Apps](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/automate-incident-assignment-with-shifts-for-teams/ba-p/2297549), [or an automation rule](automate-incident-handling-with-automation-rules.md).
+
+:::image type="content" source="media/migration-soc-processes/analyst-workflow-assign-incidents.png" alt-text="Screenshot of assigning an owner in the Incidents page." lightbox="media/migration-soc-processes/analyst-workflow-assign-incidents.png":::
+
+#### Triage
+
+To conduct a triage exercise in Microsoft Sentinel, you can start with various Microsoft Sentinel features, depending on your level of expertise and the nature of the incident under investigation. As a typical starting point, select **View full details** in the **Incident** page. You can now examine the alerts that comprise the incident, review bookmarks, select entities to drill down further into specific entities, or add comments.
+
+:::image type="content" source="media/migration-soc-processes/analyst-workflow-incident-details.png" alt-text="Screenshot of viewing incident details in the Incidents page." lightbox="media/migration-soc-processes/analyst-workflow-incident-details.png":::
+
+Here are suggested actions to continue your incident review:
+- Select **Investigation** for a visual representation of the relationships between the incidents and the relevant entities.
+- Use a [Jupyter notebook](notebooks.md) to perform an in-depth triage exercise for a particular entity. You can use the **Incident triage** notebook for this exercise.
+
+:::image type="content" source="media/migration-soc-processes/analyst-workflow-incident-triage-notebook.png" alt-text="Screenshot of Incident triage notebook, including detailed steps in TOC." lightbox="media/migration-soc-processes/analyst-workflow-incident-triage-notebook.png":::
+
+##### Expedite triage
+
+Use these features and capabilities to expedite triage:
+
+- For quick filtering, in the **Incidents** page, [search for incidents](investigate-cases.md#search-for-incidents) associated with a specific entity. Filtering by entity in the **Incidents** page is faster than filtering by the entity column in legacy SIEM incident queues.
+- For faster triage, use the **[Alert details](customize-alert-details.md)** screen to include key incident information in the incident name and description, such as the related user name, IP address, or host. For example, an incident could be dynamically renamed to `Ransomware activity detected in DC01`, where `DC01` is a critical asset, dynamically identified via the customizable alert properties.
+- For deeper analysis, in the **Incidents page**, select an incident and select **Events** under **Evidence** to view specific events that triggered the incident. The event data is visible as the output of the query associated with the analytics rule, rather than the raw event. The rule migration engineer can use this output to ensure that the analyst gets the correct data.
+- For detailed entity information, in the **Incidents page**, select an incident and select an entity name under **Entities** to view the entity's directory information, timeline, and insights. Learn how to [map entities](map-data-fields-to-entities.md).
+- To link to relevant workbooks, select **Incident preview**. You can customize the workbook to display additional information about the incident, or associated entities and custom fields.
+
+#### Investigate
+
+Use the investigation graph to deeply investigate incidents. From the **Incidents** page, select an incident and select **Investigate** to view the [investigation graph](investigate-cases.md#use-the-investigation-graph-to-deep-dive).
+
+:::image type="content" source="media/migration-soc-processes/analyst-workflow-investigation-graph.png" alt-text="Screenshot of the investigation graph." lightbox="media/migration-soc-processes/analyst-workflow-investigation-graph.png":::
+
+With the investigation graph, you can:
+- Understand the scope and identify the root cause of potential security threats by correlating relevant data with any involved entity.
+- Dive deeper into entities, and choose between different expansion options.
+- Easily see connections across different data sources by viewing relationships extracted automatically from the raw data.
+- Expand your investigation scope using built-in exploration queries to surface the full scope of a threat.
+- Use predefined exploration options to help you ask the right questions while investigating a threat.
+
+From the investigation graph, you can also open workbooks to further support your investigation efforts. Microsoft Sentinel includes several workbook templates that you can customize to suit your specific use case.
+
+:::image type="content" source="media/migration-soc-processes/analyst-workflow-investigation-workbooks.png" alt-text="Screenshot of a workbook opened from the investigation graph." lightbox="media/migration-soc-processes/analyst-workflow-investigation-workbooks.png":::
+
+#### Respond
+
+Use Microsoft Sentinel automated response capabilities to respond to complex threats and reduce alert fatigue. Microsoft Sentinel provides automated response using [Logic Apps playbooks and automation rules](automate-responses-with-playbooks.md).
+
+:::image type="content" source="media/migration-soc-processes/analyst-workflow-playbooks.png" alt-text="Screenshot of Playbook templates tab in Automation blade." lightbox="media/migration-soc-processes/analyst-workflow-playbooks.png":::
+
+Use one of the following options to access playbooks:
+- The [Automation > Playbook templates tab](use-playbook-templates.md)
+- The Microsoft Sentinel [Content hub](sentinel-solutions-deploy.md)
+- The Microsoft Sentinel [GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks)
+
+These sources include a wide range of security-oriented playbooks to cover a substantial portion of use cases of varying complexity. To streamline your work with playbooks, use the templates under **Automation > Playbook templates**. Templates allow you to easily deploy playbooks into the Microsoft Sentinel instance, and then modify the playbooks to suit your organization's needs.
+
+See the [SOC Process Framework](https://github.com/Azure/Azure-Sentinel/wiki/SOC-Process-Framework) to map your SOC process to Microsoft Sentinel capabilities.
+
+## Compare SIEM concepts
+
+Use this table to compare the main concepts of your legacy SIEM to Microsoft Sentinel concepts.
+
+| ArcSight | QRadar | Splunk | Microsoft Sentinel |
+|--|--|--|--|
+| Event | Event | Event | Event |
+| Correlation Event | Correlation Event | Notable Event | Alert |
+| Incident | Offense | Notable Event | Incident |
+| | List of offenses | Tags | Incidents page |
+| Labels | Custom field in SOAR | Tags | Tags |
+| | Jupyter Notebooks | Jupyter Notebooks | Microsoft Sentinel notebooks |
+| Dashboards | Dashboards | Dashboards | Workbooks |
+| Correlation rules | Building blocks | Correlation rules | Analytics rules |
+|Incident queue |Offenses tab |Incident review |**Incidents** page |
+
+## Next steps
+
+After migration, explore the Microsoft Sentinel resources below to expand your skills and get the most out of Microsoft Sentinel.
+
+Also consider increasing your threat protection by using Microsoft Sentinel alongside [Microsoft 365 Defender](./microsoft-365-defender-sentinel-integration.md) and [Microsoft Defender for Cloud](../security-center/azure-defender.md) for [integrated threat protection](https://www.microsoft.com/security/business/threat-protection). Benefit from the breadth of visibility that Microsoft Sentinel delivers, while diving deeper into detailed threat analysis.
+
+For more information, see:
+
+- [Rule migration best practices](https://techcommunity.microsoft.com/t5/azure-sentinel/best-practices-for-migrating-detection-rules-from-arcsight/ba-p/2216417)
+- [Webinar: Best Practices for Converting Detection Rules](https://www.youtube.com/watch?v=njXK1h9lfR4)
+- [Security Orchestration, Automation, and Response (SOAR) in Microsoft Sentinel](automation.md)
+- [Manage your SOC better with incident metrics](manage-soc-with-incident-metrics.md)
+- [Microsoft Sentinel learning path](/learn/paths/security-ops-sentinel/)
+- [SC-200 Microsoft Security Operations Analyst certification](/learn/certifications/exams/sc-200)
+- [Microsoft Sentinel Ninja training](https://techcommunity.microsoft.com/t5/azure-sentinel/become-an-azure-sentinel-ninja-the-complete-level-400-training/ba-p/1246310)
+- [Investigate an attack on a hybrid environment with Microsoft Sentinel](https://mslearn.cloudguides.com/guides/Investigate%20an%20attack%20on%20a%20hybrid%20environment%20with%20Azure%20Sentinel)
diff --git a/articles/sentinel/migration-splunk-automation.md b/articles/sentinel/migration-splunk-automation.md
new file mode 100644
index 0000000000000..a8741784c519d
--- /dev/null
+++ b/articles/sentinel/migration-splunk-automation.md
@@ -0,0 +1,91 @@
+---
+title: Migrate Splunk SOAR automation to Microsoft Sentinel | Microsoft Docs
+description: Learn how to identify SOAR use cases, and how to migrate your Splunk SOAR automation to Microsoft Sentinel.
+author: limwainstein
+ms.author: lwainstein
+ms.topic: how-to
+ms.date: 05/03/2022
+---
+
+# Migrate Splunk SOAR automation to Microsoft Sentinel
+
+Microsoft Sentinel provides Security Orchestration, Automation, and Response (SOAR) capabilities with [automation rules](automate-incident-handling-with-automation-rules.md) and [playbooks](tutorial-respond-threats-playbook.md). Automation rules automate incident handling and response, and playbooks run predetermined sequences of actions to respond to and remediate threats. This article discusses how to identify SOAR use cases, and how to migrate your Splunk SOAR automation to Microsoft Sentinel.
+
+Automation rules simplify complex workflows for your incident orchestration processes, and allow you to centrally manage your incident handling automation.
+
+With automation rules, you can:
+- Perform simple automation tasks without necessarily using playbooks. For example, you can assign and tag incidents, change their status, and close them.
+- Automate responses for multiple analytics rules at once.
+- Control the order of actions that are executed.
+- Run playbooks for those cases where more complex automation tasks are necessary.
+
+## Identify SOAR use cases
+
+Here’s what you need to think about when migrating SOAR use cases from Splunk.
+- **Use case quality**. Choose good use cases for automation. Use cases should be based on procedures that are clearly defined, with minimal variation and a low false-positive rate.
+- **Manual intervention**. Automated responses can have wide-ranging effects. Require human input to confirm high-impact actions before they're taken.
+- **Binary criteria**. To increase response success, decision points within an automated workflow should be as limited as possible, with binary criteria. Binary criteria reduce the need for human intervention and enhance outcome predictability.
+- **Accurate alerts or data**. Response actions depend on the accuracy of signals such as alerts. Alerts and enrichment sources should be reliable. Microsoft Sentinel resources such as watchlists and reliable threat intelligence can enhance reliability.
+- **Analyst role**. Automate where possible, but reserve more complex tasks for analysts, and give them the opportunity to provide input into workflows that require validation. In short, response automation should augment and extend analyst capabilities.
+
+## Migrate SOAR workflow
+
+This section shows how key Splunk SOAR concepts translate to Microsoft Sentinel components, and provides general guidelines for how to migrate each step or component in the SOAR workflow.
+
+:::image type="content" source="media/migration-splunk-automation/splunk-sentinel-soar-workflow-new.png" alt-text="Diagram displaying the Splunk and Microsoft Sentinel SOAR workflows." lightbox="media/migration-splunk-automation/splunk-sentinel-soar-workflow-new.png" border="false":::
+
+|Step (in diagram) |Splunk |Microsoft Sentinel |
+|---------|---------|---------|
+|1 |Ingest events into main index. |Ingest events into the Log Analytics workspace. |
+|2 |Create containers. |Tag incidents using the [custom details feature](surface-custom-details-in-alerts.md). |
+|3 |Create cases. |Microsoft Sentinel can automatically group alerts according to user-defined criteria, such as shared entities or severity. The grouped alerts then generate incidents. |
+|4 |Create playbooks. |Azure Logic Apps uses several connectors to orchestrate activities across Microsoft Sentinel, Azure, third-party, and hybrid cloud environments. |
+|4 |Create workbooks. |Microsoft Sentinel executes playbooks either in isolation or as part of an ordered automation rule. You can also execute playbooks manually against alerts or incidents, according to a predefined Security Operations Center (SOC) procedure. |
+
+## Map SOAR components
+
+Review which Microsoft Sentinel or Azure Logic Apps features map to the main Splunk SOAR components.
+
+|Splunk |Microsoft Sentinel/Azure Logic Apps |
+|---------|---------|
+|Playbook editor |[Logic App designer](../logic-apps/logic-apps-overview.md) |
+|Trigger |[Trigger](../logic-apps/logic-apps-overview.md) |
+|• Connectors • App • Automation broker |• [Connector](tutorial-respond-threats-playbook.md) • [Hybrid Runbook Worker](../automation/automation-hybrid-runbook-worker.md) |
+|Action blocks |[Action](../logic-apps/logic-apps-overview.md) |
+|Connectivity broker |[Hybrid Runbook Worker](../automation/automation-hybrid-runbook-worker.md) |
+|Community |• [Automation > Templates tab](use-playbook-templates.md) • [Content hub catalog](sentinel-solutions-catalog.md) • [GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Block-OnPremADUser) |
+|Decision |[Conditional control](../logic-apps/logic-apps-control-flow-conditional-statement.md) |
+|Code |[Azure Function connector](../logic-apps/logic-apps-azure-functions.md) |
+|Prompt |[Send approval email](../logic-apps/tutorial-process-mailing-list-subscriptions-workflow.md) |
+|Format |[Data operations](../logic-apps/logic-apps-perform-data-operations.md) |
+|Input playbooks |Obtain variable inputs from results of previously executed steps or explicitly declared [variables](../logic-apps/logic-apps-create-variables-store-values.md) |
+|Set parameters with the Utility block |Manage incidents with the [API](/rest/api/securityinsights/stable/incidents/get) |
+
+## Operationalize playbooks and automation rules in Microsoft Sentinel
+
+Most of the playbooks that you use with Microsoft Sentinel are available in either the [Automation > Templates tab](use-playbook-templates.md), the [Content hub catalog](sentinel-solutions-catalog.md), or [GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Block-OnPremADUser). In some cases, however, you might need to create playbooks from scratch or from existing templates.
+
+You typically build your custom logic apps using the Azure Logic Apps designer. The underlying code is based on [Azure Resource Manager (ARM) templates](../azure-resource-manager/templates/overview.md), which facilitate development, deployment, and portability of Azure Logic Apps across multiple environments. To convert your custom playbook into a portable ARM template, you can use the [ARM template generator](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/export-microsoft-sentinel-playbooks-or-azure-logic-apps-with/ba-p/3275898).
+
+Use these resources for cases where you need to build your own playbooks either from scratch or from existing templates.
+- [Automate incident handling in Microsoft Sentinel](automate-incident-handling-with-automation-rules.md)
+- [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md)
+- [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](tutorial-respond-threats-playbook.md)
+- [How to use Microsoft Sentinel for Incident Response, Orchestration and Automation](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/how-to-use-azure-sentinel-for-incident-response-orchestration/ba-p/2242397)
+- [Adaptive Cards to enhance incident response in Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/using-microsoft-teams-adaptive-cards-to-enhance-incident/ba-p/3330941)
+
+## SOAR post migration best practices
+
+Here are best practices you should take into account after your SOAR migration:
+
+- After you migrate your playbooks, test the playbooks extensively to ensure that the migrated actions work as expected.
+- Periodically review your automations to explore ways to further simplify or enhance your SOAR. Microsoft Sentinel constantly adds new connectors and actions that can help you to further simplify or increase the effectiveness of your current response implementations.
+- Monitor the performance of your playbooks using the [Playbooks health monitoring workbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/what-s-new-monitoring-your-logic-apps-playbooks-in-azure/ba-p/1873211).
+- Use managed identities and service principals: Authenticate against various Azure services within your Logic Apps, store the secrets in Azure Key Vault, and obscure the flow execution output. We also recommend that you [monitor the activities of these service principals](https://techcommunity.microsoft.com/t5/azure-sentinel/non-interactive-logins-minimizing-the-blind-spot/ba-p/2287932).
+
+## Next steps
+
+In this article, you learned how to map your SOAR automation from Splunk to Microsoft Sentinel.
+
+> [!div class="nextstepaction"]
+> [Export your historical data](migration-splunk-historical-data.md)
\ No newline at end of file
diff --git a/articles/sentinel/migration-splunk-detection-rules.md b/articles/sentinel/migration-splunk-detection-rules.md
new file mode 100644
index 0000000000000..66dc0ac291e72
--- /dev/null
+++ b/articles/sentinel/migration-splunk-detection-rules.md
@@ -0,0 +1,410 @@
+---
+title: Migrate Splunk detection rules to Microsoft Sentinel | Microsoft Docs
+description: Learn how to identify, compare, and migrate your Splunk detection rules to Microsoft Sentinel built-in rules.
+author: limwainstein
+ms.author: lwainstein
+ms.topic: how-to
+ms.date: 05/03/2022
+---
+
+# Migrate Splunk detection rules to Microsoft Sentinel
+
+This article describes how to identify, compare, and migrate your Splunk detection rules to Microsoft Sentinel built-in rules.
+
+## Identify and migrate rules
+
+Microsoft Sentinel uses machine learning analytics to create high-fidelity and actionable incidents, and some of your existing detections may be redundant in Microsoft Sentinel. Therefore, don't migrate all of your detection and analytics rules blindly. Review these considerations as you identify your existing detection rules.
+
+- Make sure to select use cases that justify rule migration, considering business priority and efficiency.
+- Check that you [understand Microsoft Sentinel rule types](detect-threats-built-in.md#view-built-in-detections).
+- Check that you understand the [rule terminology](#compare-rule-terminology).
+- Review any rules that haven't triggered any alerts in the past 6-12 months, and determine whether they're still relevant.
+- Eliminate low-level threats or alerts that you routinely ignore.
+- Use existing functionality, and check whether Microsoft Sentinel’s [built-in analytics rules](https://github.com/Azure/Azure-Sentinel/tree/master/Detections) might address your current use cases. Because Microsoft Sentinel uses machine learning analytics to produce high-fidelity and actionable incidents, it’s likely that some of your existing detections won’t be required anymore.
+- Confirm connected data sources and review your data connection methods. Revisit data collection conversations to ensure data depth and breadth across the use cases you plan to detect.
+- Explore community resources such as the [SOC Prime Threat Detection Marketplace](https://my.socprime.com/tdm/) to check whether your rules are available.
+- Consider whether an online query converter such as Uncoder.io might work for your rules.
+- If rules aren’t available or can’t be converted, they need to be created manually, using a KQL query. Review the [rules mapping](#map-and-compare-rule-samples) to create new queries.
+
+Learn more about [best practices for migrating detection rules](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/best-practices-for-migrating-detection-rules-from-arcsight/ba-p/2216417).
+
+**To migrate your analytics rules to Microsoft Sentinel**:
+
+1. Verify that you have a testing system in place for each rule you want to migrate.
+
+ 1. **Prepare a validation process** for your migrated rules, including full test scenarios and scripts.
+
+ 1. **Ensure that your team has useful resources** to test your migrated rules.
+
+ 1. **Confirm that you have any required data sources connected,** and review your data connection methods.
+
+1. Verify whether your detections are available as built-in templates in Microsoft Sentinel:
+
+ - **If the built-in rules are sufficient**, use built-in rule templates to create rules for your own workspace.
+
+ In Microsoft Sentinel, go to the **Configuration > Analytics > Rule templates** tab, and create and update each relevant analytics rule.
+
+ For more information, see [Detect threats out-of-the-box](detect-threats-built-in.md).
+
+ - **If you have detections that aren't covered by Microsoft Sentinel's built-in rules**, try an online query converter, such as [Uncoder.io](https://uncoder.io/) to convert your queries to KQL.
+
+ Identify the trigger condition and rule action, and then construct and review your KQL query.
+
+ - **If neither the built-in rules nor an online rule converter is sufficient**, you'll need to create the rule manually. In such cases, use the following steps to start creating your rule:
+
+ 1. **Identify the data sources you want to use in your rule**. You'll want to create a mapping table between data sources and data tables in Microsoft Sentinel to identify the tables you want to query.
+
+ 1. **Identify any attributes, fields, or entities** in your data that you want to use in your rules.
+
+ 1. **Identify your rule criteria and logic**. At this stage, you may want to use rule templates as samples for how to construct your KQL queries.
+
+ Consider filters, correlation rules, active lists, reference sets, watchlists, detection anomalies, aggregations, and so on. You might use references provided by your legacy SIEM to understand [how to best map your query syntax](#map-and-compare-rule-samples).
+
+ 1. **Identify the trigger condition and rule action, and then construct and review your KQL query**. When reviewing your query, consider KQL optimization guidance resources.
+
+1. Test the rule with each of your relevant use cases. If it doesn't provide expected results, you may want to review the KQL and test it again.
+
+1. When you're satisfied, you can consider the rule migrated. Create a playbook for your rule action as needed. For more information, see [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md).
+
+Learn more about analytics rules.
+
+- [**Create custom analytics rules to detect threats**](detect-threats-custom.md). Use [alert grouping](detect-threats-custom.md#alert-grouping) to reduce alert fatigue by grouping alerts that occur within a given timeframe.
+- [**Map data fields to entities in Microsoft Sentinel**](map-data-fields-to-entities.md) to enable SOC engineers to define entities as part of the evidence to track during an investigation. Entity mapping also makes it possible for SOC analysts to take advantage of an intuitive [investigation graph](investigate-cases.md#use-the-investigation-graph-to-deep-dive) that can help reduce time and effort.
+- [**Investigate incidents with UEBA data**](investigate-with-ueba.md), as an example of how to use evidence to surface events, alerts, and any bookmarks associated with a particular incident in the incident preview pane.
+- [**Kusto Query Language (KQL)**](/azure/data-explorer/kusto/query/), which you can use to send read-only requests to your [Log Analytics](../azure-monitor/logs/log-analytics-tutorial.md) database to process data and return results. KQL is also used across other Microsoft services, such as [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender) and [Application Insights](../azure-monitor/app/app-insights-overview.md).
+
+## Compare rule terminology
+
+This table helps you to clarify the concept of a rule in Microsoft Sentinel compared to Splunk.
+
+| |Splunk |Microsoft Sentinel |
+|---------|---------|---------|
+|**Rule type** |• Scheduled • Real-time |• Scheduled query • Fusion • Microsoft Security • Machine Learning (ML) Behavior Analytics |
+|**Criteria** |Define in SPL |Define in KQL |
+|**Trigger condition** |• Number of results • Number of hosts • Number of sources • Custom |Threshold: Number of query results |
+|**Action** |• Add to triggered alerts • Log Event • Output results to lookup • And more |• Create alert or incident • Integrates with Logic Apps |
+
+## Map and compare rule samples
+
+Use these samples to compare and map rules from Splunk to Microsoft Sentinel in various scenarios.
+
+### Common search commands
+
+|SPL command |Description |KQL operator |KQL example |
+|---------|---------|---------|---------|
+|`chart/timechart` |Returns results in a tabular output for time-series charting. |[render operator](/azure/data-explorer/kusto/query/renderoperator?pivots=azuredataexplorer) |`… | render timechart` |
+|`dedup` |Removes subsequent results that match a specified criterion. |• [distinct](/azure/data-explorer/kusto/query/distinctoperator) • [summarize](/azure/data-explorer/kusto/query/summarizeoperator) |`… | summarize by Computer, EventID` |
+|`eval` |Calculates an expression. Learn about [common eval commands](https://github.com/Azure/Azure-Sentinel/blob/master/Tools/RuleMigration/SPL%20to%20KQL.md#common-eval-commands). |[extend](/azure/data-explorer/kusto/query/extendoperator) |`T | extend duration = endTime - startTime` |
+|`fields` |Removes fields from search results. |• [project](/azure/data-explorer/kusto/query/projectoperator) • [project-away](/azure/data-explorer/kusto/query/projectawayoperator) |`T | project cost=price*quantity, price` |
+|`head/tail` |Returns the first or last N results. |[top](/azure/data-explorer/kusto/query/topoperator) |`T | top 5 by Name desc nulls last` |
+|`lookup` |Adds field values from an external source. |• [externaldata](/azure/data-explorer/kusto/query/externaldata-operator?pivots=azuredataexplorer) • [lookup](/azure/data-explorer/kusto/query/lookupoperator) |[KQL example](#lookup-command-kql-example) |
+|`rename` |Renames a field. Use wildcards to specify multiple fields. |[project-rename](/azure/data-explorer/kusto/query/projectrenameoperator) |`T | project-rename new_column_name = column_name` |
+|`rex` |Specifies group names using regular expressions to extract fields. |[matches regex](/azure/data-explorer/kusto/query/re2) |`… | where field matches regex "^addr.*"` |
+|`search` |Filters results to results that match the search expression. |[search](/azure/data-explorer/kusto/query/searchoperator?pivots=azuredataexplorer) |`search "X"` |
+|`sort` |Sorts the search results by the specified fields. |[sort](/azure/data-explorer/kusto/query/sortoperator) |`T | sort by strlen(country) asc, price desc` |
+|`stats` |Provides statistics, optionally grouped by fields. Learn more about [common stats commands](https://github.com/Azure/Azure-Sentinel/blob/master/Tools/RuleMigration/SPL%20to%20KQL.md#common-stats-commands). |[summarize](/azure/data-explorer/kusto/query/summarizeoperator) |[KQL example](#stats-command-kql-example) |
+|`mstats` |Similar to stats, used on metrics instead of events. |[summarize](/azure/data-explorer/kusto/query/summarizeoperator) |[KQL example](#mstats-command-kql-example) |
+|`table` |Specifies which fields to keep in the result set, and retains data in tabular format. |[project](/azure/data-explorer/kusto/query/projectoperator) |`T | project columnA, columnB` |
+|`top/rare` |Displays the most or least common values of a field. |[top](/azure/data-explorer/kusto/query/topoperator) |`T | top 5 by Name desc nulls last` |
+|`transaction` |Groups search results into transactions.