From 2e46b207bdc593624f5f5cf104708df1848ba148 Mon Sep 17 00:00:00 2001
From: GitHub Actions
Date: Fri, 16 Aug 2024 12:58:25 +0000
Subject: [PATCH 1/2] Bump version file

---
 version.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/version.txt b/version.txt
index c8fe2be..1ff5860 100644
--- a/version.txt
+++ b/version.txt
@@ -1 +1 @@
-v0.0.15
+v0.0.75

From 32265bf7b53d7b741cf77b829872d39ec0ac9666 Mon Sep 17 00:00:00 2001
From: GitHub Actions
Date: Fri, 16 Aug 2024 12:58:25 +0000
Subject: [PATCH 2/2] Update version to v0.0.75

---
 docs/capabilities/agents.md           | 2 +-
 docs/guides/prompting-capabilities.md | 2 +-
 openapi.yaml                          | 4 ++--
 version.txt                           | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/docs/capabilities/agents.md b/docs/capabilities/agents.md
index 2b60e39..4450458 100644
--- a/docs/capabilities/agents.md
+++ b/docs/capabilities/agents.md
@@ -123,7 +123,7 @@ console.log('Chat:', chatResponse.choices[0].message.content);
 ```
 
 ```bash
-curl --location "https://api.mistral.ai/v1/chat/completions" \
+curl --location "https://api.mistral.ai/v1/agents/completions" \
 --header 'Content-Type: application/json' \
 --header 'Accept: application/json' \
 --header "Authorization: Bearer $MISTRAL_API_KEY" \
diff --git a/docs/guides/prompting-capabilities.md b/docs/guides/prompting-capabilities.md
index 42c3304..467063e 100644
--- a/docs/guides/prompting-capabilities.md
+++ b/docs/guides/prompting-capabilities.md
@@ -221,7 +221,7 @@ You will only respond with a JSON object with the key Summary and Confidence. Do
 
 #### Strategies we used:
 
-- **JSON output**: For facilitating downstream tasks, JSON format output is frequently preferred. We can enable the JSON mode by setting the response_format to `{"type": "json_object"}` and specify in the prompt that "You will only respond with a JSON object with the key Summary and Confidence." Specifying these keys within the JSON object is beneficial for clarity and consistency. 
+- **JSON output**: For facilitating downstream tasks, JSON format output is frequently preferred. We can enable the JSON mode by setting the response_format to `{"type": "json_object"}` and specify in the prompt that "You will only respond with a JSON object with the key Summary and Confidence." Specifying these keys within the JSON object is beneficial for clarity and consistency.
 - **Higher Temperature**: In this example, we increase the temperature score to encourage the model to be more creative and output three generated summaries that are different from each other.
 
 ### Introduce an evaluation step
diff --git a/openapi.yaml b/openapi.yaml
index e5072b8..fe021f3 100644
--- a/openapi.yaml
+++ b/openapi.yaml
@@ -1789,7 +1789,7 @@ components:
         - type: string
         - type: "null"
         title: Model
-        description: ID of the model to use. You can use the [List Available Models](/api#operation/listModels) API to see all of your available models, or see our [Model overview](/models) for model descriptions.
+        description: ID of the model to use. You can use the [List Available Models](/api/#tag/models/operation/list_models_v1_models_get) API to see all of your available models, or see our [Model overview](/models) for model descriptions.
         examples:
         - mistral-small-latest
         temperature:
@@ -2299,8 +2299,8 @@ components:
       type: object
       required:
       - index
-      - text
      - finish_reason
+      - message
       properties:
         index:
           type: integer
diff --git a/version.txt b/version.txt
index 1ff5860..c8fe2be 100644
--- a/version.txt
+++ b/version.txt
@@ -1 +1 @@
-v0.0.75
+v0.0.15