fix(i18n): escape special characters
Kinplemelon authored and 0721Betty committed Jun 12, 2024
1 parent 4eb189f commit ec53c26
Showing 2 changed files with 5 additions and 5 deletions.
4 changes: 2 additions & 2 deletions — packages/i18n/lib/enIntegrationDesc.ts

@@ -501,8 +501,8 @@ export const enIntegrationDesc: Record<string, Record<string, string>> = {
   acl: 'The Access Control List (ACL) to use for the uploaded objects.',
   content: 'The content of the object to be uploaded supports placeholders.',
   bucket:
-    'The name of the bucket to which files will be uploaded. Needs to be pre-created in S3. Supports the ${var} placeholder format.',
-  key: 'The content of the object to be stored. By default, it is in JSON text format containing all fields. Supports placeholder settings such as ${payload}. The storage format depends on the format of the variable and can be stored in binary format.',
+    "The name of the bucket to which files will be uploaded. Needs to be pre-created in S3. Supports the ${'{'}var{'}'} placeholder format.",
+  key: "The content of the object to be stored. By default, it is in JSON text format containing all fields. Supports placeholder settings such as ${'{'}payload{'}'}. The storage format depends on the format of the variable and can be stored in binary format.",
   column_order: `Event fields that will be ordered first as columns in the resulting CSV file.<br/>Regardless of this setting, resulting CSV will contain all the fields of aggregated events, but all the columns not explicitly mentioned here will be ordered after the ones listed here in the lexicographical order.`,
   time_interval: 'Amount of time events will be aggregated in a single object before uploading.',
   max_records: `Number of records (events) allowed per each aggregated object. Each aggregated upload will contain no more than that number of events, but may contain less.<br/>If event rate is high enough, there obviously may be more than one aggregated upload during the same time interval. These uploads will have different, but consecutive sequence numbers, which will be a part of S3 object key.`,
6 changes: 3 additions & 3 deletions — packages/i18n/lib/zhIntegrationDesc.ts

@@ -451,9 +451,9 @@ export const zhIntegrationDesc: Record<string, Record<string, string>> = {
   ipv6_probe: '是否探测 IPv6 支持。',
   acl: '上传的对象的访问权限。',
   content:
-    '要存储的对象的内容。默认情况下,它是包含所有字段的 JSON 文本格式。支持如 ${payload} 的占位符设置。存储格式取决于变量的格式,支持二进制内容。',
-  bucket: '将要上传文件的存储桶的名称。需要在 S3 中预先创建好,支持 ${var} 占位符格式。',
-  key: '要存储的对象的键。支持如 ${var} 的占位符设置。',
+    "要存储的对象的内容。默认情况下,它是包含所有字段的 JSON 文本格式。支持如 ${'{'}payload{'}'} 的占位符设置。存储格式取决于变量的格式,支持二进制内容。",
+  bucket: "将要上传文件的存储桶的名称。需要在 S3 中预先创建好,支持 ${'{'}var{'}'} 占位符格式。",
+  key: "要存储的对象的键。支持如 ${'{'}var{'}'} 的占位符设置。",
   column_order: `在生成的 CSV 文件中首先按列排序的事件字段。<br/>无论此设置如何,生成的 CSV 都将包含聚合事件的所有字段,但此处未明确提及的所有列将按字典顺序排在这里列出的字段之后。`,
   time_interval: '在上传前将事件聚合到单个对象中的时间量。',
   max_records: `每个聚合对象允许的记录(事件)数量。每次聚合上传包含的事件数量不会超过此数值,但可能会更少。<br/>如果事件速率足够高,在同一时间间隔内显然可能会有多个聚合上传。这些上传将具有不同但连续的序列号,这些序列号将是 S3 对象键的一部分。`,
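For context: these locale strings appear to be vue-i18n messages, where a bare `{name}` is a named interpolation and `{'...'}` emits its contents as literal text — which is why the documented `${var}` placeholders must be escaped as `${'{'}var{'}'}` to survive message compilation. A minimal sketch of that rendering rule (illustrative only, not the real vue-i18n compiler; `renderMessage` is a hypothetical helper):

```typescript
// Sketch of the message-format rule this commit relies on:
//   {name}   -> named interpolation, replaced from params
//   {'...'}  -> literal text, emitted verbatim (how "{" and "}" are escaped)
function renderMessage(
  msg: string,
  params: Record<string, string> = {},
): string {
  return msg.replace(
    /\{(?:'([^']*)'|(\w+))\}/g,
    (match, literal: string | undefined, name: string | undefined) => {
      if (literal !== undefined) return literal; // {'{'} -> "{", {'}'} -> "}"
      // Bare {name}: substitute if a param is supplied, else leave untouched.
      return name !== undefined && name in params ? params[name] : match;
    },
  );
}

// The escaped string from this diff renders back to the documented form:
console.log(renderMessage("Supports the ${'{'}var{'}'} placeholder format."));
// Supports the ${var} placeholder format.
```

Before the fix, a raw `${var}` in a message would have its `{var}` span treated as a named interpolation rather than literal documentation text.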
