en,zh: remove docs about tidb-binlog and tikv-importer #2672

Open · wants to merge 4 commits into base `master` · showing changes from 3 commits
3 changes: 0 additions & 3 deletions en/TOC.md
@@ -28,7 +28,6 @@
- [Deploy TiDB Across Multiple Kubernetes Clusters](deploy-tidb-cluster-across-multiple-kubernetes.md)
- [Deploy a Heterogeneous TiDB Cluster](deploy-heterogeneous-tidb-cluster.md)
- [Deploy TiCDC](deploy-ticdc.md)
- [Deploy TiDB Binlog](deploy-tidb-binlog.md)
- Monitor and Alert
- [Deploy Monitoring and Alerts for TiDB](monitor-a-tidb-cluster.md)
- [Monitor and Diagnose TiDB Using TiDB Dashboard](access-dashboard.md)
@@ -118,8 +117,6 @@
- [Required RBAC Rules](tidb-operator-rbac.md)
- Tools
- [TiDB Toolkit](tidb-toolkit.md)
- Configure
- [Configure tidb-drainer Chart](configure-tidb-binlog-drainer.md)
- [Log Collection](logs-collection.md)
- [Monitoring and Alert on Kubernetes](monitor-kubernetes.md)
- [PingCAP Clinic Diagnostic Data](clinic-data-collection.md)
2 changes: 1 addition & 1 deletion en/backup-by-ebs-snapshot-across-multiple-kubernetes.md
@@ -36,7 +36,7 @@ To initialize the restored volume more efficiently, it is recommended to **separ
- For TiKV configuration, do not set `resolved-ts.enable` to `false`, and do not set `raftstore.report-min-resolved-ts-interval` to `"0s"`. Otherwise, it can lead to backup failure.
- For PD configuration, do not set `pd-server.min-resolved-ts-persistence-interval` to `"0s"`. Otherwise, it can lead to backup failure.
- To use this backup method, the TiDB cluster must be deployed on AWS EC2 and use AWS EBS volumes.
- This backup method is currently not supported for TiFlash, TiCDC, DM, and TiDB Binlog nodes.
- This backup method is currently not supported for TiFlash, TiCDC, and DM nodes.

> **Note:**
>
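
The TiKV and PD requirements listed above amount to keeping the resolved-ts mechanism enabled. A minimal `TidbCluster` sketch that satisfies them might look like this (illustrative only; the `"1s"` intervals are example values, not text taken from this change):

```yaml
spec:
  tikv:
    config: |
      [resolved-ts]
      enable = true                            # must not be set to false
      [raftstore]
      report-min-resolved-ts-interval = "1s"   # must not be set to "0s"
  pd:
    config: |
      [pd-server]
      min-resolved-ts-persistence-interval = "1s"   # must not be set to "0s"
```
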
2 changes: 1 addition & 1 deletion en/backup-to-aws-s3-by-snapshot.md
@@ -30,7 +30,7 @@ If you have any other requirements, select an appropriate backup method based on
- For TiKV configuration, do not set [`resolved-ts.enable`](https://docs.pingcap.com/tidb/stable/tikv-configuration-file#enable-2) to `false`, and do not set [`raftstore.report-min-resolved-ts-interval`](https://docs.pingcap.com/tidb/stable/tikv-configuration-file#report-min-resolved-ts-interval-new-in-v600) to `"0s"`. Otherwise, it can lead to backup failure.
- For PD configuration, do not set [`pd-server.min-resolved-ts-persistence-interval`](https://docs.pingcap.com/tidb/stable/pd-configuration-file#min-resolved-ts-persistence-interval-new-in-v600) to `"0s"`. Otherwise, it can lead to backup failure.
- To use this backup method, the TiDB cluster must be deployed on AWS EKS and use AWS EBS volumes.
- This backup method is currently not supported for TiFlash, TiCDC, DM, and TiDB Binlog nodes.
- This backup method is currently not supported for TiFlash, TiCDC, and DM nodes.

> **Note:**
>
8 changes: 4 additions & 4 deletions en/configure-a-tidb-cluster.md
@@ -37,15 +37,15 @@ The cluster name can be configured by changing `metadata.name` in the `TiDBCuste

### Version

Usually, components in a cluster are in the same version. It is recommended to configure `spec.<pd/tidb/tikv/pump/tiflash/ticdc>.baseImage` and `spec.version`, if you need to configure different versions for different components, you can configure `spec.<pd/tidb/tikv/pump/tiflash/ticdc>.version`.
Usually, components in a cluster are of the same version. It is recommended to configure `spec.<pd/tidb/tikv/tiflash/ticdc>.baseImage` and `spec.version`. If you need to configure different versions for different components, you can configure `spec.<pd/tidb/tikv/tiflash/ticdc>.version`.

Here are the formats of the parameters:

- `spec.version`: the format is `imageTag`, such as `v8.5.0`

- `spec.<pd/tidb/tikv/pump/tiflash/ticdc>.baseImage`: the format is `imageName`, such as `pingcap/tidb`
- `spec.<pd/tidb/tikv/tiflash/ticdc>.baseImage`: the format is `imageName`, such as `pingcap/tidb`

- `spec.<pd/tidb/tikv/pump/tiflash/ticdc>.version`: the format is `imageTag`, such as `v8.5.0`
- `spec.<pd/tidb/tikv/tiflash/ticdc>.version`: the format is `imageTag`, such as `v8.5.0`
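
As an illustration of these formats, a `TidbCluster` that pins one cluster-wide version and overrides it for a single component might be sketched as follows (the `basic` name and the TiCDC override are placeholders, not text from the changed file):

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic
spec:
  version: v8.5.0              # imageTag shared by all components
  pd:
    baseImage: pingcap/pd      # imageName
  tikv:
    baseImage: pingcap/tikv
  tidb:
    baseImage: pingcap/tidb
  ticdc:
    baseImage: pingcap/ticdc
    version: v8.5.0            # per-component imageTag override
```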

### Recommended configuration

@@ -246,7 +246,7 @@ To mount multiple PVs for PD microservices (taking the `tso` microservice as an

### HostNetwork

For PD, TiKV, TiDB, TiFlash, TiProxy, TiCDC, and Pump, you can configure the Pods to use the host namespace [`HostNetwork`](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy).
For PD, TiKV, TiDB, TiFlash, TiProxy, and TiCDC, you can configure the Pods to use the host namespace [`HostNetwork`](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy).

To enable `HostNetwork` for all supported components, configure `spec.hostNetwork: true`.
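
As a rough sketch (TiKV is picked arbitrarily as the per-component example), the relevant part of the `TidbCluster` spec might look like:

```yaml
spec:
  hostNetwork: true          # enable for all supported components
  # tikv:
  #   hostNetwork: true      # alternatively, enable it for a single component only
```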

12 changes: 3 additions & 9 deletions en/configure-storage-class.md
@@ -6,7 +6,7 @@ aliases: ['/docs/tidb-in-kubernetes/dev/configure-storage-class/','/docs/dev/tid

# Persistent Storage Class Configuration on Kubernetes

TiDB cluster components such as PD, TiKV, TiDB monitoring, TiDB Binlog, and `tidb-backup` require persistent storage for data. To achieve this on Kubernetes, you need to use [PersistentVolume (PV)](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). Kubernetes supports different types of [storage classes](https://kubernetes.io/docs/concepts/storage/volumes/), which can be categorized into two main types:
TiDB cluster components such as PD, TiKV, TiDB monitoring, and BR require persistent storage for data. To achieve this on Kubernetes, you need to use [PersistentVolume (PV)](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). Kubernetes supports different types of [storage classes](https://kubernetes.io/docs/concepts/storage/volumes/), which can be categorized into two main types:

- Network storage

@@ -28,9 +28,9 @@ TiKV uses the Raft protocol to replicate data. When a node fails, PD automatical

PD also uses Raft to replicate data. PD is not an I/O-intensive application, but rather a database for storing cluster meta information. Therefore, a local SAS disk or network SSD storage such as EBS General Purpose SSD (gp2) volumes on AWS or SSD persistent disks on Google Cloud can meet the requirements.

To ensure availability, it is recommended to use network storage for components such as TiDB monitoring, TiDB Binlog, and `tidb-backup` because they do not have redundant replicas. TiDB Binlog's Pump and Drainer components are I/O-intensive applications that require low read and write latency, so it is recommended to use high-performance network storage such as EBS Provisioned IOPS SSD (io1) volumes on AWS or SSD persistent disks on Google Cloud.
To ensure availability, it is recommended to use network storage for components such as TiDB monitoring and BR because they do not have redundant replicas.

When deploying TiDB clusters or `tidb-backup` with TiDB Operator, you can configure the `StorageClass` for the components that require persistent storage via the corresponding `storageClassName` field in the `values.yaml` configuration file. The `StorageClassName` is set to `local-storage` by default.
When deploying TiDB clusters or BR with TiDB Operator, you can configure the `StorageClass` for the components that require persistent storage via the corresponding `storageClassName` field in the `values.yaml` configuration file. The `StorageClassName` is set to `local-storage` by default.
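
For example, a per-component `StorageClass` selection might be sketched as follows (the `ebs-gp3` class name and the storage sizes are placeholders; the chart's `values.yaml` exposes equivalent `storageClassName` fields):

```yaml
spec:
  pd:
    storageClassName: ebs-gp3          # placeholder class name
    requests:
      storage: 10Gi
  tikv:
    storageClassName: local-storage    # the default when nothing is set
    requests:
      storage: 100Gi
```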

## Network PV configuration

@@ -80,12 +80,6 @@ Currently, Kubernetes supports statically allocated local storage. To create a l
>
> The number of directories you create depends on the planned number of TiDB clusters. Each directory has a corresponding PV created, and each TiDB cluster's monitoring data uses one PV.

- For a disk that stores TiDB Binlog and backup data, follow the [steps](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) to mount the disk. First, create multiple directories on the disk and bind mount the directories into the `/mnt/backup` directory.

>**Note:**
>
> The number of directories you create depends on the planned number of TiDB clusters, the number of Pumps in each cluster, and your backup method. Each directory has a corresponding PV created, and each Pump and Drainer use one PV. All [Ad-hoc full backup](backup-to-s3.md#ad-hoc-full-backup-to-s3-compatible-storage) tasks and [scheduled full backup](backup-to-s3.md#scheduled-full-backup-to-s3-compatible-storage) tasks share one PV.

The `/mnt/ssd`, `/mnt/sharedssd`, `/mnt/monitoring`, and `/mnt/backup` directories mentioned above are discovery directories used by local-volume-provisioner. For each subdirectory in the discovery directory, local-volume-provisioner creates a corresponding PV.
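
A hedged excerpt of what the local-volume-provisioner `storageClassMap` could look like for these discovery directories (the class names are illustrative, not taken from this change):

```yaml
data:
  storageClassMap: |
    ssd-storage:
      hostDir: /mnt/ssd
      mountDir: /mnt/ssd
    monitoring-storage:
      hostDir: /mnt/monitoring
      mountDir: /mnt/monitoring
    backup-storage:
      hostDir: /mnt/backup
      mountDir: /mnt/backup
```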

### Step 2: Deploy local-volume-provisioner
53 changes: 0 additions & 53 deletions en/configure-tidb-binlog-drainer.md

This file was deleted.

3 changes: 0 additions & 3 deletions en/deploy-cluster-on-arm64.md
@@ -52,9 +52,6 @@ Before starting the process, make sure that Kubernetes clusters are deployed on
tikv:
baseImage: pingcap/tikv-arm64
# ...
pump:
baseImage: pingcap/tidb-binlog-arm64
# ...
ticdc:
baseImage: pingcap/ticdc-arm64
# ...
2 changes: 1 addition & 1 deletion en/deploy-failures.md
@@ -37,7 +37,7 @@ kubectl describe restores -n ${namespace} ${restore_name}

The Pending state of a Pod is usually caused by conditions of insufficient resources, for example:

- The `StorageClass` of the PVC used by PD, TiKV, TiFlash, Pump, Monitor, Backup, and Restore Pods does not exist or the PV is insufficient.
- The `StorageClass` of the PVC used by PD, TiKV, TiFlash, Monitor, Backup, and Restore Pods does not exist or the PV is insufficient.
- No nodes in the Kubernetes cluster can satisfy the CPU or memory resources requested by the Pod.
- The number of TiKV or PD replicas and the number of nodes in the cluster do not satisfy the high availability scheduling policy of tidb-scheduler.
- The certificates used by TiDB or TiProxy components are not configured.