Deploying the TiUP Component on the Control Machine
  GPYyDLfgzzIb · November 2, 2023

Log in to the control machine as a regular user. This guide uses the tidb user; all subsequent TiUP installation and cluster management operations are performed as that user:

  1. Run the following command to install the TiUP tool:

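The original post shows the install command only as a screenshot; the standard one-liner documented by PingCAP for the official TiUP mirror is:

```shell
# Download and run the TiUP installer from the official PingCAP mirror
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
```

The installer places `tiup` under `~/.tiup/bin` and adds it to `PATH` in the shell profile, which is why the next step re-sources `~/.bash_profile`.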


  2. Refresh the environment.

Re-declare the global environment variables:
[tidb@azkaban01 opt]$ source /home/tidb/.bash_profile
Confirm that the TiUP tool is installed:
[tidb@azkaban01 opt]$ which tiup



  3. Install the TiUP cluster component:

[tidb@azkaban01 opt]$ tiup cluster
tiup is checking updates for component cluster ...
A new version of cluster is available:
   The latest version:         v1.12.2
   Local installed version:    
   Update current component:   tiup update cluster
   Update all components:      tiup update --all

The component `cluster` version  is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.12.2-linux-amd64.tar.gz 8.68 MiB / 8.68 MiB 100.00% ? MiB/s                       
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.12.2/tiup-cluster
Deploy a TiDB cluster for production

Usage:
  tiup cluster [command]

Available Commands:
  check       Perform preflight checks for the cluster.
  deploy      Deploy a cluster for production
  start       Start a TiDB cluster
  stop        Stop a TiDB cluster
  restart     Restart a TiDB cluster
  scale-in    Scale in a TiDB cluster
  scale-out   Scale out a TiDB cluster
  destroy     Destroy a specified cluster
  clean       (EXPERIMENTAL) Cleanup a specified cluster
  upgrade     Upgrade a specified TiDB cluster
  display     Display information of a TiDB cluster
  prune       Destroy and remove instances that is in tombstone state
  list        List all clusters
  audit       Show audit log of cluster operation
  import      Import an exist TiDB cluster from TiDB-Ansible
  edit-config Edit TiDB cluster config
  show-config Show TiDB cluster config
  reload      Reload a TiDB cluster's config and restart if needed
  patch       Replace the remote package with a specified package and restart the service
  rename      Rename the cluster
  enable      Enable a TiDB cluster automatically at boot
  disable     Disable automatic enabling of TiDB clusters at boot
  replay      Replay previous operation and skip successed steps
  template    Print topology template
  tls         Enable/Disable TLS between TiDB components
  meta        backup/restore meta information
  rotatessh   rotate ssh keys on all nodes
  help        Help about any command
  completion  Generate the autocompletion script for the specified shell

Flags:
  -c, --concurrency int     max number of parallel tasks allowed (default 5)
      --format string       (EXPERIMENTAL) The format of output, available values are [default, json] (default "default")
  -h, --help                help for tiup
      --ssh string          (EXPERIMENTAL) The executor type: 'builtin', 'system', 'none'.
      --ssh-timeout uint    Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
  -v, --version             version for tiup
      --wait-timeout uint   Timeout in seconds to wait for an operation to complete, ignored for operations that don't fit. (default 120)
  -y, --yes                 Skip all confirmations and assumes 'yes'

Use "tiup cluster help [command]" for more information about a command.

If it is already installed, update the TiUP cluster component to the latest version:

tiup update --self && tiup update cluster

The expected output includes the message "Update successfully!".


  4. Verify the current TiUP cluster version. Run the following command to check the version of the TiUP cluster component:

tiup --binary cluster

  5. Deploy the cluster. With TiUP ready, run the deploy command against a prepared topology file:
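The deploy command below reads /opt/tidb/topology.yaml, which the post does not include. A minimal sketch consistent with the roles and hosts in the confirmation table further down might look like this (hypothetical reconstruction, written to /tmp here to avoid clobbering a real file):

```shell
# Hypothetical minimal topology matching the pd/tikv/tidb/tiflash/monitoring
# layout that `tiup cluster deploy` prints in its confirmation table.
cat > /tmp/topology.yaml <<'EOF'
global:
  user: "tidb"
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
pd_servers:
  - host: 39.101.72.116
  - host: 47.243.20.211
  - host: 8.218.213.164
tikv_servers:
  - host: 39.101.72.116
  - host: 47.243.20.211
  - host: 8.218.213.164
tidb_servers:
  - host: 39.101.72.116
tiflash_servers:
  - host: 39.101.72.116
monitoring_servers:
  - host: 47.243.20.211
grafana_servers:
  - host: 47.243.20.211
alertmanager_servers:
  - host: 47.243.20.211
EOF
```

All ports are left at their defaults (pd 2379, tikv 20160, tidb 4000, and so on), which matches the port columns in the confirmation table.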

[tidb@azkaban01 ~]$ tiup cluster deploy tidb-online  v5.4.3 /opt/tidb/topology.yaml --user tidb -p
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.12.2/tiup-cluster deploy tidb-online v5.4.3 /opt/tidb/topology.yaml --user tidb -p
Input SSH password: 



+ Detect CPU Arch Name
  - Detecting node 39.101.72.116 Arch info ... Error
  - Detecting node 47.243.20.211 Arch info ... Error
  - Detecting node 8.218.213.164 Arch info ... Done

Error: failed to fetch cpu-arch or kernel-name: executor.ssh.execute_failed: Failed to execute command over SSH for 'tidb@39.101.72.116:22' {ssh_stderr: sudo: /usr/bin/sudo must be owned by uid 0 and have the setuid bit set
, ssh_stdout: , ssh_command: export LANG=C; PATH=$PATH:/bin:/sbin:/usr/bin:/usr/sbin /usr/bin/sudo -H bash -c "uname -m"}, cause: Process exited with status 1

Verbose debug logs has been written to /home/tidb/.tiup/logs/tiup-cluster-debug-2023-05-25-11-20-39.log.
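The first attempt fails because /usr/bin/sudo on nodes 39.101.72.116 and 47.243.20.211 is not owned by root or lacks the setuid bit. The retry below sidesteps this by deploying with --user root; the direct fix, assuming the sudo binary itself is intact and only its metadata is wrong, would be:

```shell
# Check ownership and mode of sudo; a healthy install is owner root, mode 4755.
stat -c '%U %a' /usr/bin/sudo 2>/dev/null || echo "sudo not found"
# If the check shows anything else, repair as root on the affected node:
#   chown root:root /usr/bin/sudo
#   chmod 4755 /usr/bin/sudo   # the leading 4 sets the setuid bit
```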
[tidb@azkaban01 ~]$ tiup cluster deploy tidb-online  v5.4.3 /opt/tidb/topology.yaml --user root -p
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.12.2/tiup-cluster deploy tidb-online v5.4.3 /opt/tidb/topology.yaml --user root -p
Input SSH password: 



+ Detect CPU Arch Name
  - Detecting node 39.101.72.116 Arch info ... Done
  - Detecting node 47.243.20.211 Arch info ... Done
  - Detecting node 8.218.213.164 Arch info ... Done



+ Detect CPU OS Name
  - Detecting node 39.101.72.116 OS info ... Done
  - Detecting node 47.243.20.211 OS info ... Done
  - Detecting node 8.218.213.164 OS info ... Done
Please confirm your topology:
Cluster type:    tidb
Cluster name:    tidb-online
Cluster version: v5.4.3
Role          Host           Ports                            OS/Arch       Directories
----          ----           -----                            -------       -----------
pd            39.101.72.116  2379/2380                        linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd            47.243.20.211  2379/2380                        linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd            8.218.213.164  2379/2380                        linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv          39.101.72.116  20160/20180                      linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          47.243.20.211  20160/20180                      linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          8.218.213.164  20160/20180                      linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tidb          39.101.72.116  4000/10080                       linux/x86_64  /tidb-deploy/tidb-4000
tiflash       39.101.72.116  9000/8123/3930/20170/20292/8234  linux/x86_64  /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
prometheus    47.243.20.211  9090/12020                       linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana       47.243.20.211  3000                             linux/x86_64  /tidb-deploy/grafana-3000
alertmanager  47.243.20.211  9093/9094                        linux/x86_64  /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v5.4.3 (linux/amd64) ... Done
  - Download tikv:v5.4.3 (linux/amd64) ... Done
  - Download tidb:v5.4.3 (linux/amd64) ... Done
  - Download tiflash:v5.4.3 (linux/amd64) ... Done
  - Download prometheus:v5.4.3 (linux/amd64) ... Done
  - Download grafana:v5.4.3 (linux/amd64) ... Done
  - Download alertmanager: (linux/amd64) ... Done
  - Download node_exporter: (linux/amd64) ... Done
  - Download blackbox_exporter: (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 8.218.213.164:22 ... Done
  - Prepare 39.101.72.116:22 ... Done
  - Prepare 47.243.20.211:22 ... Done
+ Deploy TiDB instance
+ Deploy TiDB instance
  - Copy pd -> 39.101.72.116 ... Done
  - Copy pd -> 47.243.20.211 ... Done
  - Copy pd -> 8.218.213.164 ... Done
  - Copy tikv -> 39.101.72.116 ... Done
  - Copy tikv -> 47.243.20.211 ... Done
  - Copy tikv -> 8.218.213.164 ... Done
  - Copy tidb -> 39.101.72.116 ... Done
  - Copy tiflash -> 39.101.72.116 ... Done
  - Copy prometheus -> 47.243.20.211 ... Done
  - Copy grafana -> 47.243.20.211 ... Done
  - Copy alertmanager -> 47.243.20.211 ... Done
  - Deploy node_exporter -> 39.101.72.116 ... Done
  - Deploy node_exporter -> 47.243.20.211 ... Done
  - Deploy node_exporter -> 8.218.213.164 ... Done
  - Deploy blackbox_exporter -> 39.101.72.116 ... Done
  - Deploy blackbox_exporter -> 47.243.20.211 ... Done
  - Deploy blackbox_exporter -> 8.218.213.164 ... Done
+ Copy certificate to remote host
+ Init instance configs
  - Generate config pd -> 39.101.72.116:2379 ... Done
  - Generate config pd -> 47.243.20.211:2379 ... Done
  - Generate config pd -> 8.218.213.164:2379 ... Done
  - Generate config tikv -> 39.101.72.116:20160 ... Done
  - Generate config tikv -> 47.243.20.211:20160 ... Done
  - Generate config tikv -> 8.218.213.164:20160 ... Done
  - Generate config tidb -> 39.101.72.116:4000 ... Done
  - Generate config tiflash -> 39.101.72.116:9000 ... Done
  - Generate config prometheus -> 47.243.20.211:9090 ... Done
  - Generate config grafana -> 47.243.20.211:3000 ... Done
  - Generate config alertmanager -> 47.243.20.211:9093 ... Done
+ Init monitor configs
  - Generate config node_exporter -> 39.101.72.116 ... Done
  - Generate config node_exporter -> 47.243.20.211 ... Done
  - Generate config node_exporter -> 8.218.213.164 ... Done
  - Generate config blackbox_exporter -> 39.101.72.116 ... Done
  - Generate config blackbox_exporter -> 47.243.20.211 ... Done
  - Generate config blackbox_exporter -> 8.218.213.164 ... Done
Enabling component pd
        Enabling instance 8.218.213.164:2379
        Enabling instance 39.101.72.116:2379
        Enabling instance 47.243.20.211:2379
        Enable instance 39.101.72.116:2379 success
        Enable instance 8.218.213.164:2379 success
        Enable instance 47.243.20.211:2379 success
Enabling component tikv
        Enabling instance 8.218.213.164:20160
        Enabling instance 39.101.72.116:20160
        Enabling instance 47.243.20.211:20160
        Enable instance 39.101.72.116:20160 success
        Enable instance 47.243.20.211:20160 success
        Enable instance 8.218.213.164:20160 success
Enabling component tidb
        Enabling instance 39.101.72.116:4000
        Enable instance 39.101.72.116:4000 success
Enabling component tiflash
        Enabling instance 39.101.72.116:9000
        Enable instance 39.101.72.116:9000 success
Enabling component prometheus
        Enabling instance 47.243.20.211:9090
        Enable instance 47.243.20.211:9090 success
Enabling component grafana
        Enabling instance 47.243.20.211:3000
        Enable instance 47.243.20.211:3000 success
Enabling component alertmanager
        Enabling instance 47.243.20.211:9093
        Enable instance 47.243.20.211:9093 success
Enabling component node_exporter
        Enabling instance 39.101.72.116
        Enabling instance 47.243.20.211
        Enabling instance 8.218.213.164
        Enable 39.101.72.116 success
        Enable 8.218.213.164 success
        Enable 47.243.20.211 success
Enabling component blackbox_exporter
        Enabling instance 39.101.72.116
        Enabling instance 47.243.20.211
        Enabling instance 8.218.213.164
        Enable 39.101.72.116 success
        Enable 8.218.213.164 success
        Enable 47.243.20.211 success
Cluster `tidb-online` deployed successfully, you can start it with command: `tiup cluster start tidb-online --init`
[tidb@azkaban01 ~]$ 
[tidb@azkaban01 ~]$ tiup cluster start tidb-online --init
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.12.2/tiup-cluster start tidb-online --init
Starting cluster tidb-online...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-online/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-online/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=root, host=47.243.20.211
+ [Parallel] - UserSSH: user=root, host=8.218.213.164
+ [Parallel] - UserSSH: user=root, host=39.101.72.116
+ [Parallel] - UserSSH: user=root, host=39.101.72.116
+ [Parallel] - UserSSH: user=root, host=47.243.20.211
+ [Parallel] - UserSSH: user=root, host=47.243.20.211
+ [Parallel] - UserSSH: user=root, host=47.243.20.211
+ [Parallel] - UserSSH: user=root, host=39.101.72.116
+ [Parallel] - UserSSH: user=root, host=47.243.20.211
+ [Parallel] - UserSSH: user=root, host=8.218.213.164
+ [Parallel] - UserSSH: user=root, host=39.101.72.116
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance 8.218.213.164:2379
        Starting instance 39.101.72.116:2379
        Starting instance 47.243.20.211:2379
        Start instance 39.101.72.116:2379 success
        Start instance 8.218.213.164:2379 success
        Start instance 47.243.20.211:2379 success
Starting component tikv
        Starting instance 8.218.213.164:20160
        Starting instance 47.243.20.211:20160
        Starting instance 39.101.72.116:20160
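The start transcript is truncated above. Once `start --init` finishes, two follow-ups are worth knowing (hypothetical session; both subcommands appear in the help text earlier):

```shell
# Confirm every instance reports Up
tiup cluster display tidb-online
# `start --init` generates a one-time random root password for TiDB and prints
# it at the end of the start output; use it for the first SQL login, e.g.:
# mysql -h 39.101.72.116 -P 4000 -u root -p
```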
Last edited: November 8, 2023
