Greenplum Monitoring Tool: Greenplum-CC-web 4.3 Installation and Usage
2022-06-06 14:33:36

1 Overview

This document describes installing the official monitoring tool, Greenplum Command Center, after a Greenplum 5.10 cluster has been set up.

2 About the Software

The official documentation is at: https://gpcc.docs.pivotal.io/pdf/GPCC-431-guide.pdf

Before choosing a package, confirm the Greenplum cluster version:

[gpadmin@mdw ~]$ gpstate -s|grep -i ' Greenplum Version'
20200424:12:07:08:005618 gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.10.2 build commit:b3c02f3acd880e2d676dacea36be015e4a3826d4'
20200424:12:07:08:005618 gpstate:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.3.23 (Greenplum Database 5.10.2 build commit:b3c02f3acd880e2d676dacea36be015e4a3826d4) on x86_64-pc-linux-gnu, compiled by GCC gcc (GCC) 6.2.0, 64-bit compiled on Aug 10 2018 07:30:24'
[gpadmin@mdw ~]$ ps -ef|grep gpmon

Then go to the official website to download it:

https://network.pivotal.io/

(Screenshot: Pivotal Network download page)

https://network.pivotal.io/products/gpdb-command-center

(Screenshot: Greenplum Command Center product page)

3 Installation Preparation

3.1 Activate the gpmmon agent

3.1.1 Log in as gpadmin (master node only)

[root@mdw ~]# su - gpadmin
Last login: Fri Apr 24 12:05:57 CST 2020 on pts/0

3.1.2 Source the environment variables

$ source /usr/local/greenplum-db/greenplum_path.sh

3.1.3 gpperfmon_install

$ gpperfmon_install --enable --password changeme --port 5432
[gpadmin@mdw ~]$ gpperfmon_install --enable --password changeme --port 5432
20200424:17:03:07:014015 gpperfmon_install:mdw:gpadmin-[INFO]:-createdb gpperfmon >& /dev/null
20200424:17:03:08:014015 gpperfmon_install:mdw:gpadmin-[INFO]:-PGPORT=5432 psql -f /usr/local/greenplum-db/./lib/gpperfmon/gpperfmon.sql gpperfmon >& /dev/null
20200424:17:03:09:014015 gpperfmon_install:mdw:gpadmin-[INFO]:-PGPORT=5432 psql template1 -c "DROP ROLE IF EXISTS gpmon"  >& /dev/null
20200424:17:03:09:014015 gpperfmon_install:mdw:gpadmin-[INFO]:-PGPORT=5432 psql template1 -c "CREATE ROLE gpmon WITH SUPERUSER CREATEDB LOGIN ENCRYPTED PASSWORD 'changeme'"  >& /dev/null
20200424:17:03:09:014015 gpperfmon_install:mdw:gpadmin-[INFO]:-echo "local    gpperfmon         gpmon         md5" >> /greenplum/gpdata/master/gpseg-1/pg_hba.conf
20200424:17:03:09:014015 gpperfmon_install:mdw:gpadmin-[INFO]:-echo "host     all         gpmon         127.0.0.1/28    md5" >> /greenplum/gpdata/master/gpseg-1/pg_hba.conf
20200424:17:03:09:014015 gpperfmon_install:mdw:gpadmin-[INFO]:-echo "host     all         gpmon         ::1/128    md5" >> /greenplum/gpdata/master/gpseg-1/pg_hba.conf
20200424:17:03:09:014015 gpperfmon_install:mdw:gpadmin-[INFO]:-touch /home/gpadmin/.pgpass >& /dev/null
20200424:17:03:09:014015 gpperfmon_install:mdw:gpadmin-[INFO]:-mv -f /home/gpadmin/.pgpass /home/gpadmin/.pgpass.1587718987 >& /dev/null
20200424:17:03:09:014015 gpperfmon_install:mdw:gpadmin-[INFO]:-echo "*:5432:gpperfmon:gpmon:changeme" >> /home/gpadmin/.pgpass
20200424:17:03:09:014015 gpperfmon_install:mdw:gpadmin-[INFO]:-cat /home/gpadmin/.pgpass.1587718987 >> /home/gpadmin/.pgpass
20200424:17:03:09:014015 gpperfmon_install:mdw:gpadmin-[INFO]:-chmod 0600 /home/gpadmin/.pgpass >& /dev/null
20200424:17:03:09:014015 gpperfmon_install:mdw:gpadmin-[INFO]:-PGPORT=5432 gpconfig -c gp_enable_gpperfmon -v on >& /dev/null
20200424:17:03:10:014015 gpperfmon_install:mdw:gpadmin-[INFO]:-PGPORT=5432 gpconfig -c gpperfmon_port -v 8888 >& /dev/null
20200424:17:03:10:014015 gpperfmon_install:mdw:gpadmin-[INFO]:-PGPORT=5432 gpconfig -c gp_external_enable_exec -v on --masteronly >& /dev/null
20200424:17:03:11:014015 gpperfmon_install:mdw:gpadmin-[INFO]:-PGPORT=5432 gpconfig -c gpperfmon_log_alert_level -v warning >& /dev/null
20200424:17:03:12:014015 gpperfmon_install:mdw:gpadmin-[INFO]:-gpperfmon will be enabled after a full restart of GPDB

This command ships with the standard Greenplum installation. If you compiled Greenplum from source, you must pass the --enable-gpperfmon flag at build time, otherwise the command will not exist.
Roughly, it does the following (a verification sketch follows this list):
Creates the monitoring database (gpperfmon)
Creates the monitoring database role (gpmon)
Configures the database to accept connections from the perfmon agents (pg_hba.conf and .pgpass)
Sets parameters in postgresql.conf to enable monitoring (these parameters are appended at the end of the file)
Appends entries to pg_hba.conf, as shown in the log output above
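
To double-check those changes, a minimal verification sketch, run as gpadmin on the master (gpconfig -s prints a parameter's configured value; the expected values follow from the log above):

$ gpconfig -s gp_enable_gpperfmon                # expect 'on' (effective after the restart in the next step)
$ gpconfig -s gpperfmon_port                     # agent port, 8888 by default
$ tail -n 3 $MASTER_DATA_DIRECTORY/pg_hba.conf   # the gpmon entries appended by gpperfmon_install
$ cat ~/.pgpass                                  # *:5432:gpperfmon:gpmon:changeme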

3.1.4 Restart the cluster to activate the agent

gpstop -r
[gpadmin@mdw ~]$ gpstop -r
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-Starting gpstop with args: -r
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-Gathering information and validating the environment...
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 5.10.2 build commit:b3c02f3acd880e2d676dacea36be015e4a3826d4'
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:---------------------------------------------
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-Master instance parameters
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:---------------------------------------------
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-   Master Greenplum instance process active PID   = 20984
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-   Database                                       = template1
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-   Master port                                    = 5432
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-   Master directory                               = /greenplum/gpdata/master/gpseg-1
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-   Shutdown mode                                  = smart
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-   Timeout                                        = 120
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-   Shutdown Master standby host                   = On
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:---------------------------------------------
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-Segment instances that will be shutdown:
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:---------------------------------------------
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-   Host   Datadir                             Port    Status
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-   sdw1   /greenplum/gpdata/primary1/gpseg0   40000   u
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-   sdw2   /greenplum/gpdata/mirror1/gpseg0    50000   u
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-   sdw1   /greenplum/gpdata/primary2/gpseg1   40001   u
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-   sdw2   /greenplum/gpdata/mirror2/gpseg1    50001   u
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-   sdw2   /greenplum/gpdata/primary1/gpseg2   40000   u
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-   sdw3   /greenplum/gpdata/mirror1/gpseg2    50000   u
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-   sdw2   /greenplum/gpdata/primary2/gpseg3   40001   u
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-   sdw3   /greenplum/gpdata/mirror2/gpseg3    50001   u
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-   sdw3   /greenplum/gpdata/primary1/gpseg4   40000   u
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-   sdw1   /greenplum/gpdata/mirror1/gpseg4    50000   u
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-   sdw3   /greenplum/gpdata/primary2/gpseg5   40001   u
20200424:17:03:40:014393 gpstop:mdw:gpadmin-[INFO]:-   sdw1   /greenplum/gpdata/mirror2/gpseg5    50001   u

Continue with Greenplum instance shutdown Yy|Nn (default=N):
> y
20200424:17:03:42:014393 gpstop:mdw:gpadmin-[INFO]:-There are 0 connections to the database
20200424:17:03:42:014393 gpstop:mdw:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='smart'
20200424:17:03:42:014393 gpstop:mdw:gpadmin-[INFO]:-Master host=mdw
20200424:17:03:42:014393 gpstop:mdw:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=smart
20200424:17:03:42:014393 gpstop:mdw:gpadmin-[INFO]:-Master segment instance directory=/greenplum/gpdata/master/gpseg-1
20200424:17:03:43:014393 gpstop:mdw:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20200424:17:03:43:014393 gpstop:mdw:gpadmin-[INFO]:-Terminating processes for segment /greenplum/gpdata/master/gpseg-1
20200424:17:03:43:014393 gpstop:mdw:gpadmin-[INFO]:-Stopping master standby host smdw mode=fast
20200424:17:03:45:014393 gpstop:mdw:gpadmin-[INFO]:-Successfully shutdown standby process on smdw
20200424:17:03:45:014393 gpstop:mdw:gpadmin-[INFO]:-Targeting dbid [2, 8, 3, 9, 4, 10, 5, 11, 6, 12, 7, 13] for shutdown
20200424:17:03:45:014393 gpstop:mdw:gpadmin-[INFO]:-Commencing parallel primary segment instance shutdown, please wait...
20200424:17:03:45:014393 gpstop:mdw:gpadmin-[INFO]:-0.00% of jobs completed
20200424:17:03:47:014393 gpstop:mdw:gpadmin-[INFO]:-100.00% of jobs completed
20200424:17:03:47:014393 gpstop:mdw:gpadmin-[INFO]:-Commencing parallel mirror segment instance shutdown, please wait...
20200424:17:03:47:014393 gpstop:mdw:gpadmin-[INFO]:-0.00% of jobs completed
20200424:17:03:48:014393 gpstop:mdw:gpadmin-[INFO]:-100.00% of jobs completed
20200424:17:03:48:014393 gpstop:mdw:gpadmin-[INFO]:-----------------------------------------------------
20200424:17:03:48:014393 gpstop:mdw:gpadmin-[INFO]:-   Segments stopped successfully      = 12
20200424:17:03:48:014393 gpstop:mdw:gpadmin-[INFO]:-   Segments with errors during stop   = 0
20200424:17:03:48:014393 gpstop:mdw:gpadmin-[INFO]:-----------------------------------------------------
20200424:17:03:48:014393 gpstop:mdw:gpadmin-[INFO]:-Successfully shutdown 12 of 12 segment instances 
20200424:17:03:48:014393 gpstop:mdw:gpadmin-[INFO]:-Database successfully shutdown with no errors reported
20200424:17:03:48:014393 gpstop:mdw:gpadmin-[INFO]:-Cleaning up leftover gpmmon process
20200424:17:03:48:014393 gpstop:mdw:gpadmin-[INFO]:-No leftover gpmmon process found
20200424:17:03:48:014393 gpstop:mdw:gpadmin-[INFO]:-Cleaning up leftover gpsmon processes
20200424:17:03:49:014393 gpstop:mdw:gpadmin-[INFO]:-No leftover gpsmon processes on some hosts. not attempting forceful termination on these hosts
20200424:17:03:49:014393 gpstop:mdw:gpadmin-[INFO]:-Cleaning up leftover shared memory
20200424:17:03:49:014393 gpstop:mdw:gpadmin-[INFO]:-Restarting System...

[gpadmin@mdw ~]$ gpstate
20200424:17:04:19:014760 gpstate:mdw:gpadmin-[INFO]:-Starting gpstate with args: 
20200424:17:04:19:014760 gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 5.10.2 build commit:b3c02f3acd880e2d676dacea36be015e4a3826d4'
20200424:17:04:19:014760 gpstate:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.3.23 (Greenplum Database 5.10.2 build commit:b3c02f3acd880e2d676dacea36be015e4a3826d4) on x86_64-pc-linux-gnu, compiled by GCC gcc (GCC) 6.2.0, 64-bit compiled on Aug 10 2018 07:30:24'
20200424:17:04:19:014760 gpstate:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20200424:17:04:19:014760 gpstate:mdw:gpadmin-[INFO]:-Gathering data from segments...
. 
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-Greenplum instance status summary
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Master instance                                           = Active
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Master standby                                            = smdw
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Standby master state                                      = Standby host passive
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total segment instance count from metadata                = 12
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Primary Segment Status
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total primary segments                                    = 6
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total primary segment valid (at master)                   = 6
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total primary segment failures (at master)                = 0
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total number of postmaster.pid files missing              = 0
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total number of postmaster.pid files found                = 6
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs missing               = 0
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found                 = 6
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total number of /tmp lock files missing                   = 0
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total number of /tmp lock files found                     = 6
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total number postmaster processes missing                 = 0
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total number postmaster processes found                   = 6
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Mirror Segment Status
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total mirror segments                                     = 6
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total mirror segment valid (at master)                    = 6
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total mirror segment failures (at master)                 = 0
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total number of postmaster.pid files missing              = 0
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total number of postmaster.pid files found                = 6
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs missing               = 0
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found                 = 6
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total number of /tmp lock files missing                   = 0
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total number of /tmp lock files found                     = 6
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total number postmaster processes missing                 = 0
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total number postmaster processes found                   = 6
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total number mirror segments acting as primary segments   = 0
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-   Total number mirror segments acting as mirror segments    = 6
20200424:17:04:20:014760 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------

3.1.5 Check the processes

$ ps -ef | grep gpmmon
[gpadmin@mdw ~]$ ps -ef | grep gpmmon
gpadmin  14641 14632  0 17:03 ?        00:00:00 /usr/local/greenplum-db-5.10.2/bin/gpmmon -D /greenplum/gpdata/master/gpseg-1/gpperfmon/conf/gpperfmon.conf -p 5432
gpadmin  14854 13716  0 17:04 pts/0    00:00:00 grep --color=auto gpmmon
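
The gpsmon agents on the segment hosts can be spot-checked the same way. A sketch using gpssh and the host list file /home/gpadmin/gpconfig/all_host referenced later in this document (the [g] bracket keeps grep from matching its own process):

$ gpssh -f /home/gpadmin/gpconfig/all_host -e 'ps -ef | grep [g]psmon'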

3.1.6 Confirm that data is being written to the Performance Monitor database, i.e. that records are arriving.

[gpadmin@mdw ~]$ psql -d gpperfmon -c 'select * from system_now;'
        ctime        | hostname |  mem_total  |  mem_used  | mem_actual_used | mem_actual_free | swap_total  | swap_used | swap_page_in | swap_page_out | cpu_user |
 cpu_sys | cpu_idle | load0 | load1 | load2 | quantum | disk_ro_rate | disk_wo_rate | disk_rb_rate | disk_wb_rate | net_rp_rate | net_wp_rate | net_rb_rate | net_wb
_rate 
---------------------+----------+-------------+------------+-----------------+-----------------+-------------+-----------+--------------+---------------+----------+
---------+----------+-------+-------+-------+---------+--------------+--------------+--------------+--------------+-------------+-------------+-------------+-------
------
 2020-04-24 17:05:15 | mdw      | 16615038976 | 6392963072 |      1319911424 |     15295127552 | 34355539968 |         0 |            0 |             0 |      0.1 |
    0.12 |    99.78 |     0 |  0.01 |  0.05 |      15 |            0 |            1 |            0 |         1566 |          12 |          10 |        2322 |       
 1988
 2020-04-24 17:05:15 | sdw1     | 16615038976 | 3556290560 |      1227321344 |     15387717632 | 34355539968 |         0 |            0 |             0 |     0.05 |
    0.17 |    99.78 |     0 |  0.01 |  0.05 |      15 |            1 |            1 |         4357 |         4357 |          10 |           6 |        4686 |       
 4432
 2020-04-24 17:05:15 | sdw2     | 16615038976 | 3644481536 |      1189396480 |     15425642496 | 68715278336 |         0 |            0 |             0 |     0.03 |
    0.12 |    99.85 |  0.01 |  0.02 |  0.05 |      15 |            1 |            1 |         4357 |         7080 |          10 |           6 |        4686 |       
 4432
 2020-04-24 17:05:15 | sdw3     | 16615038976 | 4537630720 |      1250275328 |     15364763648 | 34355539968 |         0 |            0 |             0 |     0.12 |
    0.48 |     99.4 |  0.02 |  0.03 |  0.05 |      15 |            1 |            1 |         4357 |         6978 |          12 |           8 |        5067 |       
 4769
(4 rows)
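
Samples also accumulate in the history tables; to confirm rows keep arriving over time, a quick sketch (system_history is a standard gpperfmon table with the same columns as system_now):

$ psql -d gpperfmon -c "SELECT ctime, hostname, cpu_user, cpu_sys FROM system_history ORDER BY ctime DESC LIMIT 4;"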

3.2 Configure the standby master

3.2.1 Copy configuration files from the master host to the corresponding directories on the standby master

gpscp -h smdw $MASTER_DATA_DIRECTORY/pg_hba.conf =:$MASTER_DATA_DIRECTORY/

gpscp -h smdw ~/.pgpass =:~/

Note: the permissions must be 600.

Copy $MASTER_DATA_DIRECTORY/pg_hba.conf from the primary master to the corresponding data directory on the standby master.

Copy ~/.pgpass from the primary master to gpadmin's home directory on the standby master, and mind the file permissions:

chmod 0600 ~/.pgpass
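
To confirm both files landed on the standby with the expected mode, a sketch using gpssh (this assumes MASTER_DATA_DIRECTORY is also set in gpadmin's environment on smdw):

$ gpssh -h smdw -e 'ls -l $MASTER_DATA_DIRECTORY/pg_hba.conf ~/.pgpass'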


[gpadmin@mdw ~]$ gpscp -h smdw $MASTER_DATA_DIRECTORY/pg_hba.conf =:$MASTER_DATA_DIRECTORY/
[OUT smdw] Warning: the ECDSA host key for 'smdw' differs from the key for the IP address '10.102.254.26'

[OUT smdw] Offending key for IP in /home/gpadmin/.ssh/known_hosts:1

[OUT smdw] Matching host key in /home/gpadmin/.ssh/known_hosts:9

[gpadmin@mdw ~]$ 
[gpadmin@mdw ~]$ gpscp -h smdw ~/.pgpass =:~/
[OUT smdw] Warning: the ECDSA host key for 'smdw' differs from the key for the IP address '10.102.254.26'

[OUT smdw] Offending key for IP in /home/gpadmin/.ssh/known_hosts:1

[OUT smdw] Matching host key in /home/gpadmin/.ssh/known_hosts:9

[gpadmin@mdw ~]$ 

4 Install the Performance Monitor Console

4.1 Prerequisites

Before installing, make sure that (a port check is sketched after this list):

Greenplum is up and running normally
MASTER_DATA_DIRECTORY is set in the environment (~/.bashrc)
The gpperfmon database and gpmon user have been created, and the gpperfmon agents are running
The GPCC external service port is free (default: 28080)
The GPCC internal port 8899 is free
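
A minimal sketch of the port check (ss is assumed to be available; substitute netstat -lntp on older systems):

$ ss -lnt | grep -E ':(28080|8899)\b' || echo "ports 28080 and 8899 are free"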

4.2 Unzip the package and run the installer

[root@mdw mpp510]# ls
greenplum-cc-web-4.3.1-LINUX-x86_64.zip  greenplum-db  greenplum-db-5.10.2-rhel6-x86_64.bin  greenplum-db-5.10.2-rhel6-x86_64.zip  subnet1.out  yes
[root@mdw mpp510]# unzip greenplum-cc-web-4.3.1-LINUX-x86_64.zip 
Archive:  greenplum-cc-web-4.3.1-LINUX-x86_64.zip
   creating: greenplum-cc-web-4.3.1-LINUX-x86_64/
  inflating: greenplum-cc-web-4.3.1-LINUX-x86_64/gpccinstall-4.3.1 
$ source /usr/local/greenplum-db/greenplum_path.sh
$ ./gpccinstall-4.3.1 -W
 
 
I HAVE READ AND AGREE TO THE TERMS OF THE ABOVE PIVOTAL GREENPLUM DATABASE
END USER LICENSE AGREEMENT.

You must type YES to continue the installation.

The default GPCC installation path is /usr/local/greenplum-cc-web-**; type YES to confirm it, or enter the desired installation path to install elsewhere. The installer then asks whether to enable SSL:

Would you like enable SSL? Yy/Nn (Default=N)


Installation in progress...
Fail on host:  sdw3
Error when remove remote binary on smdw 
Error when remove remote binary on sdw2 
Error when remove remote binary on mdw 
Error when remove remote binary on sdw3 rm: cannot remove ‘/tmp/gpcc.tar.gz’: No such file or directory

Error when remove remote binary on sdw1
Error handling: the installer was run as root, and root does not yet have passwordless SSH to all hosts, so the remote cleanup of /tmp/gpcc.tar.gz failed. Exchange SSH keys as root and rerun the installer:


[root@mdw greenplum-cc-web-4.3.1-LINUX-x86_64]# gpssh-exkeys -f /home/gpadmin/gpconfig/all_host
[STEP 1 of 5] create local ID and authorize on local host
  ... /root/.ssh/id_rsa file exists ... key generation skipped

[STEP 2 of 5] keyscan all hosts and update known_hosts file

[STEP 3 of 5] authorize current user on remote hosts
  ... send to sdw1
  ... send to sdw2
  ... send to sdw3

[STEP 4 of 5] determine common authentication file content

[STEP 5 of 5] copy authentication files to all remote hosts
  ... finished key exchange with sdw1
  ... finished key exchange with sdw2
  ... finished key exchange with sdw3

[INFO] completed successfully
[root@mdw greenplum-cc-web-4.3.1-LINUX-x86_64]# 


4.3 Fix ownership

[root@mdw gpdb]# chown -R gpadmin:gpadmin /usr/local/greenplum-cc-web-4.3.1

4.4 Edit gpadmin's .bashrc on the master host and add:

$ vim ~/.bashrc

source /usr/local/greenplum-cc-web-4.3.1/gpcc_path.sh
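
After re-sourcing .bashrc, a quick sanity check that the gpcc binary is now on the PATH:

$ source ~/.bashrc
$ which gpcc        # should resolve under /usr/local/greenplum-cc-web-4.3.1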

4.5 Check the status

[gpadmin@mdw ~]$ gpcc status
2020/04/24 18:06:56 pq: no pg_hba.conf entry for host "10.102.254.27", user "gpmon", database "gpperfmon", SSL off
[gpadmin@mdw ~]$ 
The error means pg_hba.conf has no entry allowing gpmon to connect from that address. Add lines like the following to $MASTER_DATA_DIRECTORY/pg_hba.conf:

host    gpperfmon       gpmon   ::1/128             trust
host    gpperfmon       gpmon   10.102.254.27/32    trust

Once the lines are added, issue gpstop -u for the changes to take effect immediately, without a database restart, then retry the Command Center setup; it should now succeed.
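
A consolidated sketch of the fix, assuming the master's address is 10.102.254.27 as reported in the error above:

$ echo "host    gpperfmon    gpmon    10.102.254.27/32    trust" >> $MASTER_DATA_DIRECTORY/pg_hba.conf
$ gpstop -u        # reload pg_hba.conf without restarting the cluster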

gpcc start
gpcc status

[gpadmin@mdw gpAdminLogs]$ gpcc start
Starting the gpcc agents and webserver…
2020/04/24 18:50:14 Agent successfully started on 5/5 hosts
2020/04/24 18:50:14 View Greenplum Command Center at http://mdw:28080

4.6 Test the Performance Monitor front-end connection

Open a browser and go to the Performance Monitor console on the master host (10.102.254.27):

http://mdw:28080

Source: https://www.modb.pro/db/24842 (original article: https://blog.csdn.net/u011563666/article/details/97804497, CC 4.0 BY-SA)
